
Hybrid GMDH-type modeling for nonlinear systems: Synergism to intelligent identification




Advances in Engineering Software 40 (2009) 1087–1094

Contents lists available at ScienceDirect

Advances in Engineering Software

journal homepage: www.elsevier.com/locate/advengsoft


Dongwon Kim a,c,*, Sam-Jun Seo b, Gwi-Tae Park c

a Department of Electrical Engineering and Computer Sciences, University of California Berkeley, CA 94720, United States
b Department of Electrical and Electronic Engineering, Anyang University, Republic of Korea
c Department of Electrical Engineering, Korea University, Republic of Korea


Article history:
Received 14 July 2006
Received in revised form 16 November 2007
Accepted 30 January 2009
Available online 1 May 2009

Keywords:
Hybrid GMDH-type algorithm
Neural networks
SOPNN
Heuristic approximation
System identification

0965-9978/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.advengsoft.2009.01.029

* Corresponding author. Address: Department of Electrical Engineering and Computer Sciences, University of California Berkeley, CA 94720, United States.

E-mail address: [email protected] (D. Kim).

This paper presents a novel hybrid GMDH-type algorithm which combines neural networks (NNs) with an approximation scheme (self-organizing polynomial neural network: SOPNN). This composite structure is developed to establish a new heuristic approximation method for the identification of nonlinear static systems. NNs have been widely employed in process modeling and control because of their approximation capabilities. The SOPNN is an analysis technique for identifying nonlinear relationships between the inputs and outputs of such systems, and it builds hierarchical polynomial regressions of the required complexity. The combined model can therefore harmonize NNs with the SOPNN and find a workable synergistic environment. Simulation results for a nonlinear static system are provided to show that the proposed method is much more accurate than other modeling methods. Thus, it can be considered an efficient system identification methodology.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Neural network (NN) approaches can be used for function approximation because the models are universal approximators [1]. Knowledge is automatically acquired by learning algorithms, such as the back-propagation (BP) algorithm, in the model [2]. Thus, NN models are useful and efficient, particularly in problems in which the characteristics of processes are difficult to describe by using physical equations.

Another efficient modeling technique is the group method of data handling (GMDH)-type algorithm [3–10]. The GMDH algorithm [3], introduced by Ivakhnenko in the early 1970s, is an analysis technique for identifying the nonlinear relationships between the inputs and outputs of a given system. As revised versions, several types of GMDH-type algorithms were developed by Kondo [8–10] for problems that involve a complex nonlinear system and require good prediction accuracy. One of the GMDH-type algorithms is the self-organizing polynomial neural network (SOPNN) [13,14]. The SOPNN automatically selects the essential input variables and builds hierarchical polynomial regressions, called partial descriptions (PDs), of the required complexity. A high-order regression often leads to a severely ill-conditioned system of equations; the SOPNN avoids this problem by constantly eliminating variables at each layer. Therefore, complex systems can be modeled without specific knowledge of the system or a massive amount of data.

Much attention has been directed to developing advanced modeling techniques with high approximation capability for identifying, understanding, and predicting the undetermined behaviors of nonlinear systems. To meet these demands, good pieces of work should be investigated continually and applied to modeling nonlinear systems.

Several hybrid models involving the GMDH-type algorithm have been introduced and modified in [21–30]. With regard to such hybrid networks, the authors elaborated on two structures. In the former [21,24,25], called the polynomial and fuzzy polynomial neuron based SOPNN, the processing elements forming a PD are taken to be polynomial regression equations and fuzzy systems. Two cases, a basic and a modified type, are suggested for the polynomial based SOPNN, and four cases are considered for the fuzzy based SOPNN according to the topologies of the fuzzy system. In the latter [22,23,26–30], called the self-organizing neuro-fuzzy network, a fuzzy neural network (FNN) serves as the premise part of the hybrid network. The employed FNN can take several forms depending on whether a genetic optimization process is applied and on which connection point is used for the SOPNN, the basic type or the modified type. To achieve a high level of accuracy, these hybrid networks tend to generate overly complex architectures, and several heuristics (structure types, design parameters) affecting the quality of the resulting models must be amended and predetermined. This requires that



a diverse set of heuristic settings be devised and, for each case, the building process be repeatedly applied until an optimal combination is found. This leads to models with a large number of parameters and increases the computational burden. Accordingly, the authors carried out several trials in which different structure types were considered. Upon completion of the experimentation process, several hybrid models are available and the designer selects the one with the optimal performance.

In this paper, a novel hybrid GMDH-type algorithm is proposed by integrating simple NNs with the SOPNN. The hybrid model combines NNs and SOPNN into one methodology.

To keep the hybrid model simple and reduce the computational burden, only four neurons are assigned to the hidden layer of the NN part, and no specific optimization process is applied.

If many neurons are chosen, the performance of the hybrid model improves but the network becomes large. Conversely, with fewer neurons the model becomes small but its modeling ability can degrade. In addition, a flexible architecture cannot be constructed if only two neurons are considered. To trade off these criteria, four neurons were chosen from heuristic knowledge.

As a result, we can find a workable synergistic environment and harmonize the individual advantages of each method. Simulation results showed that the proposed method was much more accurate than the other modeling methods; therefore, it can be considered an efficient system identification methodology.

2. Hybrid GMDH-type algorithm

In this section, the hybrid GMDH-type algorithm is presented. This algorithm is obtained by combining NNs with the SOPNN.

NNs have recently gained popularity as an emerging and challenging computational technology, and they offer a new avenue for exploring the dynamics of a variety of nonlinear applications. NNs are a flexible mathematical structure capable of identifying complex nonlinear relationships between input and output data sets. NN architectures are useful and efficient, particularly in problems in which the characteristics of the processes are difficult to describe by using physical equations.

A typical architecture of the feedforward NN consists of one or more layers of neurons, namely, the input layer, hidden layer, and output layer. The input layer of the network does not perform any processing, but introduces scaled data to the network. The data from the input neurons are propagated through the network via interconnections. Every neuron in a layer is connected to every neuron in the adjacent layers, and a scalar weight is associated with each interconnection. The neurons within the hidden layer perform two tasks: they sum the weighted inputs to the neuron, and then pass the resulting sum through a nonlinear activation function (for example, the sigmoid function).
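The hidden-layer computation just described can be sketched as follows (a minimal illustration assuming NumPy; the names forward, V, and W are ours, not the paper's):

```python
import numpy as np

def sigmoid(s):
    # hidden-neuron activation: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, V, W):
    # hidden layer: weighted sum of the scaled inputs, then the sigmoid
    z = sigmoid(V @ x)
    # output layer: weighted sum of the hidden-neuron outputs
    return W @ z, z

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 2))   # 4 hidden neurons, 2 inputs (the paper's choice)
W = rng.normal(size=(1, 4))   # a single output neuron
y, z = forward(np.array([1.0, 2.0]), V, W)
```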

One important aspect of the NN development procedure is the learning process. Among the various NN paradigms, the feedforward NN with the back-propagation algorithm is employed.

The generalized delta rule is applied to adjust the weights of the NN and thus to minimize the predetermined cost error function. The central part of this learning rule concerns the recursive acquirement of a gradient vector, in which each element is defined as the derivative of an error measure with respect to a parameter [11]. This is done by means of the chain rule, a basic formula for differentiating composite functions. A gradient vector in a network structure is generally found by a procedure referred to as back-propagation because the gradient vector is calculated in the direction opposite to the flow of the output of each node.

To use gradient descent to minimize the error measure, we should observe the following causal relationships (refer to Fig. 1): a small change in the parameter a will affect the output of the node containing a; this, in turn, will affect the output of the final layer and thus the error measure. Therefore, the basic concept in calculating the gradient vector is to pass a form of derivative information starting from the output layer and going backward layer-by-layer until the input layer is reached [11].

The topology of any NN determines the accuracy and the domain of representation of a model. The determination of the number of hidden layers and of neurons in the hidden layer is rather arbitrary and application-dependent. In this paper, a single hidden layer has been considered, with four neurons in the hidden layer.

Since the SOPNN is applied to the hybrid model, its fundamentals are briefly explained.

The SOPNN algorithm is based on the GMDH and utilizes a class of polynomials such as linear, quadratic, and modified quadratic types. For example, specific forms of a PD in the case of two inputs are given as

Bilinear (Type 1) = c0 + c1x1 + c2x2  (1)

Biquadratic (Type 2) = c0 + c1x1 + c2x2 + c3x1^2 + c4x2^2 + c5x1x2  (2)

Modified biquadratic (Type 3) = c0 + c1x1 + c2x2 + c3x1x2  (3)

where the ci are known as regression coefficients. These polynomials are called partial descriptions (PDs). By choosing the most significant input variables and polynomial types among the various forms available, we obtain the PDs in each layer. The SOPNN identifies the model of a nonlinear complex system by using the input–output data set. This data set is divided into two parts: training and testing data sets. The total number of nodes is given by the number of combinations of a fixed number of inputs among all input variables. The number of input variables and the type of polynomial of each node are determined in advance by the designer. Using the input variables and type chosen for each node, a PD for each node is constructed. The coefficients of each PD are determined by the least squares method using the training data set, and the estimated output of each node is obtained. Each PD is then evaluated on the testing data set to check its predictive capability for the output variable. These values are compared, and the several PDs giving the best predictive performance are chosen. In the sequel, the second layer is constructed in the same way, with the output variable of each chosen node in the first layer taken as a new input variable to the second layer. This procedure is repeated until the stopping criterion is satisfied. Once the final layer is constructed, only the one node giving the best performance is selected as the output node; the remaining nodes in that layer are discarded. Furthermore, all the nodes in the previous layers that do not influence the selected output node are removed by tracing the data flow path on each layer. Finally, the SOPNN model is obtained.
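The least-squares estimation of one PD's coefficients can be sketched as follows (an illustrative sketch assuming NumPy; the helper names pd_design and fit_pd and the toy data are ours):

```python
import numpy as np

def pd_design(x1, x2, pd_type):
    # regressor matrix for the three PD forms of Eqs. (1)-(3)
    if pd_type == 1:      # bilinear
        cols = [np.ones_like(x1), x1, x2]
    elif pd_type == 2:    # biquadratic
        cols = [np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2]
    else:                 # modified biquadratic
        cols = [np.ones_like(x1), x1, x2, x1 * x2]
    return np.column_stack(cols)

def fit_pd(x1, x2, y, pd_type):
    # least-squares estimate of the regression coefficients c_i
    X = pd_design(x1, x2, pd_type)
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    return c

# toy data: y is exactly bilinear, so a Type 1 PD recovers the coefficients
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(1, 5, 50), rng.uniform(1, 5, 50)
y = 2.0 + 0.5 * x1 - 1.5 * x2
c = fit_pd(x1, x2, y, pd_type=1)
```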

There are two kinds of SOPNN structures, namely the basic and the modified structure. The basic SOPNN structure consists of nodes for which the number of input variables is the same in every layer, whereas in the modified SOPNN structure the number of input variables to each node can change from layer to layer. Each of them comes in two cases, case 1 and case 2: in both structures, the order of the polynomial in a PD either can or cannot vary from layer to layer. Detailed design procedures of the SOPNN algorithm can be found in [13,14].

The overall architecture of the SOPNN is shown in Fig. 2, in which four input variables (x1, ..., x4), 3 layers, and a partial description (PD) processing example are considered. Here, z_p^(j-1) denotes the output of the pth node in the (j-1)th layer, which is employed as a new input to the jth layer.

Fig. 1. Causal relationships of chain rule.

Fig. 2. Overall architecture of the SOPNN.

Fig. 4. Input–output relation of the two-input nonlinear system.

Table 1. Taxonomy of various SOPNN structures.

Layer   Number of inputs   Order of polynomial
1       p                  P
2–5     q                  Q

Black nodes have influence on the best node (output node), and these nodes represent the ultimate model. Meanwhile, solid line nodes have no influence over the output node. In addition, the dotted line nodes are excluded when choosing the PDs with the best predictive performance in the corresponding layer, owing to poor performance. Therefore, the solid line nodes and dotted line nodes are not present in the final model.

The SOPNN [14] is a GMDH-type algorithm and is one of the useful approximation techniques. The SOPNN has an architecture similar to that of feedforward neural networks, whose neurons are replaced by polynomial nodes. The output of each node in the SOPNN structure is obtained by using several types of PDs. SOPNNs have fewer nodes than NNs, but the nodes are more flexible. Although the SOPNN is structured by a systematic design procedure, it has some drawbacks: if only a small number of input variables are available, the SOPNN does not maintain good performance. With the proposed model, we can handle this limitation of the SOPNN and find a workable synergistic environment. These are the main differences from the conventional SOPNN.

The proposed model dwells on the idea of combining NNs with the SOPNN. The hybrid architecture of the proposed method is shown in Fig. 3: the NNs and the SOPNN are merged by a cascade connection for the nonlinear system. As shown in Fig. 3, the hybrid GMDH-type model has two main parts, the neural network and the SOPNN. The architecture of the network, considering the functionality of the individual parts, is described as follows.
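The cascade connection can be sketched as follows (an illustrative sketch under our own naming, not the authors' implementation: the hidden-neuron outputs z1..z4 of the NN part serve as the input variables of the SOPNN part):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nn_part(x, V):
    # NN part of the cascade: returns the four hidden-neuron outputs z1..z4
    return sigmoid(V @ x)

def pd(za, zb, c):
    # one SOPNN partial description (bilinear form of Eq. (1)) on a pair of z's
    return c[0] + c[1] * za + c[2] * zb

rng = np.random.default_rng(2)
V = rng.normal(size=(4, 2))            # 2 system inputs -> 4 hidden neurons
z = nn_part(np.array([1.0, 2.0]), V)   # outputs of the NN part
y = pd(z[0], z[1], np.array([0.1, 0.2, 0.3]))  # a first-layer PD of the SOPNN part
```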

Fig. 3. Architecture of hybrid GMDH-type model.



2.1. Neural network part

For a given training set of input–output pairs (x, d), the back-propagation algorithm performs two phases of data flow. First, the input pattern x is propagated from the input layer to the output layer and, as a result of this forward flow of data, produces an actual output y. Then, the error signals resulting from the difference between d and y are back-propagated from the output layer to the previous layers, which then update their weights [12]. We have m processing elements in the input layer, l processing elements in the hidden layer, and n processing elements in the output layer; the solid lines show the forward propagation of the signals, and the dashed lines show the backward propagation of the errors (refer to Fig. 3). The detailed back-propagation algorithm can be found in [12].
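The two phases above can be sketched for a 2-4-1 network with a single training pair (a minimal illustration under our own naming, using the learning rate of Table 2; this is not the authors' code):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(3)
V = rng.normal(scale=0.5, size=(4, 2))   # input -> hidden weights
W = rng.normal(scale=0.5, size=(1, 4))   # hidden -> output weights
x, d = np.array([0.5, -0.3]), np.array([0.8])   # one (x, d) training pair

lr = 0.1                                  # learning rate, as in Table 2
for _ in range(500):
    # phase 1: forward flow of data through the network
    z = sigmoid(V @ x)                    # hidden-neuron outputs
    y = W @ z                             # linear output neuron
    # phase 2: error signals back-propagated (generalized delta rule)
    e = y - d
    dW = np.outer(e, z)                          # dE/dW via the chain rule
    dV = np.outer((W.T @ e).ravel() * z * (1 - z), x)  # dE/dV via the chain rule
    W -= lr * dW
    V -= lr * dV
```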

2.2. Self-organizing polynomial neural network part

The SOPNN part also consists of several steps, as follows:

Fig. 5. Trend of performance index values of hybrid GMDH-type model with respect to epochs and layers (Basic SOPNN is used.).

Fig. 6. Trend of performance index values of hybrid GMDH-type model with respect to epochs and layers (Basic SOPNN is used.).

Step 1: Define the input variables as x1i = wi11, x2i = wi12, ..., xji = wi1j, related to the output variables yi, where j and i are the number of input variables and the index of the input–output data set, respectively.

Step 2: The SOPNN structure is selected on the basis of the number of input variables and the order of the PD in each layer. The possible combinations between them are included in Table 1. Meanwhile, the total number of layers was set to five in this study. Here, p is identical with q for the combinations of P and Q. The experimental range of each variable is defined as p = 2, ..., N, q = 2, ..., N, P = 1, 2, 3, Q = 1, 2, 3.

Step 3: Determine the regression polynomial structure of a PD. When r input variables are selected from the four input variables x1, x2, ..., x4 in the preceding layer, the total number of nodes (PDs) in the current layer is determined by k = 4!/((4 - r)!r!).

Step 4: A vector of the regression coefficients of a PD is determined by minimizing the following index:


Ek = (1/ntr) * sum_{i=1}^{ntr} (yi - zki)^2,  k = 1, 2, ..., N!/(r!(N - r)!)  (4)


Page 5: Hybrid GMDH-type modeling for nonlinear systems: Synergism to intelligent identification

Table 3. Results of the hybrid GMDH-type algorithm in Fig. 5.

Model                    NN PI    Inputs (layers 1-5)   Basic SOPNN order (layers 1-5)   PI
Hybrid GMDH-type model   3.3009   2                     Type 1                           0.14586
                                                        Type 2                           0.02230
                                                        Type 3                           0.07571

Table 4. Results of the hybrid GMDH-type algorithm in Fig. 6.

Model                    NN PI    Inputs (layers 1-5)   Basic SOPNN order (layers 1-5)   PI
Hybrid GMDH-type model   3.3009   3                     Type 1                           0.14586
                                                        Type 2                           0.02609
                                                        Type 3                           0.03386

Table 2. Initial parameters of neural networks.

Parameter                 Value
Learning rate             0.1
Momentum term             0.0123
Maximum training epochs   500
Number of hidden units    4


where zki denotes the output of the kth node with respect to the ith data item and ntr is the number of training data. This step is repeated for all selected PDs in each layer.

Step 5: Each PD in the current layer is evaluated with the training data. Starting from the PD with the smallest performance index, a predefined number w of PDs are selected. In this study, w was set to 30 in each layer. The outputs of the chosen PDs serve as inputs to the next layer.

Step 6: SOPNN learning is completed when the number of layers predetermined by the designer is reached. If the criterion is not satisfied, the outputs of the preserved PDs serve as new inputs to the next layer. This is expressed by

Fig. 7. Trend of performance index values of hybrid GMDH-type model with respect to epochs and layers (Modified SOPNN is used.).

x^j_1i = z^(j-1)_1i,  x^j_2i = z^(j-1)_2i,  ...,  x^j_ki = z^(j-1)_ki  (5)

where z^(j-1)_1i, ..., z^(j-1)_ki and x^j_1i, ..., x^j_ki denote the outputs of the (j - 1)th layer and the inputs of the jth layer, respectively. Here, the total number of layers was limited to five.
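The layer-building procedure of Steps 3-6 can be sketched as follows (a simplified illustration assuming NumPy and bilinear PDs only; the function names fit_pd, eval_pd, and sopnn_layer are ours, not the paper's):

```python
import numpy as np
from itertools import combinations

def fit_pd(X, y):
    # least-squares fit of a bilinear PD (Eq. (1)) on a pair of inputs (Step 4)
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1]])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

def eval_pd(X, c):
    return c[0] + c[1] * X[:, 0] + c[2] * X[:, 1]

def sopnn_layer(Ztr, ytr, Zte, yte, width):
    # one PD per input pair (Step 3), scored on the testing set (Step 5)
    scored = []
    for a, b in combinations(range(Ztr.shape[1]), 2):
        c = fit_pd(Ztr[:, [a, b]], ytr)
        err = np.mean((yte - eval_pd(Zte[:, [a, b]], c)) ** 2)
        scored.append((err, (a, b), c))
    scored.sort(key=lambda t: t[0])
    kept = scored[:width]          # keep the `width` best PDs
    # Step 6 / Eq. (5): outputs of the preserved PDs become the next layer's inputs
    next_tr = np.column_stack([eval_pd(Ztr[:, list(ab)], c) for _, ab, c in kept])
    next_te = np.column_stack([eval_pd(Zte[:, list(ab)], c) for _, ab, c in kept])
    return next_tr, next_te, kept[0][0]

rng = np.random.default_rng(5)
Z = rng.uniform(size=(40, 4))
y = Z @ np.array([1.0, -2.0, 0.5, 0.3])
tr, te = slice(0, 25), slice(25, 40)       # training / testing split
n_tr, n_te, best_err = sopnn_layer(Z[tr], y[tr], Z[te], y[te], width=3)
```

Repeating the call with (n_tr, n_te) as the new inputs grows the network layer by layer until the predetermined number of layers is reached.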

4. Application to nonlinear static system

In this section, we consider a nonlinear static system with two inputs, x1, x2, and a single output y:

y = (1 + x1^-2 + x2^-1.5)^2,  1 <= x1, x2 <= 5  (6)

This nonlinear static function has been widely used to evaluate modeling performance and has been reported by researchers such as Sugeno [15], Kim [16,17], and Gomez-Skarmeta [18]. The system exhibits the nonlinear characteristic shown in Fig. 4.

Using the above expression, Eq. (6), 50 input–output data were generated. The inputs were generated randomly, and the corresponding output was then computed through the above relationship.

The performance index is defined as the mean squared error

PI = (1/m) * sum_{i=1}^{m} (yi - ŷi)^2  (7)

where yi is the actual output, ŷi is the estimated output, and m is the number of data.
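The data generation from Eq. (6) and the performance index of Eq. (7) can be sketched as follows (an illustrative sketch assuming NumPy; the constant baseline model is ours, added only to exercise the index):

```python
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.uniform(1.0, 5.0, 50)   # 50 randomly generated inputs, as in the paper
x2 = rng.uniform(1.0, 5.0, 50)
y = (1.0 + x1 ** -2.0 + x2 ** -1.5) ** 2   # Eq. (6)

def performance_index(y_true, y_est):
    # mean squared error of Eq. (7)
    return np.mean((y_true - y_est) ** 2)

# PI of a trivial constant model, just to illustrate the index
pi = performance_index(y, np.full_like(y, y.mean()))
```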

The experiment with the hybrid GMDH-type algorithm was carried out, and the results are summarized in the figures and tables. In Figs. 5 and 6 and Tables 3 and 4, the values of the performance index of the hybrid GMDH-type model with the basic SOPNN are shown. Fig. 5 depicts the trend of the performance index values produced by the NN and the basic SOPNN. After 500 epochs, the final result of the NN using the initial parameters shown in Table 2 was 3.3009, and when two input variables and Type 1, 2, and 3 polynomials were employed for every node in all layers up to the fifth layer of the SOPNN, 0.14586, 0.02230, and 0.07571 were obtained, respectively. These results are listed in Table 3.

When three input variables and Type 1, 2, and 3 polynomials are used for every node in all layers up to the fifth layer of the SOPNN,



Fig. 8. Trend of performance index values of hybrid GMDH-type model with respect to epochs and layers (Modified SOPNN is used.).

Table 5. Results of the hybrid GMDH-type algorithm in Fig. 7.

Model                    NN PI    Inputs (1st / 2-5 layer)   Modified SOPNN order (1st / 2-5 layer)   PI
Hybrid GMDH-type model   3.3009   2 / 3                      Type 1 / Type 2                          0.00451
                                                             Type 2 / Type 2                          0.00128
                                                             Type 3 / Type 2                          0.00243

Table 6. Results of the hybrid GMDH-type algorithm in Fig. 8.

Model                    NN PI    Inputs (1st / 2-5 layer)   Modified SOPNN order (1st / 2-5 layer)   PI
Hybrid GMDH-type model   3.3009   2 / 4                      Type 1 / Type 2                          0.00451
                                                             Type 2 / Type 2                          0.00022
                                                             Type 3 / Type 2                          0.00056

Table 7. Results of the hybrid GMDH-type algorithm in Fig. 9.

Model                    NN PI    Inputs (1st / 2-5 layer)   Modified SOPNN order (1st / 2-5 layer)   PI
Hybrid GMDH-type model   3.3009   3 / 2                      Type 1 / Type 2                          0.02919
                                                             Type 2 / Type 2                          0.02656
                                                             Type 3 / Type 2                          0.02018


the trend of the performance index values of the hybrid GMDH-type model is shown in Fig. 6, and the results are listed in Table 4.

When the modified SOPNN was used for the hybrid GMDH-type model, the error curves showed the trends in Figs. 7 and 8. Fig. 7 shows the error curves of the proposed model. Here, the parameters of the NN are identical to those used previously, but those of the SOPNN are different. Regarding the number of input variables used, two inputs in the 1st layer and three inputs in the 2nd layer or higher were employed, as shown in Fig. 7. For the order type, Types 1, 2, and 3 in the 1st layer and Type 2 in the 2nd layer or higher were used. The final results of the proposed model shown in Fig. 7 are listed in Table 5.

For two inputs in the 1st layer and four inputs in the 2nd layer or higher for the SOPNN, and three inputs in the 1st layer and four inputs in the 2nd layer or higher, the error curves of the proposed model are as shown in Figs. 8 and 9, respectively. The order types and modeling options of the NN are identical to the previous ones. Tables 6 and 7 summarize the final results of Figs. 8 and 9, respectively.

Fig. 9. Trend of performance index values of hybrid GMDH-type model with respect to epochs and layers (Modified SOPNN is used.).

Fig. 10. Final structure of the hybrid GMDH-type model.

Fig. 11. Identification performances of the final hybrid GMDH-type model: (a) actual output versus model output; (b) errors.

Table 8. Comparison of identification performance between our model and some previous models.

Model                                                 MSE / PI
Sugeno [15]                                           0.079, 0.010
Kim [16]                                              0.0197
Kim [17]                                              0.0089
Gomez-Skarmeta [18]                                   0.070
Hwang [19]                                            0.073
Lin [20]                                              0.0035
Pedrycz [14]                                          Impossible
Pedrycz [21]   Advanced-PN-modified-case 1 type       0.0041
               Advanced-FPN-modified-case 2 type      6.9e-20
Pedrycz [22]   Basic-case 1 type                      0.0033
               Modified-case 2 type                   0.0023
Pedrycz [24]   Advanced-basic-case 1 type             0.0212
               Advanced-modified-case 1 type          0.0041
Our model                                             0.0223, 0.0002, 0.0005

The final structure of the proposed hybrid GMDH-type model and its identification performance are shown in Figs. 10 and 11, respectively. The model output follows the actual output very well; the value of the performance index of the proposed method is 0.00022, as shown in Table 6. With respect to versatility, the conventional SOPNN cannot construct its architecture flexibly when only two or three input variables are considered; therefore, its approximation capabilities are limited. But as can be seen from Figs. 10 and 11 and Table 6, the proposed system shows superior performance and a versatile architecture.

Table 8 shows the performance of our proposed method and of other models studied in the previous literature. The experimental results showed that our hybrid GMDH-type model offered encouraging advantages and good performance.

In Table 8, the computation times for the results of the proposed model, 0.0223, 0.0002, and 0.0005, are 0.48 s, 30.65 s, and 25.65 s, respectively, using a Pentium 4 CPU at 2.8 GHz with 760 MB RAM.

5. Conclusions and further research

In this paper, we introduced a new hybrid GMDH-type model using neural networks and a self-organizing model. The proposed model is based on the idea of combining NNs with the SOPNN into one methodology. NNs are a promising generation of information processing systems that demonstrate the ability to learn, recall, and generalize from training data. The GMDH-type algorithm provides an automated selection of essential input variables and builds hierarchical polynomial regressions of necessary complexity. This hybrid synergistic integration reaps the benefits of both NNs and the SOPNN: the NNs provide a connectionist structure and learning abilities to the SOPNN, and the SOPNN provides the NNs with a structural framework of optimal complexity. These benefits can be witnessed in the serial connection of the two methods.

In the simulation studies, the hybrid GMDH-type model was applied to a nonlinear static system. These studies showed that the proposed modeling technique is a sophisticated and versatile architecture capable of constructing models from a limited data set and producing superb results. For performance improvement and network simplification, a global optimization technique such as a genetic algorithm needs to be applied.
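As a pointer to that future direction, a genetic algorithm for structure search can be sketched in a few lines. This is a toy illustration under our own assumptions — `genetic_search`, its parameter settings, and the bit-mask encoding of network structure are not from the paper:

```python
import random

def genetic_search(fitness, n_bits, pop_size=20, generations=30, p_mut=0.1, seed=0):
    """Minimal elitist genetic algorithm over bit strings; lower fitness is better.
    All parameter names and settings here are illustrative, not from the paper."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# toy structure search: recover a target connection mask
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = genetic_search(lambda c: sum(x != t for x, t in zip(c, target)), len(target))
err = sum(x != t for x, t in zip(best, target))
print("mismatched bits:", err)
```

In a structure-optimization setting, each bit would instead encode whether a candidate PD or input variable is retained, and the fitness would be the model's performance index.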

Acknowledgment

This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD): KRF-2007-357-H00006. This work was also supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD, Basic Research Promotion Fund) (KRF-2008-521-D00137).

References

[1] Funahashi KI. On the approximate realization of continuous mappings by neural networks. Neural Networks 1989;2(3):183–92.

[2] Thawonmas R, Abe S. Function approximation based on fuzzy rules extracted from partitioned numerical data. IEEE Trans Syst Man Cyber 1999;29(4):525–34.

[3] Ivakhnenko AG. Polynomial theory of complex systems. IEEE Trans Syst Man Cyber 1971:364–78.

[4] Ivakhnenko AG, Krotov GI, Ivakhnenko NA. Identification of the mathematical model of a complex system by the self-organization method. In: Halfon E, editor. Theoretical systems ecology: advances and case studies. New York: Academic; 1970 [Chapter 13].

[5] Ivakhnenko AG, Ivakhnenko NA. Long-term prediction by GMDH algorithms using the unbiased criterion and the balance-of-variables criterion. Sov Autom Contr 1974;7:40–5.

[6] Ivakhnenko AG, Ivakhnenko NA. Long-term prediction by GMDH algorithms using the unbiased criterion and the balance-of-variables criterion, part 2. Sov Autom Contr 1975;8:24–38.

[7] Farlow SJ. Self-organizing methods in modeling: GMDH type-algorithms. New York: Marcel Dekker; 1984.

[8] Kondo T, Pandya S. Identification of the multi-layered neural networks by revised GMDH-type neural network algorithm with PSS criterion. Lect Notes Artif Intell 2004;3214:1051–9.

[9] Kondo T. Identification of radial basis function networks by using revised GMDH-type neural networks with a feedback loop. In: Proceedings of the 41st SICE annual conference, vol. SICE 02-0652; 2002. p. 1–6.

[10] Tamura H, Kondo T. Heuristics free group method of data handling algorithm of generating optimum partial polynomials with application to air pollution prediction. Int J Syst Sci 1980;11(9):1095–111.

[11] Jang J, Sun C, Mizutani E. Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. Prentice-Hall; 1997.

[12] Lin C, George Lee C. Neural fuzzy systems: a neuro-fuzzy synergism to intelligent systems. Prentice-Hall; 1996.

[13] Oh SK, Kim DW, Park BJ. A study on the optimal design of polynomial neural networks structure. Trans KIEE 2000;49D(3) [in Korean].

[14] Oh SK, Pedrycz W. The design of self-organizing polynomial neural networks. Inf Sci 2002;141:237–58.

[15] Sugeno M, Yasukawa T. A fuzzy-logic-based approach to qualitative modeling. IEEE Trans Fuzzy Syst 1993;1(1):7–31.

[16] Kim ET, Park MK, Ji SH, Park M. A new approach to fuzzy modeling. IEEE Trans Fuzzy Syst 1997;5(3):328–37.

[17] Kim E, Lee H, Park M, Park M. A simply identified Sugeno-type fuzzy model via double clustering. Inf Sci 1998;110:25–39.

[18] Gomez-Skarmeta AF, Delgado M, Vila MA. About the use of fuzzy clustering techniques for fuzzy model identification. Fuzzy Sets Syst 1999;106:179–88.

[19] Hwang HS, Woo KB. Linguistic fuzzy model identification. IEE Proc Control Theory Appl 1995;142(6).

[20] Lin Y, Cunningham III GA. A new approach to fuzzy-neural system modeling. IEEE Trans Fuzzy Syst 1995;3(2):190–8.

[21] Oh SK, Pedrycz W. Self-organizing polynomial neural networks based on polynomial and fuzzy polynomial neurons: analysis and design. Fuzzy Sets Syst 2004;142:163–98.

[22] Park BJ, Oh SK, Pedrycz W. Fuzzy polynomial neural networks: hybrid architectures of fuzzy modeling. IEEE Trans Fuzzy Syst 2002;10:607–21.

[23] Oh SK, Pedrycz W, Park BJ. Self-organizing neurofuzzy networks based on evolutionary fuzzy granulation. IEEE Trans Syst Man Cyber-A 2003;33:271–7.

[24] Oh SK, Pedrycz W, Park BJ. Polynomial neural networks architecture: analysis and design. Comput Electr Eng 2003;29:703–25.

[25] Oh SK, Pedrycz W, Park HS. Self-organising networks in modeling experimental data in software engineering. IEE Proc Comput Digit Technol 2002;149:61–78.

[26] Oh SK, Pedrycz W, Park BJ. Relation-based neurofuzzy networks with evolutionary data granulation. Math Comput Modell 2004;40:891–921.

[27] Oh SK, Pedrycz W, Park BJ. Self-organizing neurofuzzy networks in modeling software data. Fuzzy Sets Syst 2004;145:165–81.

[28] Park BJ, Oh SK, Pedrycz W, Ahn TC. Information granulation-based multi-layer hybrid fuzzy neural networks: analysis and design. LNCS 2004;3037:188–95.

[29] Oh SK, Park BJ, Pedrycz W, Ahn TC. Genetically optimized hybrid fuzzy neural networks based on simplified fuzzy inference rules and polynomial neurons. LNCS 2005;3514:798–803.

[30] Oh SK, Pedrycz W, Park BJ. Multilayer hybrid fuzzy neural networks: synthesis via technologies of advanced computational intelligence. IEEE Trans Circ Syst-I 2006;53:688–703.