
2005 International Conference on Neural Networks and Brain, Beijing, China, 13-15 Oct. 2005



A modified neural network based on subtractive clustering for bidding system

Min Han, Yingnan Fan, Wei Guo
School of Electronic and Information Eng., Dalian Univ. of Technology, Dalian, China, 116023
E-mail: [email protected]

Abstract-This paper presents a modified neural network based on subtractive clustering (NN-SC), which can be used to estimate the mark-up in a construction bidding system. In recent years, many neural fuzzy modeling approaches have been proposed, but they are limited by complexity and arbitrariness in computation and structure. In this paper, the NN-SC is proposed to overcome the drawbacks mentioned above while retaining fuzzy inference and self-learning ability. It uses subtractive clustering to generate rules and form the rulebase. With its rule inference steps, it is convenient to determine the degree of applicability of each rule. Therefore, it has a high degree of transparency, a compact structure and computational efficiency. Based on a neural network, the nonlinear mapping between input and output is accomplished. Simulation results show that the proposed network is valid and performs well.

Index Terms-neural network; subtractive clustering; rule; bidding system

I. INTRODUCTION

Recently, the neural fuzzy approach to system modeling has become a popular research focus. The key advantage of the neural fuzzy approach over traditional ones lies in the fact that the former does not require a mathematical description of the system being modeled [1]. Moreover, in contrast to pure neural network or fuzzy methods, the neural fuzzy method possesses the advantages of both: it brings the low-level learning and computational power of neural networks into fuzzy systems and provides the high-level human-like thinking and reasoning of fuzzy systems to neural networks. A general fuzzy logic system contains four major components [2]: fuzzifier, rule base, inference engine, and defuzzifier. The rule base and defuzzifier are the most important of the four because they can improve the transparency of the neural network. Although introducing fuzzy logic into a neural network has many advantages, it also has some disadvantages that have limited its application, such as the increased computation of the four components of the fuzzy logic system and the problem of obtaining the rulebase. In addition, the choice of the membership functions of the fuzzy sets is rather arbitrary [3]. To remove these shortcomings of the neural fuzzy system, a modified neural network based on subtractive clustering is proposed.

The main issues associated with a neural fuzzy system

are 1) parameter estimation, which involves determining the parameters of the premises and consequences, and 2) structure identification, which concerns partitioning the input space and determining the number of fuzzy rules for a specific performance [4]. The parameters can be adjusted by the BP algorithm or a hybrid learning algorithm. In general, structure identification involves initial rule generation in IF-THEN form in terms of fuzzy sets over the data dimensions [5], which approximates the final rule base. Several algorithms have been developed to generate fuzzy rules from numerical training data.

For a given data set, different fuzzy rules can be obtained using different identification methods. Grid partitioning (GP) [6] and fuzzy clustering methods are popularly used to identify the antecedent part of a neural fuzzy system. Subtractive clustering [7] is an efficient fuzzy clustering algorithm. Compared with the GP method, it requires no optimization, so it is a good choice for the initialization of a neural fuzzy network. Fuzzy c-means and other optimization-based clustering techniques lead to excessive computational work because they perform a necessary optimization phase before network training. Moreover, progressive clustering and compatible cluster merging algorithms are computationally expensive and need metrics for the validation of individual clusters [8]. Therefore, despite their potential, they are too complex for training a network. Chiu's algorithm belongs to the class of potential function methods, being, more precisely, a variation of the mountain method. In this class of algorithms, a set of points is defined as possible cluster centers, each of them being interpreted as an energy source. In subtractive clustering, the center candidates are the data samples themselves [9], which overcomes the main limitation of the mountain method, where the candidates are defined on a grid, leading to "curse of dimensionality" problems. In this paper, the rules of the modified neural network are generated by the subtractive clustering algorithm to design a new structure. Based on research into neural fuzzy systems, a modified

neural network is proposed. The new structure of the neural network is derived from the neural fuzzy network based on the T-S model [3]. It is parallel and rule-based. In this paper, the structure of the model is obtained by means of subtractive clustering. By adopting a new method to define the degree of applicability of the IF-THEN rules, it does not use fuzzy membership functions, so the fuzzifier and defuzzifier are unnecessary. Hence, the structure of the network is more compact and, at the same time, the complexity of computation is decreased.

0-7803-9422-4/05/$20.00 ©2005 IEEE


It is obvious that the modified neural network model not only has the characteristics of fuzzy systems, but also has some advantages of neural networks. To illustrate the performance of this network, it is applied to the construction bidding system.

The paper is organized as follows. In section II, the

subtractive clustering is introduced, including the choice ofparameters and acquisition of rules. Section III describesthe structure and algorithm of the modified neural network.The simulation results and analyses are provided in sectionIV. Finally, the conclusions are given in section V.

II. RULES GENERATED BASED ON SUBTRACTIVE CLUSTERING

Let Z be a set of N data samples, z_1, z_2, \ldots, z_N, defined in an (m+n)-dimensional space, where m denotes the number of inputs and n denotes the number of outputs. In order to make the range of values in each dimension identical, the data samples are normalized so that they lie within a hypercube.

As mentioned, each of the samples is admitted as a possible cluster center. Therefore, the potential associated with z_i is given by Eq. (1):

P_i = \sum_{j=1}^{N} \exp(-\alpha \| z_i - z_j \|^2), \quad i = 1, 2, \ldots, N; \ i \neq j, \quad \alpha = 4 / r_a^2    (1)

where r_a > 0 is the radius that defines the neighborhood of each point. Thus, points z_j located outside the radius of z_i have a smaller influence on its potential, while the effect of points close to z_i grows with proximity. Hence, points with a dense neighborhood have higher associated potentials. After computing the potential of each point, the one with the highest potential is selected as the first cluster center. Next, the potential of all the remaining points is reduced. Defining z_1^* as the first cluster center and denoting its potential by P_1^*, the potential of the remaining points is reduced as in Eq. (2):

P_i \leftarrow P_i - P_1^* \exp(-\beta \| z_i - z_1^* \|^2), \quad i = 1, 2, \ldots, N; \ i \neq j, \quad \beta = 4 / r_b^2    (2)

where the radius r_b > 0 defines the neighborhood within which the potential reduction is significant. Accordingly, points close to the selected center have their potentials reduced more significantly, so their probability of being selected as centers diminishes. This procedure has the advantage of avoiding the concentration of identical clusters in denser zones. Therefore, r_b is selected to be slightly larger than r_a to avoid closely spaced clusters, for example r_b = 1.5 r_a. After the potentials of all the candidates have been reduced, the one with the highest potential is selected as the second cluster center. Then the potential of the remaining points is again reduced. Generically, after the r-th cluster center is determined, the potential is reduced as in Eq. (3):

P_i \leftarrow P_i - P_r^* \exp(-\beta \| z_i - z_r^* \|^2)    (3)

The procedure of center selection and potential reduction is

repeated until the following stopping criterion [10] is reached:

1) If P_k^* > \bar{\varepsilon} P_1^*, accept z_k^* as the next cluster center and continue.

2) If P_k^* < \varepsilon P_1^*, reject z_k^* and finish the algorithm.

3) Otherwise, let d_min be the shortest distance between z_k^* and all the centers already found. If d_min / r_a + P_k^* / P_1^* \geq 1, accept z_k^* as the next cluster center and continue; otherwise, reject z_k^*, assign it the potential 0, select the point with the next highest potential as the new z_k^*, and repeat the test.

where k \in [1, \ldots, r], \bar{\varepsilon} is the upper threshold above which a point is selected as a center without doubt, and \varepsilon is the lower threshold below which a point is definitely rejected. The third case characterizes a point with a good tradeoff between having a sufficiently high potential and being distant enough from the clusters determined before.

As can be understood from the description of the

algorithm, the number of clusters obtained is not pre-specified. However, it is important to note that the radius parameter is directly related to the number of clusters found. Thus, a small radius results in a high number of rules, which, if excessive, may cause overfitting. On the other hand, a larger radius results in a smaller number of clusters, which may cause underfitting, and a model with too coarse a representation loses accuracy. In practice, it is necessary to test several values of the radius and select the most adequate one according to the results obtained. This is an important advantage over optimization-based and other classes of clustering algorithms when little is known about the optimal number of clusters. Another advantage of subtractive clustering is that the algorithm is robust to noise, since outliers do not significantly influence the choice of centers, due to their low potentials.

With subtractive clustering, each of the obtained clusters constitutes a prototype for a particular behavior of the system under analysis. So, each cluster can be used to define a fuzzy rule capable of describing the behavior of the system in some region of the input-output space. The rules can then be used to design the structure of the neural network.
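As an illustration, the potential computation of Eq. (1), the reduction of Eqs. (2)-(3) and the three-way stopping criterion can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the radius and the thresholds (here `eps_up` for \bar{\varepsilon} and `eps_low` for \varepsilon) are free parameters.

```python
import numpy as np

def subtractive_clustering(Z, ra=0.5, eps_up=0.5, eps_low=0.15):
    """Sketch of Chiu's subtractive clustering on normalized samples Z (N x d)."""
    rb = 1.5 * ra                          # squash radius slightly larger than ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                   # potential of each sample, Eq. (1)
    P1 = P.max()
    centers = []
    while True:
        k = int(P.argmax())
        Pk = P[k]
        if Pk > eps_up * P1:                              # case 1: clear accept
            pass
        elif Pk < eps_low * P1:                           # case 2: clear reject -> stop
            break
        else:                                             # case 3: distance/potential tradeoff
            dmin = min(np.linalg.norm(Z[k] - c) for c in centers)
            if dmin / ra + Pk / P1 < 1.0:
                P[k] = 0.0                                # discard candidate, try next best
                continue
        centers.append(Z[k].copy())
        # reduce remaining potentials around the new center, Eqs. (2)/(3)
        P = P - Pk * np.exp(-beta * ((Z - Z[k]) ** 2).sum(-1))
    return np.array(centers)
```

On data with two well-separated dense groups, the sketch returns one center inside each group, consistent with the noise-robustness discussion above.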




III. THE NEURAL NETWORK BASED ON SUBTRACTIVE CLUSTERING

A. Determination of the Degree of Applicability

The ability of the modified neural network is analogous to that of the neural fuzzy network based on the T-S model, while its structure is a neural network with learning ability. Since the rules obtained by subtractive clustering describe the behavior of the system, a main task of the network is to determine the degree of applicability of each rule. According to the input vector, the degree of applicability can be determined for each IF-THEN rule at each time sample.

In neural fuzzy systems, the degree of applicability of each rule is determined by choosing a fuzzy membership function and computing the fuzzy membership degree. Typically, the Gaussian function is chosen as the fuzzy membership function, as shown in Eq. (4):

\mu_{lk} = \exp[-(x_l - x_{lk})^2 / (2\sigma_{lk}^2)], \quad l = 1, 2, \ldots, m; \ k = 1, 2, \ldots, n_l    (4)

where \mu_{lk} denotes the membership degree of the k-th linguistic term of x_l, X_i = [x_1, \ldots, x_l, \ldots, x_m] is the input vector, x_{lk} and \sigma_{lk} denote the center and width of the Gaussian membership function, and n_l denotes the number of membership functions. The maximum of \mu_{lk} is then obtained and denoted by

\mu_l = \max_k \mu_{lk}, \quad l = 1, 2, \ldots, m    (5)

Then, the degree of applicability \mu_j of the j-th rule

is defined:

\mu_j = \prod_{l=1}^{m} \mu_{lk} \quad \text{or} \quad \mu_j = \prod_{l=1}^{m} \mu_l    (6)

Fuzzy sets for the linguistic terms can be defined by different numbers and types of membership functions, so this determination of the degree of applicability is rather arbitrary and subjective, depending on empirical knowledge and prior information [3]. In order to overcome these drawbacks, the Euclidean distance between the input vector X_i and the center C_j is used to determine the degree of applicability \mu_j(X_i) of the IF-THEN rules, as in Eq. (7):

\mu_j(X_i) = 1 - \| X_i - C_j \|, \quad i = 1, 2, \ldots, N; \ j = 1, 2, \ldots, R    (7)

where C_j is the center generated by the subtractive clustering algorithm and R is the number of rules. The closer the input vector X_i approaches the center C_j, the closer the degree of applicability is to unity and the more closely the premises of the IF-THEN rules are fulfilled. Using this method, the network need not carry out fuzzification and defuzzification steps, so the structure of the neural network is more compact and the computational efficiency is improved.
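The distance-based degree of applicability of Eq. (7) reduces to a single line of code (an illustrative sketch; `centers` stands for the rule centers C_j produced by the clustering step, with inputs normalized to a unit hypercube):

```python
import numpy as np

def applicability(x, centers):
    """Degree of applicability of each rule, Eq. (7): mu_j(x) = 1 - ||x - c_j||."""
    return 1.0 - np.linalg.norm(np.asarray(centers) - x, axis=1)
```

An input lying exactly on a center receives a degree of applicability of 1, and the degree decreases with distance, as described above.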

B. Structure of the Modified Neural Network

A multi-input multi-output system can be composed of many multi-input single-output systems in parallel, so in this paper the network is designed as a multi-input single-output system. The structure of the modified neural network based on subtractive clustering is shown in Fig. 1.

[Figure omitted: network diagram showing an antecedent subnetwork (input layer and rule layer) in parallel with a consequent subnetwork (input, hidden, middle output, multiplier and output layers).]

Fig. 1 Structure of the modified neural network

The network is made up of two subnetworks: the antecedent subnetwork and the consequent subnetwork. The antecedent subnetwork is designed to match the "IF" part of the fuzzy rules and generate the degree of applicability of each rule for the input vector, while the consequent subnetwork computes the "THEN" part of the fuzzy rules. The output of the whole neural network is the weighted sum of the consequent values of all rules, where the weights are the degrees of applicability of each rule.

The antecedent subnetwork contains two layers: an input layer and a rule layer. The input layer passes the input data on to the rule layer. In the rule layer, the degree of applicability of each rule for the input vector is determined by Eq. (8):

o_j^{22} = 1 - \| I^{22} \omega - C_j \|    (8)

where I^{22} and o_j^{22} denote the input and output of the second layer (i.e. the rule layer) of the antecedent subnetwork, and \omega is the connection weight between the input layer and the rule layer. The number of nodes in this layer is equal to the number of rules. The degree of applicability is viewed as a weight on the middle output layer of the consequent subnetwork.

The consequent subnetwork contains five layers: input

layer, hidden layer, middle output layer, multiplier layer and output layer. The structure differs from a standard BP neural network because a middle output layer is inserted and there is no given teacher signal for it.

The input layer is the first layer of the consequent subnetwork. Its input and output can be denoted as follows:

I_i^1 = x_i; \quad o_i^1 = I_i^1



The second layer is the hidden layer, in which the nonlinear mapping from input data to output values is implemented. Its input and output can be denoted as follows:

I_k^2 = \sum_{i=1}^{m} \omega_{ik} o_i^1; \quad o_k^2 = f(I_k^2)

The third layer is the middle output layer. The number of nodes equals the number of rules, and the output is the consequent value of each rule. Its input and output are, respectively:

I_j^3 = \sum_{k} \omega_{kj} o_k^2; \quad o_j^3 = f(I_j^3)

The fourth layer is the multiplier layer. In this layer, the degree of applicability, acting as a weight, multiplies the output of the middle output layer to give the weighted consequent value of each rule. The number of nodes again equals the number of rules. So the input and output are:

I_j^4 = o_j^3 \times o_j^{22}; \quad o_j^4 = I_j^4

where o_j^{22} = \mu_j is the degree of applicability of the j-th rule, i.e. one of the outputs of the antecedent subnetwork.

The fifth layer is the output layer:

I_1^5 = \sum_{j=1}^{R} o_j^4; \quad I_2^5 = \sum_{j=1}^{R} o_j^{22}; \quad o^5 = I_1^5 / I_2^5

It can be seen that the proposed network depends on fuzzy rules and rule inference; nevertheless, it has no complete fuzzification and defuzzification steps in the antecedent subnetwork. Compared with a neural fuzzy system, it is more compact and simpler. In the modified neural network, the antecedent and consequent subnetworks are parallel and can process data at the same time, so the training speed, stability and robustness are improved.

C. Learning Algorithm

Based on the neural network in Fig. 1, the output of the whole network can be described by Eq. (9):

y = \sum_{j=1}^{R} \mu_j y_j \Big/ \sum_{j=1}^{R} \mu_j    (9)
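A minimal sketch of the complete forward pass implied by Fig. 1 and Eq. (9) might look as follows. The weight matrices `W1`, `W2` and the single-hidden-layer shape are illustrative assumptions, not the authors' exact configuration, and the input is assumed close enough to the rule centers that the degrees of applicability stay positive.

```python
import numpy as np

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

def forward(x, centers, W1, W2):
    """Two-subnetwork forward pass: antecedent path gives mu_j = 1 - ||x - c_j||,
    consequent path gives the rule outputs y_j, and the output layer forms the
    weighted average y = sum(mu_j * y_j) / sum(mu_j), Eq. (9)."""
    mu = 1.0 - np.linalg.norm(centers - x, axis=1)  # rule layer (antecedent subnetwork)
    h = sigmoid(x @ W1)                             # hidden layer
    y_rule = sigmoid(h @ W2)                        # middle output layer: one y_j per rule
    return (mu * y_rule).sum() / mu.sum()           # multiplier + output layer
```

Because the rule outputs pass through a sigmoid and the weights \mu_j form a convex combination when positive, the network output stays within the (0, 1) range of the normalized training targets.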

where \mu_j is the degree of applicability of the j-th rule, y_j is the output of the middle output layer, and R is the number of rules.

The quadratic error function is used as the performance function of the neural network:

E^p = \frac{1}{2} (y^p - \hat{y}^p)^2    (10)

E = \frac{1}{N} \sum_{p=1}^{N} E^p    (11)

where N denotes the number of patterns, y^p is the target output of the p-th pattern, and \hat{y}^p is the model output for the p-th pattern. The partial derivative of the performance function with respect to the actual model output is shown in Eq. (12):

\partial E^p / \partial \hat{y}^p = \hat{y}^p - y^p    (12)

In this paper, the sigmoid function is chosen as the mapping function of the hidden layer and the middle output layer:

f(I) = \frac{1}{1 + e^{-I}}    (13)

and its derivative is:

f'(I) = f(I)[1 - f(I)]    (14)

The partial derivatives for the adder and the divider in the output layer are described in Eq. (15) and (16):

\frac{\partial y}{\partial o_j^4} = \frac{\partial y}{\partial I_1^5} \frac{\partial I_1^5}{\partial o_j^4} = \frac{1}{I_2^5}    (15)

\frac{\partial y}{\partial o_j^{22}} = \frac{\partial y}{\partial I_2^5} \frac{\partial I_2^5}{\partial o_j^{22}} = -\frac{I_1^5}{(I_2^5)^2}    (16)

Eq. (17) and (18) give the partial derivatives of the multiplier:

\frac{\partial o_j^4}{\partial o_j^3} = o_j^{22}    (17)

\frac{\partial o_j^4}{\partial o_j^{22}} = o_j^3    (18)

In this paper, the gradient descent algorithm with a momentum term is adopted for training. The weights are adjusted as follows:

\omega(n+1) = \omega(n) - \eta \frac{\partial E}{\partial \omega} + \alpha [\omega(n) - \omega(n-1)]

where \eta is the learning rate and \alpha is the momentum constant. In the antecedent subnetwork, the increment of the weights can be

expressed:

\frac{\partial o_j^{22}}{\partial \omega_i} = -\Big( \sum_{l=1}^{m} (I_l^2 \omega_l - c_{lj})^2 \Big)^{-1/2} (I_i^2 \omega_i - c_{ij}) I_i^2    (19)

\frac{\partial E^p}{\partial \omega} = \frac{\partial E^p}{\partial o_j^{22}} \frac{\partial o_j^{22}}{\partial \omega}

The increment of the consequent network is similar.
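The momentum update rule above can be written generically as a one-line step function (an illustrative sketch; the values of \eta and \alpha are arbitrary choices, not those used in the paper):

```python
def momentum_step(w, w_prev, grad, eta=0.1, alpha=0.8):
    """One update: w(n+1) = w(n) - eta * dE/dw + alpha * (w(n) - w(n-1))."""
    return w - eta * grad + alpha * (w - w_prev)
```

Applied to a toy quadratic error E = w^2/2 (so dE/dw = w), repeated steps drive w toward the minimum at 0; the momentum term \alpha(w(n) - w(n-1)) smooths the trajectory and accelerates convergence along consistent gradient directions.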

IV. SIMULATIONS

In this section, the neural network based on subtractive clustering is applied to the bidding system in the construction industry. Estimating the mark-up percentage may be the most important and difficult decision for a contractor when making a bid for a project, as the mark-up estimate has to be low enough to win the contract but high enough to make a profit [10]. Since the pioneering works of Friedman (1956) and Gates (1967), the construction industry has been presented with a number of analytical and numerical models for the calculation and optimization of bid mark-ups [11]. There are many intelligent models [10-12], including neural network



and neural fuzzy models, for estimating the mark-up. The neural network proposed in this paper inherits the advantages of neural fuzzy systems, which makes it well suited to mark-up estimation.

Identification of the major factors that affect mark-up decisions has become a significant research area, and different researchers have identified and proposed different sets of factors [10][12]. This paper uses nine factors, which have been processed by variable selection [13], as the input of the network, and the mark-up (MP) is viewed as the output. Table I shows the definitions and value ranges of each of the nine factors. There are 75 bidding examples of building projects collected from real engineering as the training and test data, of which 70 examples are used for training and the others for testing. Table II gives ten of these sets of input and output data.

First, the pattern data are normalized to the range [0, 0.9]. On one hand, this process speeds up the training of the network and avoids the saturation regions of the sigmoid function; on the other hand, it makes all the input and output values of the same order of magnitude for the implementation of subtractive clustering. Second, the centers obtained from subtractive clustering are put into the rule layer of the antecedent subnetwork; here, the numbers of input and middle output nodes are confirmed. The paper uses the empirical formula h = \sqrt{n + m} + a to compute the number of hidden nodes, where n is the number of middle output layer nodes, m is the number of input nodes, and a is a constant in the range [1, 10].
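The hidden-layer sizing formula can be applied as in this small sketch (the choice a = 4 is only an example within the stated range [1, 10], and rounding to an integer is an assumption, since a node count must be whole):

```python
import math

def hidden_nodes(n, m, a=4):
    """Empirical sizing rule h = sqrt(n + m) + a, rounded to an integer,
    where n = middle-output-layer nodes, m = input nodes, a in [1, 10]."""
    return round(math.sqrt(n + m)) + a
```

For example, with the paper's m = 9 inputs and, say, n = 5 rules, this gives h = round(sqrt(14)) + 4 = 8 hidden nodes.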

Fig. 2 and Fig. 3 contrast the simulation results of the three models, showing training errors and test errors, respectively. The models are the improved BP network (Improved BP) [13], the multi-input fuzzy neural network (MFNN) [14] and the NN-SC. Compared with the improved BP network and the MFNN, the NN-SC proposed in this paper clearly gives better performance, with higher precision, lower error and better generalization. Table III lists the values of the errors and error ratios.

Because the conditions of the bidding system are never entirely the same as in the past, the test data used to estimate the mark-up may differ from the training data. So the generalization ability of the neural network is especially important in practice, as it determines the precision of the estimation. It can be observed from Fig. 3 that the NN-SC has better generalization ability. Table III gives the final results of the whole procedure for the different networks. When the training procedure finished (200 epochs), the test error of the NN-SC had dropped to 0.0092, while the test errors of the improved BP and the MFNN were 0.0730 and 0.0509, respectively.

Table I
Mark-up Factors, Definitions and Value Ranges

No. | Mark-up factor | Definition | Value range
1 | Market conditions (MC) | Current construction market | Good=1; Medium=0.5; Bad=0
2 | Number of competitors (NoC) | Number of bidders on the project | Maximum=20; Minimum=1
3 | Startup capital (SC) | Estimated dollar volume of the project | Dollars ($)
4 | Overhead rate (OR) | What overhead rate is required? | Percentage
5 | Labor availability (LA) | Is local labor readily available? | With=1; Without=0
6 | Location (L) | Is the project within company boundaries? | Yes=1; No=0
7 | Project complexity (PC) | What is the degree of complexity of the project? | High=1; Medium=0.5; Low=0
8 | Time limit for a project (TLP) | Time of the project compared with the bid book | Ahead=1; Equal=0.5; Delayed=0
9 | Construct condition (CC) | Exterior environment for construction | Good=1; Medium=0.5; Bad=0

Table II
Parts of the Input-output Data (ten patterns)

No. | MC | NoC | SC | OR | LA | L | PC | TLP | CC | MP
1 | 0.5 | 4 | 0.1000 | 3.7 | 0.5 | 1 | 1.0 | 0.3 | 0.6 | 6.4
2 | 0.0 | 9 | 0.0873 | 6.3 | 1.0 | 1 | 0.5 | 0.8 | 0.5 | 7.6
3 | 1.0 | 7 | 0.0726 | 4.5 | 0.5 | 1 | 0.0 | 0.6 | 0.6 | 8.0
4 | 0.5 | 6 | 0.0781 | 5.0 | 0.5 | 1 | 0.5 | 0.4 | 0.6 | 7.4
5 | 1.0 | 4 | 0.0388 | 5.2 | 1.0 | 1 | 0.5 | 0.8 | 0.5 | 7.2
6 | 0.5 | 5 | 0.1140 | 6.3 | 1.0 | 1 | 0.5 | 1.0 | 0.5 | 8.3
7 | 1.0 | 8 | 0.0510 | 4.2 | 0.5 | 1 | 1.0 | 0.7 | 0.5 | 6.8
8 | 1.0 | 6 | 0.0760 | 5.2 | 1.0 | 0 | 1.0 | 0.6 | 0.4 | 7.6
9 | 0.5 | 7 | 0.0897 | 4.8 | 0.5 | 1 | 0.5 | 0.5 | 0.5 | i.9
10 | 1.0 | 5 | 0.0871 | 4.7 | 0.5 | 1 | 0.5 | 0.4 | 0.5 | 7.1



[Figure omitted: training error (0.02-0.12) vs. epoch (0-200) for the three models: Improved BP, MFNN and NN-SC.]

Fig. 2 Training error of three models

[Figure omitted: test error (0-0.12) vs. epoch (0-200) for the three models: Improved BP, MFNN and NN-SC.]

Fig. 3 Test error of three models

Table III
Comparisons of the three models

Models | Final training error | Final test error | Average error ratio (MP%)
Improved BP | 0.0437 | 0.0730 | 6.47
MFNN | 0.0383 | 0.0509 | 4.14
NN-SC | 0.0067 | 0.0092 | 1.94

Note: Average error ratio = \frac{1}{N} \sum |y^p - \hat{y}^p| / y^p, where N is the number of test patterns, N = 5.
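The average error ratio defined in the note above can be computed directly (an illustrative sketch of the stated formula, not the authors' code):

```python
def avg_error_ratio(y_true, y_pred):
    """Average error ratio: (1/N) * sum(|y_p - yhat_p| / y_p) over N test patterns."""
    return sum(abs(y - yh) / y for y, yh in zip(y_true, y_pred)) / len(y_true)
```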

V. CONCLUSION

A modified neural network architecture based on subtractive clustering has been proposed in this paper and used to estimate the mark-up of a construction bidding system. With subtractive clustering and a new way to determine the degree of applicability of each rule, the proposed neural network has a more compact structure and, consequently, higher computational efficiency. The rulebase and rule inference steps of the fuzzy neural network are still applied in the neural network, so the network has a high degree of transparency. From the simulation results, it is observed that the proposed neural network is superior to the other models in convergence speed, training accuracy and generalization ability. This demonstrates that the modified neural network based on rules and subtractive clustering has high practicability in real engineering.

ACKNOWLEDGMENT

This research is supported by project 60374064 of the National Natural Science Foundation of China. It is also supported by project 50139020 of the National Natural Science Foundation of China. All the support is appreciated.

REFERENCES

[1] Chiafeng Juang and Chinteng Lin, "An On-Line Self-Constructing Neural Fuzzy Inference Network and Its Applications," IEEE Trans. Fuzzy Systems, vol. 6, no. 1, pp. 12-32, 1998.

[2] Chienho Ko and Minyuan Cheng, "Hybrid use of AI techniques in developing construction management tools," Automation in Construction, vol. 12, no. 3, pp. 271-281, 2003.

[3] Lihua Xiong, Asaad Y. Shamseldin and Kieran M. O'Connor, "A non-linear combination of the forecasts of rainfall-runoff models by the first-order Takagi-Sugeno fuzzy system," Journal of Hydrology, vol. 245, pp. 196-217, 2001.

[4] Shiqian Wu, Meng Joo Er, and Yang Gao, "A Fast Approach for Automatic Generation of Fuzzy Rules by Generalized Dynamic Fuzzy Neural Networks," IEEE Trans. Fuzzy Systems, vol. 9, no. 4, pp. 578-594, 2001.

[5] K. Demirli, S.X. Cheng and P. Muthukumaran, "Subtractive clustering based modeling of job sequencing with parametric search," Fuzzy Sets and Systems, vol. 137, no. 2, pp. 235-270, 2003.

[6] J.S.R. Jang, "ANFIS: adaptive-network-based fuzzy inference system," IEEE Trans. Systems, Man, and Cybernetics, vol. 23, pp. 665-685, 1993.

[7] S.L. Chiu, "Fuzzy model identification based on cluster estimation," Journal of Intelligent and Fuzzy Systems, vol. 2, no. 3, pp. 267-278, 1994.

[8] R.N. Dave and R. Krishnapuram, "Robust clustering methods: a unified view," IEEE Trans. Fuzzy Systems, vol. 5, pp. 270-293, 1997.

[9] R. P. Paiva and A. Dourado, "Interpretability and learning in neuro-fuzzy systems," Fuzzy Sets and Systems, vol. 147, no. 1, pp. 17-38, 2004.

[10] H. Li, L. Y. Shen and P. E. D. Love, "ANN-based mark-up estimation system with self-explanatory capacities," Journal of Construction Engineering and Management, vol. 125, no. 3, pp. 185-189, 1999.

[11] Symeon Christodoulou and A.M., "Optimum Bid Markup Calculation Using Neurofuzzy Systems and Multidimensional Risk Analysis Algorithm," Computing in Civil Engineering, vol. 18, no. 4, pp. 322-330, 2004.

[12] Mohammed Wanous, Halim A. Boussabaine and John Lewis, "A neural network bid/no bid model: the case for contractors in Syria," Construction Management and Economics, vol. 21, pp. 737-744, 2003.

[13] Han Min, Lin Yun, et al., "Study of construction bidding system based on neural networks," Journal of Systems Engineering, vol. 18, no. 4, pp. 348-355, 2003. (in Chinese)

[14] Han Min and Sun Yannan, "Multi-Input Fuzzy Neural Network and Its Application," Systems Engineering and Electronics, vol. 25, no. 10, pp. 1249-1252, 2003. (in Chinese)

