IEEE NAFIPS 2005 - 2005 Annual Meeting of the North American Fuzzy Information Processing Society - Detroit, MI, USA (26-28 June 2005)


The Evolution of Neuro-Fuzzy Systems

Detlef D. Nauck

BT, Research and Venturing, Adastral Park, MLB 1/12, Ipswich, IP5 3RE, [email protected]

Abstract-When the popularity of fuzzy systems in the guise of fuzzy controllers began to rise at the beginning of the 1990s, researchers became interested in supporting the development process by an automatic learning process. Just a few years earlier the backpropagation learning rule for multi-layer neural networks had been rediscovered and had triggered a massive new interest in neural networks. The approach of combining fuzzy systems with neural networks into neuro-fuzzy systems therefore was an obvious choice for making fuzzy systems learn. In this paper we briefly recall some milestones in the evolution of neuro-fuzzy systems.

I. INTRODUCTION

The term neuro-fuzzy systems (also neuro-fuzzy methods

or models) refers to combinations of techniques from neural networks and fuzzy systems [1], [2]. This typically does not mean that a neural network and a fuzzy system are used in some kind of combination, but that a fuzzy system is created from data by some kind of (heuristic) learning method that is motivated by learning procedures used in neural networks.

Neuro-fuzzy methods are usually applied if a fuzzy system is required to solve a function approximation problem - or a special case of it like classification or control [3] - and the otherwise manual design process should be supported or replaced by an automatic learning process. The (manual) design of a fuzzy system requires the specification of fuzzy partitions (parameters) for each variable and a set of fuzzy rules (structure). If the fuzzy system does not perform well, structure or parameters or both must be modified accordingly. This can be a very lengthy and error-prone process that is effectively based on trial and error. In order to support this design process, learning techniques based on sample data became a popular research topic at the beginning of the 1990s, when fuzzy systems in the guise of fuzzy controllers first became successful and widely known.

The history of neuro-fuzzy systems can be roughly structured into feed-forward systems for control and function approximation and the later growing interest in classification and clustering problems. More recently, several researchers studied the usability of hierarchical and recurrent architectures.

Andreas Nürnberger
School of Computer Science
Otto-von-Guericke-University of Magdeburg
39106 Magdeburg, Germany
[email protected]

II. FEED-FORWARD ARCHITECTURES

A. Control

Neuro-fuzzy controllers were, together with neuro-fuzzy systems for function approximation, the first neuro-fuzzy approaches. The principles and architectures are similar; the main difference is the learning mechanism. While function approximation models can use supervised learning based on a training set, controllers need to discover a model in a setting where target outputs are not known. Neuro-fuzzy controllers therefore use reinforcement learning and require either a model of, or direct feedback from, the process they are supposed to control.

1) ARIC and GARIC: One of the first neuro-fuzzy controllers was suggested by Berenji in 1992. The ARIC model (Approximate Reasoning-based Intelligent Control) implements a fuzzy controller by using several specialized feed-forward neural networks. The architecture of ARIC is similar to an adaptive critic, a special neural controller learning by reinforcement [4], and it generalizes the neural model of Barto et al. [5] to the domain of fuzzy control. ARIC consists of two neural modules, the ASN (Action Selection Network) and the AEN (Action-state Evaluation Network). The AEN is an adaptive critic that is trained by backpropagation and that evaluates the actions of the ASN.

The ASN itself consists of two feed-forward three-layer neural networks. One network calculates a confidence value that is used to change the output of the second network, which is a direct representation of a fuzzy controller. The input layer represents state variables of a process and the hidden units represent fuzzy rules. Their inputs are the antecedents, and their outputs the consequents of the rules. ARIC assumes that the rule base is known in advance. The output of the control network represents the defuzzified control value of the fuzzy controller. The learning algorithm modifies connection weights in the ASN and thus indirectly the represented fuzzy sets.

Implementing learning by modifying connection weights was a popular approach in early neuro-fuzzy systems, but it was later shown that this leads to problems in interpreting the learning outcome [6].

The ARIC model was later extended to GARIC (Generalised ARIC, Fig. 1) [7], [8], [9]. Like ARIC it consists of an evaluation network (AEN) and an action network (ASN). The ASN does not use any weighted connections; instead, the learning process modifies parameters stored within the units of the network. The second network of the ASN, which produced a confidence measure, no longer exists.

2) NEFCON: NEFCON is a model for neural fuzzy controllers proposed by Nauck in 1994 [10]. It is based on

0-7803-9187-X/05/$20.00 ©2005 IEEE.


Fig. 1. GARIC represents a fuzzy system as a feed-forward network. [8]


Fig. 2. A NEFCON system with two input variables and five rules

the architecture of the generic fuzzy perceptron [2] and implements a Mamdani-type fuzzy system. NEFCON is probably the first neuro-fuzzy system that tries to introduce the notion of interpretability by preventing identical linguistic terms from being represented by more than one membership function.

Like ARIC and GARIC, the learning algorithm for NEFCON is based on the idea of reinforcement learning, but instead of an adaptive critic network it uses a fuzzy rule base to describe a fuzzy error.

Fig. 2 shows a NEFCON system with two input variables, one output variable and five rules. The unit R3, for instance, represents the rule

R3: If ξ1 is μ^(1) and ξ2 is μ^(2) then η is ν2.

The connections in NEFCON are weighted with fuzzy sets instead of real numbers, and some connections always have the same weight (illustrated by ellipses around connections) in order to ensure the integrity of the rule base.

Several learning methods have been proposed for this model; an overview is given in [11]. One major problem of all these methods is that they require at least some prior knowledge of the system to be controlled in order to define the (fuzzy) error signal that is used for learning.

Recently, models have been proposed that try to solve these fundamental problems using hierarchical models and

Fig. 3. ANFIS encodes a Sugeno-type fuzzy model in a feed-forward network structure [17]

Q-learning; see, e.g., [12], [13]. Q-learning has already been used successfully in combination with neural networks in order to control more complex systems; see, e.g., [14].

B. Approximation

Many neuro-fuzzy systems for function approximation are based on Takagi-Sugeno fuzzy systems, because these allow the application of gradient descent learning if differentiable membership functions (e.g. Gaussians) are used.
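To make this concrete, the following is a minimal sketch of a zero-order Takagi-Sugeno system with Gaussian membership functions; the two-rule setup and all parameter values are illustrative assumptions, not taken from any system discussed in this paper. Because every operation is smooth, the output is differentiable in the centres, widths, and consequents, which is what gradient descent exploits.

```python
import numpy as np

def gaussian(x, c, s):
    """Gaussian membership degree of x for a fuzzy set with centre c, width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_output(x, rules):
    """Zero-order Takagi-Sugeno output: weighted average of the constant
    consequents, weighted by the rule firing degrees.
    rules = [(centre, width, consequent), ...] -- illustrative layout."""
    degrees = np.array([gaussian(x, c, s) for c, s, _ in rules])
    consequents = np.array([b for _, _, b in rules])
    return float(np.sum(degrees * consequents) / np.sum(degrees))

rules = [(0.0, 1.0, -1.0),   # "x is roughly 0 -> y = -1"
         (2.0, 1.0,  1.0)]   # "x is roughly 2 -> y = +1"
y = ts_output(1.0, rules)    # halfway between the centres, so the consequents cancel
```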

1) ANFIS: One of the first and still one of the most popular neuro-fuzzy systems is Jang's ANFIS model proposed in 1991 [15], [16], [17], [1]. ANFIS (adaptive network-based fuzzy inference system) is a neuro-fuzzy method to determine the parameters of a Sugeno-type fuzzy model which is represented as a special feed-forward network (see Fig. 3). It encodes fuzzy rules of the form

Rr: If x1 is μ1^(r) and ... and xn is μn^(r) then y = a0^(r) + a1^(r) x1 + ... + an^(r) xn.

Each node of the first layer is connected to exactly one of the n input variables and stores the three parameters of a membership function. The k nodes in the second layer represent the antecedents of the fuzzy rules. They compute the degree of fulfillment by multiplying the degrees of membership. The k nodes of the third layer compute the relative degree of fulfillment for each rule. The output values of the rules are computed by the k nodes of layer 4, which store the consequent parameters. Each node in this layer is connected (not drawn in Fig. 3) to one node of layer 3 and to all input variables. The output node in layer 5 computes the overall output value. If the model must compute m > 1 output values, then there are m output nodes and mk nodes in layer 4.

ANFIS uses only differentiable functions, and therefore it is easy to apply standard gradient descent learning procedures from neural network theory. For ANFIS a mixture of backpropagation (BP) and least squares estimation (LSE) is suggested by Jang [17]. BP is used to compute the updates of the antecedent parameters, i.e. the parameters of the fuzzy sets, and LSE is used to determine the updates for the consequent parameters, i.e. the coefficients of the linear combinations in the consequents.

ANFIS does not learn the structure of the fuzzy system, but simply creates rules from all possible combinations of input fuzzy sets. Initial fuzzy partitions have to be specified.
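The layer-by-layer computation and the LSE half of the hybrid learning rule can be sketched as follows. This is an illustrative toy setup (n = 2 inputs, 2 bell-shaped fuzzy sets per input, hence k = 4 rules from all combinations); the membership parameters, the linear target and the data are invented for the demo, not Jang's original values.

```python
import numpy as np

def bell(x, a, b, c):
    """Generalised bell membership function with parameters a, b, c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

MF = [[(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)],   # layer 1: fuzzy sets for x1
      [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]]   # layer 1: fuzzy sets for x2

def normalised_strengths(x1, x2):
    m1 = [bell(x1, *p) for p in MF[0]]                                 # layer 1
    m2 = [bell(x2, *p) for p in MF[1]]
    w = np.array([m1[i] * m2[j] for i in range(2) for j in range(2)])  # layer 2
    return w / w.sum()                                                 # layer 3

def anfis_out(x1, x2, conseq):
    wn = normalised_strengths(x1, x2)
    f = conseq @ np.array([1.0, x1, x2])   # layer 4: a0 + a1*x1 + a2*x2 per rule
    return float(wn @ f)                   # layer 5: sum of weighted rule outputs

# LSE step of the hybrid rule: with the antecedents fixed, the output is
# linear in the 3*k consequent coefficients, so least squares recovers them.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(50, 2))
t = 1.0 + 2.0 * X[:, 0] - X[:, 1]          # a linear target, exactly representable
rows = []
for x1, x2 in X:
    wn = normalised_strengths(x1, x2)
    rows.append(np.concatenate([wn_r * np.array([1.0, x1, x2]) for wn_r in wn]))
coef, *_ = np.linalg.lstsq(np.array(rows), t, rcond=None)
conseq = coef.reshape(4, 3)
```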



Fig. 4. Structure of an NNDFR system [19]

Fig. 5. The architecture of a FuNe-I system (input layer; 2nd layer: representation of fuzzy sets by sigmoid functions; 3rd layer: combinations of sigmoid functions to build fuzzy set "medium"; 4th layer: conjunctive, disjunctive and simple rules; output layer)

The consequent parameters are initialised with small random numbers.

2) NEFPROX: NEFPROX [18] is, like NEFCON, based on a generic fuzzy perceptron and implements a Mamdani-type fuzzy system. Mamdani-type systems are rarely used for function approximation purposes, because it is easier to train a Takagi-Sugeno-type system. NEFPROX has learning algorithms for structure learning and parameter learning, but it is typically less accurate than ANFIS.

C. Classification

Neuro-fuzzy classification systems became more popular in the second half of the 1990s. They are a special case of function approximators and their output is typically a fuzzy classification of a pattern, i.e. a vector of membership degrees that indicates membership in different classes. With the rising interest in data mining, fuzzy classifiers became more and more important in the fuzzy system community.

1) The NNDFR Model: A very interesting and atypical neuro-fuzzy model from 1991 is the NNDFR model (Neural Network Driven Fuzzy Reasoning) by Takagi and Hayashi [19], which was developed around the same time as ANFIS. NNDFR is based on common neural networks that are structured by fuzzy system techniques. Its main purpose is classification.

An NNDFR system is based on n input variables x1, ..., xn, an output variable y and k fuzzy rules R1, ..., Rk. It consists of k + 1 multi-layer feed-forward neural networks trained by backpropagation and representing the fuzzy rules (Fig. 4). The parameters of a fuzzy system cannot be extracted from the trained system, so strictly speaking we would not consider it a neuro-fuzzy system. However, the structure of the partial networks and the interpretation of the outputs are motivated by fuzzy system techniques.

The linguistic rules used by the NNDFR model are of the

form Rr: If (x1, ..., xn) is Ar then y = ur(x1, ..., xn). This is not the usual form of linguistic rules used in fuzzy systems and is caused by the purely neural architecture. Ar is an n-dimensional membership function; there is no combination of single membership values. In an NNDFR system the neural network NNmem provides for each rule Rr a value wr that is interpreted as its degree of fulfillment.

The functions u1, ..., uk are implemented by k neural networks NN1, ..., NNk that determine the output values of the rules R1, ..., Rk. The overall system output is given by y = (Σ_{r=1}^{k} wr ur(x1, ..., xn)) / (Σ_{r=1}^{k} wr).
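A sketch of this combination scheme follows, with plain functions standing in for the trained networks NNmem and NN1, ..., NNk; the prototype regions and the rule outputs are invented purely for illustration.

```python
import numpy as np

def nn_mem(x):
    """Stand-in for NNmem: one degree of fulfilment per rule, here a
    squashed distance to two illustrative prototype regions."""
    protos = np.array([[0.0, 0.0], [1.0, 1.0]])
    d = np.linalg.norm(protos - x, axis=1)
    return np.exp(-d ** 2)

def nn_rules(x):
    """Stand-ins for NN1..NNk: each maps the full input vector to a rule output."""
    return np.array([x.sum(), 2.0 - x.sum()])

def nndfr_output(x):
    # y = sum_r w_r * u_r(x) / sum_r w_r
    w = nn_mem(x)
    u = nn_rules(x)
    return float(np.sum(w * u) / np.sum(w))
```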

2) FuNe-I: The neuro-fuzzy model FuNe-I [20], [21], proposed in 1992, is based on the architecture of a feed-forward neural network (Fig. 5). The network has five layers. The first layer contains a unit for each input variable and propagates the input values unchanged via weighted links to the second layer. This layer consists of units with sigmoid activation functions that are used to create membership functions. The third layer contains specialized units that are only used to represent fuzzy sets that do not touch the domain boundaries (see below). The units of the second and third layer propagate their activations via unweighted links to the fourth layer. Units from the second layer that have connections to the third layer are not connected to the fourth layer.

The fourth layer consists of units that represent fuzzy rules. Compared to other neuro-fuzzy approaches, the FuNe-I model is special because it uses three kinds of rules: the antecedents can be conjunctions or disjunctions, and there are rules with only one variable as antecedent (simple rules). A unit computes its activation - depending on the kind of rule it represents - by either a differentiable soft minimum, a differentiable soft maximum, or the identity function.

The fifth layer contains the output units, which compute their input by a weighted sum and their activation by a sigmoid function. The FuNe-I model provides algorithms for structure and parameter learning and is one of the first neuro-fuzzy approaches that also considers rule learning.
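A differentiable soft minimum and soft maximum can be realised, for example, with exponential weighting. This particular form is a common choice and an assumption here; FuNe-I's exact formulas may differ [20]. For large q the functions approach the hard min/max while staying smooth, so gradients can flow through the rule units.

```python
import numpy as np

def soft_min(a, q=10.0):
    """Smooth approximation of min: exponentially down-weights large values."""
    a = np.asarray(a, dtype=float)
    w = np.exp(-q * a)
    return float(np.sum(a * w) / np.sum(w))

def soft_max(a, q=10.0):
    """Smooth approximation of max: exponentially up-weights large values."""
    a = np.asarray(a, dtype=float)
    w = np.exp(q * a)
    return float(np.sum(a * w) / np.sum(w))
```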

FuNe-I was extended in 1994 to FuNe-II, which can be used for fuzzy control problems. In a FuNe-II network a new output layer is created that is connected to the previous output layer. On the connections, discrete samples of fuzzy sets are stored to represent control values. The activations of the new output units represent support points of a fuzzy set that must be defuzzified to obtain the final control value [22], [21].

3) NEFCLASS: NEFCLASS [23], [24], [25], proposed in 1995, is probably the first neuro-fuzzy approach that was able to handle missing values and both numeric and symbolic data in the same data set, and to determine a rule base fully automatically. NEFCLASS is also based on the idea of a generic fuzzy




perceptron and focuses on creating small interpretable fuzzy rule bases.

The learning algorithm of NEFCLASS has two stages: structure learning and parameter learning. Rule (structure) learning is done by a variation of the approach by Wang and Mendel [26], which was extended to also cover symbolic patterns [27] and to use a rule performance measure for rule selection. In parameter learning the fuzzy sets are tuned by a simple backpropagation-like procedure that is based on a simple heuristic instead of a gradient descent approach. After learning, NEFCLASS uses pruning strategies to reduce the number of rules as much as possible.
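Wang-Mendel style rule generation can be sketched as follows: for each training pattern, pick the best-matching fuzzy set per variable and keep one rule per antecedent combination. The triangular three-term partition, the majority-class consequent selection, and the toy data are simplifying assumptions; NEFCLASS's actual procedure additionally uses a rule performance measure [26], [27].

```python
def tri(x, l, c, r):
    """Triangular membership with left foot l, centre c, right foot r."""
    if x <= l or x >= r:
        return 0.0
    return (x - l) / (c - l) if x <= c else (r - x) / (r - c)

# illustrative "small / medium / large" partition on [0, 1] for every variable
PART = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]

def best_term(x):
    """Index of the fuzzy set with the highest membership degree for x."""
    return max(range(3), key=lambda i: tri(x, *PART[i]))

def learn_rules(X, labels):
    """One rule per antecedent combination; consequent = majority class."""
    counts = {}
    for x, c in zip(X, labels):
        ante = tuple(best_term(v) for v in x)
        counts.setdefault(ante, {}).setdefault(c, 0)
        counts[ante][c] += 1
    return {a: max(cs, key=cs.get) for a, cs in counts.items()}

X = [[0.1, 0.1], [0.9, 0.9], [0.05, 0.15]]
labels = [0, 1, 0]
rules = learn_rules(X, labels)   # {(0, 0): 0, (2, 2): 1}
```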

4) Fuzzy RuleNet: Fuzzy RuleNet [28], which was also proposed in 1995, is a neuro-fuzzy approach that is based on the structure of a radial basis function (RBF) network. It is an extension of the RuleNet model, a special neural network that can be seen as a variant of an RBF network [29], [30]. Instead of the usual radial basis functions - which represent hyperellipsoids - RuleNet uses hyperboxes for classification.

Fuzzy RuleNet allows hyperboxes to overlap. Each hyperbox represents a multidimensional fuzzy set given by a membership function in the form of a hyperpyramid. By projecting the multidimensional fuzzy sets onto the individual dimensions we obtain triangular or trapezoidal fuzzy sets that describe the pattern features. The fuzzy classification rules obtained this way are equivalent to the multidimensional fuzzy sets, i.e. there is no loss of information as there would be in the case of the hyperellipsoids used in fuzzy cluster analysis. The hyperboxes are created in a single cycle through the data. The learning algorithm adjusts the sizes of the hyperboxes by extending them to cover new data or shrinking them in case of conflicts. This way a rule base and the parameters (fuzzy sets) are created in a single loop.
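The hyperbox idea can be sketched as follows: full membership on a core box, a linear ramp to zero in a margin around it (so the per-dimension projections are trapezoids), plus the expand-to-cover step. The fixed margin width and the expansion-only learning are simplifying assumptions; Fuzzy RuleNet's actual algorithm also shrinks boxes in case of conflicts [28].

```python
import numpy as np

class Hyperbox:
    """A fuzzy hyperbox rule: membership 1 inside [lo, hi], falling
    linearly to 0 within a margin around the box."""

    def __init__(self, lo, hi, margin=0.5):
        self.lo = np.array(lo, dtype=float)
        self.hi = np.array(hi, dtype=float)
        self.margin = margin

    def membership(self, x):
        x = np.asarray(x, dtype=float)
        # per dimension: 1 inside the box, linear ramp of width `margin` outside
        below = np.clip(1.0 - (self.lo - x) / self.margin, 0.0, 1.0)
        above = np.clip(1.0 - (x - self.hi) / self.margin, 0.0, 1.0)
        # minimum over dimensions yields the hyperpyramid shape
        return float(np.min(np.minimum(below, above)))

    def expand(self, x):
        """Grow the core box so that pattern x gets full membership."""
        x = np.asarray(x, dtype=float)
        self.lo = np.minimum(self.lo, x)
        self.hi = np.maximum(self.hi, x)
```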

If a Fuzzy RuleNet is used for classification, it computes its output by a winner-takes-all procedure to find the class of a given input pattern. It is also possible to adjust the definition such that the outputs are computed by a weighted sum. This way Fuzzy RuleNet can be used for function approximation.

Approaches similar to Fuzzy RuleNet are sometimes called hyperbox-oriented rule learners and were known as early as 1992 [31], [32]. Newer variations are also called fuzzy graphs [33], [34]. The idea is always to cover a set of data with hyperboxes and to connect each hyperbox with an output value. Hyperbox-oriented fuzzy rule learning can create solutions for benchmark problems in pattern recognition or function approximation very fast. If there are no contradictions in the training patterns and if there is only one output variable, then hyperbox-oriented learning algorithms can create solutions with no errors on the training data. In the worst case this leads to a situation where each training pattern is covered by its own hyperbox.

III. RECURRENT SYSTEMS

In contrast to pure feed-forward architectures, which have a static input-output behavior, recurrent models are able to store information about the past, e.g. prior system states, and can be


input variables: x, y; output variable: z; inner variable: u

R1: If x(t) is large and z(t-1) is large then u(t) is small
R2: If x(t) is small and y(t) is large then u(t) is zero
R3: If y(t) is small then u(t) is large
R4: If z(t-1) is large then z(t) is small
R5: If u(t) is small and y(t) is large then z(t) is zero
R6: If y(t) is small then z(t) is large

Fig. 6. Example of a simple hierarchical recurrent rule base consisting of two subsystems. The output of the system is reused by each subsystem as time-delayed input.

thus more appropriate for the analysis of dynamic systems (see, for example, discussions concerning the approximation and emulation capabilities of recurrent neural networks [35], [36], [37]). If pure feed-forward architectures are applied to these types of problems, e.g. the prediction of time series data or of physical systems, the available system data usually has to be preprocessed or restructured to map the dynamic information appropriately, e.g. by using a vector of prior system states as additional input. If we apply a fuzzy system, this may lead to an exponential increase in the number of parameters - if we want to cover the whole system state space - that soon becomes intractable.

Recurrent neuro-fuzzy systems (RNFSs) can be constructed in the same way as discussed above for feed-forward neuro-fuzzy systems, i.e. they are based either on a recurrent fuzzy system or on a recurrent neural network structure. However, the design and the optimization of (hierarchical) recurrent systems is, due to the dynamics introduced by the feedback connections, more difficult than that of feed-forward systems. In Fig. 6 an example of a hierarchical RNFS is shown.
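The feedback principle can be sketched with a toy recurrent rule base in which one antecedent refers to the previous output z(t-1), fed back as an extra input at each step. The fuzzy sets, rules, and the weighted-average defuzzification here are all invented for illustration and far simpler than the hierarchical system of Fig. 6.

```python
import numpy as np

def large(v):   # membership of "large" on [0, 1]
    return np.clip(v, 0.0, 1.0)

def small(v):   # membership of "small" on [0, 1]
    return np.clip(1.0 - v, 0.0, 1.0)

def step(x_t, z_prev):
    """One inference step; z_prev is the fed-back output z(t-1)."""
    # R1: if x(t) is large and z(t-1) is small then z(t) is large (-> 1)
    # R2: if x(t) is small                     then z(t) is small (-> 0)
    # R3: if z(t-1) is large                   then z(t) is small (-> 0)
    w = np.array([min(large(x_t), small(z_prev)), small(x_t), large(z_prev)])
    c = np.array([1.0, 0.0, 0.0])
    return float(w @ c / w.sum())

def run(xs, z0=0.0):
    """Iterate the system over an input sequence, threading the state z."""
    z, out = z0, []
    for x in xs:
        z = step(x, z)
        out.append(z)
    return out
```

Note the dynamic behaviour: a constant input x(t) = 1 makes the output alternate, because R3 suppresses z(t) whenever the previous output was large; a static feed-forward system could never produce two different outputs for the same input.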

Probably the first recurrent fuzzy system that was combined with a (neural network motivated) learning method was proposed by Gorrini and Bersini in 1994 [38]. The proposed system is a Takagi-Sugeno-like fuzzy system and uses fuzzy rules with constant consequents. The internal variables of this system may be defined manually if the designer has sufficient knowledge of the system that should be modeled. No learning method for the rule base itself was proposed except to initialize the rule base randomly. However, the authors proposed a learning approach to optimize the parameters of a recurrent rule base, which was motivated by the real-time recurrent learning algorithm [39]. According to Gorrini and Bersini, the results for the approximation of a third-order non-linear system for a given rule base were comparable to the approximation by a recurrent neural network. Unfortunately, a detailed discussion of the results was not given. Furthermore, the model had some shortcomings. First of all, the structure has to be



Fig. 7. Fuzzy neural network with simple feedback units as proposed by Lee and Teng

defined manually, since no learning methods for the rule base have been proposed. Furthermore, the learning is restricted to symmetric triangular fuzzy sets, and interpretability is not ensured, since the fuzzy sets are modified independently during learning. However, an extension to arbitrary (differentiable) fuzzy sets is easily possible.

Surprisingly, after this first model, for some time not much work was published on recurrent systems that are also able to learn the rule base itself. Most likely the first models that were successfully applied to control - which, however, do not implement generic hierarchical recurrent models as described above - were proposed by Theocharis and Vachtsevanos in 1996 [40], Zhang and Morris in 1999 [41] and Lee and Teng in 2000 [42]. For example, Lee and Teng proposed a fuzzy neural network which implements a modified Takagi-Sugeno-like fuzzy system with Gaussian-like membership functions in the antecedents and constant consequents. However, this model does not implement a fully recurrent system as shown in Fig. 6; instead, the authors restricted themselves to integrating feedback connections in the membership layer, as depicted in Fig. 7.

Approaches to learn hierarchical recurrent fuzzy systems were presented in 2001 by Surmann and Maniadakis [43], who used a genetic algorithm, and Nürnberger [44], who proposed a template-based approach to learn a structured rule base and a gradient descent based method motivated by the real-time recurrent learning algorithm [39] to optimize the parameters of the learned rule base. The interpretability of the fuzzy sets of this model is ensured by the use of coupled weights in the consequents (fuzzy sets that are assigned to the same linguistic terms share their parameters) and in the antecedents. Furthermore, constraints can be defined which have to be observed by the learning method, e.g. that the fuzzy sets have to cover the considered state space. An example of the network structure is given in Fig. 8. However, the template

Fig. 8. Possible structure of the recurrent neuro-fuzzy system proposed by Nürnberger in 2001 (using one time-delayed and one hierarchical feedback). The first row of neurons defines the input variables, the second row the membership functions of the antecedents, the third row the fuzzy rules, and the fourth row the output variables. The membership functions of the consequents that are shared by rules are represented by coupled links from the rule to the output layer.

based learning approach still had shortcomings due to the use of a heuristic that created inner fuzzy sets. Therefore, in [45] a slightly modified approach was proposed that learns the rule base using a genetic algorithm.

Further, more recent models have been proposed, e.g. [46], [47], that tackle specific problems of the learning process or specific applications.

IV. OUTLOOK

Starting with neural-network-oriented architectures like ARIC and NNDFR, neuro-fuzzy systems quickly developed into network representations of fuzzy systems, as we can see in ANFIS and NEFCON. In the second half of the 1990s we saw a lot of specific architectures for approximation, classification and control, with which neuro-fuzzy systems covered a broad range of problems.

Currently, there is still research to do in the area of adaptive control, where the combination of reinforcement learning methods with neuro-fuzzy architectures has recently made a lot of progress.

The same holds for applications in classification, which became more and more important with the growing interest in data mining. In this area, questions of how to completely automate the learning process and how to guarantee a certain level of interpretability remain important issues.

V. REMARKS

We would like to apologise to all researchers we did not mention in this - for such a broad topic - short article. Mentioning everybody would have been impossible. We only tried to mark the major developments in this area and may have missed some that others would consider major contributions.



REFERENCES

[1] J.-S. Roger Jang and Eiji Mizutani, "Levenberg-Marquardt method for ANFIS learning," in Proc. Biennial Conference of the North American Fuzzy Information Processing Society NAFIPS'96, Berkeley, CA, 1996, pp. 87-91, IEEE.

[2] Detlef Nauck, Frank Klawonn, and Rudolf Kruse, Foundations of Neuro-Fuzzy Systems, Wiley, Chichester, 1997.

[3] Witold Pedrycz, A. Kandel, and Yan-Qing Zhang, "Neurofuzzy systems," in Fuzzy Systems Modeling and Control, Hung T. Nguyen and Michio Sugeno, Eds., The Handbooks on Fuzzy Sets, chapter 9, pp. 311-380. Kluwer Academic Publishers, Norwell, MA, 1998.

[4] David A. White and Donald A. Sofge, Eds., Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, Van Nostrand Reinhold, New York, 1992.

[5] Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson, "Neuronlike adaptive elements that can solve difficult learning control problems," IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, pp. 834-846, 1983.

[6] Detlef Nauck, "Adaptive rule weights in neuro-fuzzy systems," Neural Computing & Applications, vol. 9, no. 1, pp. 60-70, 2000.

[7] Hamid R. Berenji and Pratap Khedkar, "Fuzzy rules for guiding reinforcement learning," in Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'92), Mallorca, jul 1992, pp. 511-514.

[8] Hamid R. Berenji and Pratap Khedkar, "Learning and tuning fuzzy logic controllers through reinforcements," IEEE Transactions on Neural Networks, vol. 3, pp. 724-740, sep 1992.

[9] Hamid R. Berenji, Robert N. Lea, Yashvant Jani, Pratap Khedkar, Anil Malkani, and Jeffrey Hoblit, "Space shuttle attitude control by reinforcement learning and fuzzy logic," in Proc. IEEE Int. Conf. on Neural Networks 1993, San Francisco, mar 1993, pp. 1396-1401.

[10] Detlef Nauck and Rudolf Kruse, "NEFCON-I: An X-Window based simulator for neural fuzzy controllers," in Proc. IEEE Int. Conf. Neural Networks 1994 at IEEE WCCI'94, Orlando, FL, 1994, pp. 1638-1643.

[11] Andreas Nürnberger, Detlef Nauck, and Rudolf Kruse, "Neuro-fuzzy control based on the NEFCON model," Soft Computing, vol. 2, no. 4, pp. 182-186, feb 1999.

[12] Priscila Dias, Karla Figueiredo, Marley Vellasco, Marco Aurélio Pacheco, and Carlos H. Barbosa, "Reinforcement learning hierarchical neuro-fuzzy model for autonomous robots," in Proc. of 2nd Int. Conf. on Autonomous Robots and Agents (ICARA 2004), Palmerston North, New Zealand, 2004, pp. 107-112.

[13] Karla Figueiredo, Marley Vellasco, Marco Pacheco, and Flavio Souza, "Reinforcement learning hierarchical neuro-fuzzy politree model for control of autonomous agents," in Proc. of 4th Int. Conf. on Hybrid Intelligent Systems (HIS'04), Kitakyushu, Japan, 2004, pp. 107-112.

[14] Martin Riedmiller, Selbständig lernende neuronale Steuerungen, VDI-Verlag GmbH, Düsseldorf, 1997.

[15] J.-S. Roger Jang, "Fuzzy modeling using generalized neural networks and Kalman filter algorithm," in Proc. Ninth National Conf. on Artificial Intelligence (AAAI-91), jul 1991, pp. 762-767.

[16] J.-S. Roger Jang, "Self-learning fuzzy controller based on temporal back-propagation," IEEE Transactions on Neural Networks, vol. 3, pp. 714-723, 1992.

[17] J.-S. Roger Jang, "ANFIS: Adaptive-network-based fuzzy inference system," IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, pp. 665-685, 1993.

[18] Detlef Nauck and Rudolf Kruse, "Neuro-fuzzy systems for function approximation," Fuzzy Sets and Systems, vol. 101, pp. 261-271, 1999.

[19] Hideyuki Takagi and Isao Hayashi, "NN-driven fuzzy reasoning," Int. J. Approximate Reasoning, vol. 5, pp. 191-212, 1991.

[20] Saman K. Halgamuge and Manfred Glesner, "A fuzzy-neural approach for pattern classification with the generation of rules based on supervised learning," in Proc. Neuro-Nimes 92, Nanterre, 1992, pp. 167-173.

[21] Saman K. Halgamuge and Manfred Glesner, "Neural networks in designing fuzzy systems for real world applications," Fuzzy Sets and Systems, vol. 65, pp. 1-12, 1994.

[22] Saman K. Halgamuge, Advanced Methods for Fusion of Fuzzy Systems and Neural Networks in Intelligent Data Processing, Ph.D. thesis, Technische Hochschule Darmstadt, 1995.

[23] Detlef Nauck and Rudolf Kruse, "NEFCLASS - a neuro-fuzzy approach for the classification of data," in Applied Computing 1995. Proc. 1995 ACM Symposium on Applied Computing, Nashville, Feb. 26-28, K. M. George, Janice H. Carrol, Ed Deaton, Dave Oppenheim, and Jim Hightower, Eds., pp. 461-465. ACM Press, New York, feb 1995.

[24] Detlef Nauck and Rudolf Kruse, "A neuro-fuzzy method to learn fuzzy classification rules from data," Fuzzy Sets and Systems, vol. 89, pp. 277-288, 1997.

[25] Detlef Nauck, "Fuzzy data analysis with NEFCLASS," Int. J. Approximate Reasoning, vol. 32, pp. 103-130, 2003.

[26] Li-Xin Wang and Jerry M. Mendel, "Generating fuzzy rules by learning from examples," IEEE Trans. Syst., Man, Cybern., vol. 22, no. 6, pp. 1414-1427, 1992.

[27] Detlef Nauck, "Neuro-fuzzy learning with symbolic and numeric data," Soft Computing, vol. 8, no. 6, pp. 383-396, 2004.

[28] N. Tschichold-Gürman, "Generation and improvement of fuzzy classifiers with incremental learning using Fuzzy RuleNet," in Applied Computing 1995. Proc. 1995 ACM Symposium on Applied Computing, Nashville, Feb. 26-28, K. M. George, Janice H. Carrol, Ed Deaton, Dave Oppenheim, and Jim Hightower, Eds., pp. 466-470. ACM Press, New York, feb 1995.

[29] V. Dabija and N. Tschichold-Gürman, "A framework for combining symbolic and connectionist learning with equivalent concept descriptions," in Proc. 1993 Int. Joint Conf. on Neural Networks (IJCNN-93), Nagoya, 1993.

[30] N. Tschichold-Gürman, RuleNet - A New Knowledge-Based Artificial Neural Network Model with Application Examples in Robotics, Ph.D. thesis, ETH Zurich, Zurich, 1996.

[31] P. K. Simpson, "Fuzzy min-max neural networks - part 1: Classification," IEEE Transactions on Neural Networks, vol. 3, pp. 776-786, 1992.

[32] P. K. Simpson, "Fuzzy min-max neural networks - part 2: Clustering," IEEE Transactions on Fuzzy Systems, vol. 1, pp. 32-45, feb 1992.

[33] Michael Berthold and Klaus-Peter Huber, "Constructing fuzzy graphs from examples," Int. J. Intelligent Data Analysis, vol. 3, no. 1, 1999. Electronic journal (http://www.elsevier.com/locate/ida).

[34] Michael Berthold, "Mixed fuzzy rule formation," Int. J. Approximate Reasoning, vol. 32, pp. 67-84, 2003.

[35] F. J. Pineda, "Recurrent backpropagation networks," in Backpropagation: Theory, Architectures and Applications, Y. Chauvin and David E. Rumelhart, Eds., pp. 99-135. Lawrence Erlbaum Associates, Hillsdale, NJ, 1995.

[36] Larry R. Medsker and Lakhmi C. Jain, Recurrent Neural Networks: Design and Applications, CRC Press, 1999.

[37] Stefan Wermter, "Neural fuzzy preference integration using neural preference Moore machines," International Journal of Neural Systems, vol. 10, no. 4, pp. 287-310, 2000.

[38] V. Gorrini and Hugues Bersini, "Recurrent fuzzy systems," in Proc. of the 3rd Conference on Fuzzy Systems (FUZZ-IEEE 94), Orlando, 1994, IEEE.

[39] R. J. Williams and D. Zipser, "Experimental analysis of the real time recurrent learning algorithm," Connection Science, vol. 1, pp. 87-111, 1989.

[40] J. B. Theocharis and G. Vachtsevanos, "Recursive learning algorithms for training fuzzy recurrent models," International Journal of Intelligent Systems, vol. 11, no. 12, pp. 1059-1098, 1996.

[41] J. Zhang and A. J. Morris, "Recurrent neuro-fuzzy networks for nonlinear process modelling," IEEE Transactions on Neural Networks, vol. 10, no. 2, pp. 313-326, 1999.

[42] Ching-Hung Lee and Ching-Cheng Teng, "Identification and control of dynamic systems using recurrent fuzzy neural networks," IEEE Transactions on Fuzzy Systems, vol. 8, no. 4, pp. 349-366, aug 2000.

[43] Hartmut Surmann and Michail Maniadakis, "Learning feed-forward and recurrent fuzzy systems: A genetic approach," Journal of Systems Architecture, vol. 47, no. 7, pp. 649-662, 2001.

[44] Andreas Nürnberger, "A hierarchical recurrent neuro-fuzzy system," in Proc. of Joint 9th IFSA World Congress and 20th NAFIPS International Conference, 2001, pp. 1407-1412, IEEE.

[45] Andreas Nürnberger, "Approximation of dynamic systems using recurrent neuro-fuzzy techniques," Soft Computing, vol. 8, no. 6, pp. 428-442, 2004.

[46] Chih-Min Lin and Chun-Fei Hsu, "Identification of dynamic systems using recurrent fuzzy neural network," in Proc. of Joint 9th IFSA World Congress and 20th NAFIPS International Conference, 2001, pp. 2671-2675.

[47] Jeen-Shing Wang and C. S. G. Lee, "Self-adaptive recurrent neuro-fuzzy control of an autonomous underwater vehicle," IEEE Transactions on Robotics and Automation, vol. 19, no. 2, pp. 283-295, 2003.


