Information Sciences 180 (2010) 1434–1457
Contents lists available at ScienceDirect
Information Sciences
journal homepage: www.elsevier.com/locate/ins
Artificial neural network approach for solving fuzzy differential equations
Sohrab Effati a,*, Morteza Pakdaman b
a Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
b Sama Organization (affiliated with Islamic Azad University), Mashhad Branch, Mashhad, Iran
Article history: Received 23 May 2007; received in revised form 19 December 2009; accepted 21 December 2009
Keywords: Fuzzy differential equations; Fuzzy Cauchy problem; Artificial neural networks
0020-0255/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2009.12.016
* Corresponding author. E-mail addresses: [email protected] (S. Effati), [email protected] (M. Pakdaman).
Abstract. The current research attempts to offer a novel method for solving fuzzy differential equations with initial conditions, based on the use of feed-forward neural networks. First, the fuzzy differential equation is replaced by a system of ordinary differential equations. A trial solution of this system is written as the sum of two parts. The first part satisfies the initial condition and contains no adjustable parameters. The second part involves a feed-forward neural network containing adjustable parameters (the weights). Hence, by construction, the initial condition is satisfied and the network is trained to satisfy the differential equations. Comparison with existing numerical methods shows that the use of neural networks provides solutions with good generalization and high accuracy. The proposed method is illustrated by several examples.
© 2009 Elsevier Inc. All rights reserved.
1. Introduction
Uncertainty is an attribute of information [28], and the use of fuzzy differential equations (FDEs) is a natural way to model dynamic systems with embedded uncertainty. Many practical problems can be modeled as FDEs (e.g. [5,8] and Section 3.2). The method of fuzzy mapping was initially introduced by Chang and Zadeh [10]. Later, Dubois and Prade [11] presented a form of elementary fuzzy calculus based on the extension principle [27]. Puri and Ralescu [23] suggested two definitions for the fuzzy derivative of fuzzy functions. The first, based on the H-difference notation, was further investigated by Kaleva [16]. Several approaches were later proposed for FDEs and the existence of their solutions (e.g. [15,19,21,24,26]). The approach based on the H-derivative has the disadvantage that it leads to solutions whose support has increasing length. This shortcoming was resolved by interpreting the FDE as a family of differential inclusions. Later, the authors of [6,7] introduced the concept of generalized differentiability; under this definition, the solution of an FDE may have a support of decreasing length. Other researchers have proposed several further approaches to the solution of FDEs (e.g. [9,19]).
Another group of researchers has extended classical numerical methods to solve FDEs (e.g. [1,12,13]), such as the Runge–Kutta method [2], the Adomian method [4], and predictor–corrector and multi-step methods [3]. These methods are extended versions of the equivalent methods for solving ordinary differential equations (ODEs).
Lagaris et al. [17] used artificial neural networks to solve ordinary differential equations (ODEs) and partial differential equations (PDEs) for both boundary value and initial value problems. They used a multilayer perceptron to estimate the solution of the differential equation; the network was trained over the interval on which the equation must be solved, with the training points as inputs. Comparison with existing numerical methods showed that their method was more accurate and that its solution generalized better. The function-approximation ability of neural networks is what we exploit here: in this paper, we construct a new neural-network model to obtain a solution of an FDE. In this model, the inputs of the network are the training points together with a parameter $r$ that represents the level of uncertainty.
S. Effati, M. Pakdaman / Information Sciences 180 (2010) 1434–1457 1435
In Section 2, the basic notations of fuzzy numbers, fuzzy derivatives and fuzzy functions are briefly presented. In Section 3, fuzzy differential equations and their applications are introduced; in addition, a general Cauchy problem is defined. In Section 4, the proposed method, based on the use of a feed-forward neural network, is described. In Section 5, the applicability of the method is illustrated by several examples in which the exact solution and the computed results are compared. To show the scope of the method, a nonlinear FDE and a nonlinear FDE containing a fuzzy parameter are solved, as well as a problem in electrical circuit analysis. Finally, Section 6 presents concluding remarks.
2. Preliminaries
Definition 2.1 (see [25]). A fuzzy number $u$ is completely determined by a pair $u = (\underline{u}, \overline{u})$ of functions $\underline{u}(r), \overline{u}(r) : [0,1] \to \mathbb{R}$ satisfying the three conditions:

(i) $\underline{u}(r)$ is a bounded, monotonic increasing (nondecreasing) left-continuous function for all $r \in (0,1]$ and right-continuous for $r = 0$.
(ii) $\overline{u}(r)$ is a bounded, monotonic decreasing (nonincreasing) left-continuous function for all $r \in (0,1]$ and right-continuous for $r = 0$.
(iii) For all $r \in (0,1]$ we have $\underline{u}(r) \le \overline{u}(r)$.
For every $u = (\underline{u}, \overline{u})$, $v = (\underline{v}, \overline{v})$ and $k > 0$ we define addition and multiplication as follows:

$(\underline{u + v})(r) = \underline{u}(r) + \underline{v}(r),$  (1)
$(\overline{u + v})(r) = \overline{u}(r) + \overline{v}(r),$  (2)
$(\underline{ku})(r) = k\underline{u}(r), \quad (\overline{ku})(r) = k\overline{u}(r).$  (3)
The collection of all fuzzy numbers with addition and multiplication as defined by Eqs. (1)–(3) is denoted by $E^1$. For $0 < r \le 1$, the $r$-cut of a fuzzy number $u$ is defined as $[u]^r = \{x \in \mathbb{R} \mid u(x) \ge r\}$, and for $r = 0$ the support of $u$ is defined as $[u]^0 = \{x \in \mathbb{R} \mid u(x) > 0\}$.
Definition 2.2. The distance between two arbitrary fuzzy numbers $u = (\underline{u}, \overline{u})$ and $v = (\underline{v}, \overline{v})$ is defined as follows:

$d(u, v) = \sup_{r \in [0,1]} \big\{ \max\big[\, |\underline{u}(r) - \underline{v}(r)|,\ |\overline{u}(r) - \overline{v}(r)| \,\big] \big\}.$  (4)
It is shown in [22] that $(E^1, d)$ is a complete metric space.
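These level-wise operations are easy to realize numerically by sampling the endpoint functions on a grid of $r$-levels. A minimal sketch (illustrative only; the triangular-number helper and the grid size are assumptions, not from the paper):

```python
import numpy as np

# A fuzzy number in parametric form: a pair (lower(r), upper(r)) sampled on a
# grid of r-levels in [0, 1].
r = np.linspace(0.0, 1.0, 11)

def triangular(a, b, c):
    """r-cuts of a triangular fuzzy number with support [a, c] and peak b."""
    return a + (b - a) * r, c - (c - b) * r   # (lower(r), upper(r))

def add(u, v):
    # Eqs. (1)-(2): level-wise addition of endpoints.
    return u[0] + v[0], u[1] + v[1]

def scale(k, u):
    # Eq. (3), for k > 0.
    return k * u[0], k * u[1]

def dist(u, v):
    # Eq. (4): sup over r of the larger endpoint deviation.
    return max(np.max(np.abs(u[0] - v[0])), np.max(np.abs(u[1] - v[1])))

u = triangular(0.0, 1.0, 2.0)
v = triangular(1.0, 2.0, 3.0)
w = add(u, v)               # level-wise this is the triangular number (1, 3, 5)
print(dist(u, u))           # 0.0
print(w[0][0], w[1][0])     # support endpoints at r = 0: 1.0 5.0
```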
Definition 2.3. The function $f : \mathbb{R}^1 \to E^1$ is called a fuzzy function. If, for an arbitrary fixed $\hat{t} \in \mathbb{R}^1$ and every $\varepsilon > 0$, there exists a $\delta > 0$ such that

$|t - \hat{t}| < \delta \Rightarrow d[f(t), f(\hat{t})] < \varepsilon,$

then $f$ is said to be continuous. Note that $d$ is the metric defined in Definition 2.2. (In this article we simply replace $\mathbb{R}^1$ by $[t_0, T]$.)
Definition 2.4. Let $u, v \in E^1$. If there exists $w \in E^1$ such that $u = v + w$, then $w$ is called the H-difference of $u$ and $v$, denoted by $u \ominus v$.
Definition 2.5. A function $f : (a, b) \to E^1$ is called H-differentiable at $\hat{t} \in (a, b)$ if, for $h > 0$ sufficiently small, there exist the H-differences $f(\hat{t} + h) \ominus f(\hat{t})$ and $f(\hat{t}) \ominus f(\hat{t} - h)$, and an element $f'(\hat{t}) \in E^1$ such that:

$0 = \lim_{h \to 0^+} d\left[ \dfrac{f(\hat{t} + h) \ominus f(\hat{t})}{h},\ f'(\hat{t}) \right] = \lim_{h \to 0^+} d\left[ \dfrac{f(\hat{t}) \ominus f(\hat{t} - h)}{h},\ f'(\hat{t}) \right].$  (5)

Then $f'(\hat{t})$ is called the fuzzy derivative of $f$ at $\hat{t}$.
3. Fuzzy differential equations and applications
3.1. Fuzzy differential equations
In this section, a first order fuzzy differential equation is defined. Then it is replaced by its equivalent parametric form, and the new system, which contains two ordinary differential equations, is solved. A fuzzy differential equation of the first order has the following form:
$x'(t) = f(t, x(t)),$  (6)
where $x$ is a fuzzy function of $t$ and $f(t, x)$ is a fuzzy function of the crisp variable $t$ and the fuzzy variable $x$. Here $x'$ is the fuzzy derivative (according to Definition 2.5) of $x$. If an initial condition $x(t_0) = x_0$ (where $x_0$ is a fuzzy number) is given, a fuzzy Cauchy problem [20] of the first order is obtained:

$x'(t) = f(t, x(t)), \quad t \in [t_0, T], \qquad x(t_0) = x_0.$  (7)
It is clear that the fuzzy function $f(t, x)$ is a mapping $f : \mathbb{R}^1 \times E^1 \to E^1$. Sufficient conditions for the existence of a unique solution to Eq. (7), given by Kaleva [16], are that:

• $f$ is continuous;
• a Lipschitz condition $d(f(t, x), f(t, y)) \le L\, d(x, y)$ is satisfied for some $L > 0$ and all $x, y \in E^1$.
Now it is possible to replace (7) by the following equivalent system:

$\underline{x}'(t) = \underline{f}(t, x) = F(t, \underline{x}, \overline{x}), \quad \underline{x}(t_0) = \underline{x}_0,$
$\overline{x}'(t) = \overline{f}(t, x) = G(t, \underline{x}, \overline{x}), \quad \overline{x}(t_0) = \overline{x}_0,$  (8)

where

$F(t, \underline{x}, \overline{x}) = \min\{f(t, u) \mid u \in [\underline{x}, \overline{x}]\}, \quad G(t, \underline{x}, \overline{x}) = \max\{f(t, u) \mid u \in [\underline{x}, \overline{x}]\}.$  (9)
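For a concrete $f$, the right-hand sides $F$ and $G$ in (9) can be evaluated by minimizing and maximizing $f$ over the $r$-cut; when $f$ is monotone in $x$, they reduce to $f$ evaluated at an endpoint. A rough sketch using a grid search (purely illustrative; the helper and grid size are assumptions, not from the paper):

```python
import numpy as np

# Eq. (9): for a fixed t and r-cut [xl, xu], F and G are the min and max of
# f(t, u) over u in [xl, xu].  A coarse grid search is used for illustration.
def F_and_G(f, t, xl, xu, n=1001):
    u = np.linspace(xl, xu, n)     # includes both endpoints
    vals = f(t, u)
    return vals.min(), vals.max()

# For f(t, x) = x (increasing in x): F = xl, G = xu.
F, G = F_and_G(lambda t, x: x, 0.0, 0.75, 1.125)
print(F, G)   # 0.75 1.125

# For f(t, x) = t * x**2 with t, x > 0, f is again increasing in x.
F, G = F_and_G(lambda t, x: t * x**2, 0.5, 1.1, 1.3)
print(F, G)   # ≈ 0.605 and 0.845
```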
The parametric form of system (8) is given by:

$\underline{x}'(t, r) = F[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \underline{x}(t_0, r) = \underline{x}_0(r),$
$\overline{x}'(t, r) = G[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \overline{x}(t_0, r) = \overline{x}_0(r),$  (10)
where $t \in [t_0, T]$ and $r \in [0,1]$. Now, with a discretization of the interval $[t_0, T]$, a set of points $t_i$, $i = 1, 2, \dots, m$, is obtained. Thus for an arbitrary $t_i \in [t_0, T]$, system (10) can be rewritten as:

$\underline{x}'(t_i, r) - F[t_i, \underline{x}(t_i, r), \overline{x}(t_i, r)] = 0,$
$\overline{x}'(t_i, r) - G[t_i, \underline{x}(t_i, r), \overline{x}(t_i, r)] = 0,$  (11)

with initial conditions:

$\underline{x}(t_0, r) = \underline{x}_0(r), \quad \overline{x}(t_0, r) = \overline{x}_0(r), \quad 0 \le r \le 1.$
In some cases, the system given by Eq. (10) can be solved analytically. In most cases, however, an analytical solution cannot be found and a numerical approach must be applied. For each $r \in [0,1]$, Eq. (10) presents an ordinary Cauchy problem to which any convergent classical numerical procedure may be applied (e.g. [12]). In Section 4, instead of using classical numerical methods, a new method based on an artificial neural network is introduced.
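As a crisp sanity check of this per-level reduction, consider $x'(t) = x(t)$ with the fuzzy initial value of Example 5.1 below: $f$ is increasing in $x$, so for each fixed $r$ the endpoint equations decouple into two crisp ODEs. A sketch with a hand-rolled RK4 integrator (illustrative only, not the authors' code):

```python
import numpy as np

def rk4(f, x0, t0, t1, n=200):
    """Classical RK4 on [t0, t1] with n steps; x0 is a state vector."""
    t, x, h = t0, np.asarray(x0, float), (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return x

# One fixed level r: system (10) for x'(t) = x(t),
# x(0) = (0.75 + 0.25r, 1.125 - 0.125r).
r = 0.5
x0 = np.array([0.75 + 0.25*r, 1.125 - 0.125*r])   # [lower, upper] endpoints
xl, xu = rk4(lambda t, x: x, x0, 0.0, 1.0)
print(xl, xu)   # ≈ 0.875*e and 1.0625*e, i.e. ≈ 2.378497 and 2.888174
```

The two printed endpoints match the exact values reported for $r = 0.5$ in Table 1.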
3.2. Applications of fuzzy differential equations
To illustrate the applications of FDEs, some practical examples are mentioned here:
• Electrical engineering: Consider a simple RL circuit. The differential equation corresponding to this circuit is

$\dfrac{di}{dt} = -\dfrac{R}{L}\, i(t) + v(t), \quad i(0) = i_0,$

where $R$ is the circuit resistance and $L$ is the coefficient of the solenoid. Environmental conditions, inaccuracy in element modelling, electrical noise, leakage and other parameters cause uncertainty in this differential equation. Considering it instead as a fuzzy differential equation yields more realistic results and helps to detect unknown conditions in circuit analysis. This equation can be modeled as the FDE in Eq. (7). Example 5.6 presents an electrical circuit with $v(t) = \sin(t)$.
• Population dynamics: The first models for a growing population are the classical Malthus and Verhulst models. Suppose that the population obeys the Malthusian equation

$u'(t) = a\,u(t), \quad u(0) = u_0,$

where $a$ is a real number. Due to noise sources such as demographic and environmental stochasticity (see [5]), this differential equation becomes an FDE, where $u_0$ is a fuzzy initial condition.
• Other applications of FDEs include modeling life expectancy, HIV populations (see [14]) and other ecological models, logistics, control theory, and so on.
4. Neural networks
The use of neural networks provides solutions with very good generalization properties (such as differentiability). Moreover, an important feature of multilayer perceptrons is their ability to approximate functions, which gives them wide applicability.
In this paper, the function-approximation capabilities of feed-forward neural networks are used by expressing the trial solutions of system (10) as the sum of two terms (see Eq. (13)). The first term satisfies the initial conditions and contains no adjustable parameters. The second term involves a feed-forward neural network to be trained so as to satisfy the differential equations. Since a multilayer perceptron with one hidden layer can approximate any continuous function to arbitrary accuracy, the multilayer perceptron is used as the network architecture.
If $\underline{x}_T(t, r, p)$ is a trial solution for the first equation in system (10) and $\overline{x}_T(t, r, \overline{p})$ is a trial solution for the second equation, where $p$ and $\overline{p}$ are adjustable parameters (indeed $\underline{x}_T(t, r, p)$ and $\overline{x}_T(t, r, \overline{p})$ are approximations of $\underline{x}(t, r)$ and $\overline{x}(t, r)$, respectively), then a discretized version of system (10) can be converted to the following optimization problem:

$\min_{\tilde p} \sum_{i=1}^{m} \left\{ \left( \underline{x}'_T(t_i, r, p) - F\big[t_i, \underline{x}_T(t_i, r, p), \overline{x}_T(t_i, r, \overline{p})\big] \right)^2 + \left( \overline{x}'_T(t_i, r, \overline{p}) - G\big[t_i, \underline{x}_T(t_i, r, p), \overline{x}_T(t_i, r, \overline{p})\big] \right)^2 \right\}$  (12)

(here $\tilde p = (p, \overline{p})$ contains all adjustable parameters), subject to the initial conditions:

$\underline{x}_T(t_0, r, p) = \underline{x}_0(r), \quad \overline{x}_T(t_0, r, \overline{p}) = \overline{x}_0(r).$
Each trial solution $\underline{x}_T$ and $\overline{x}_T$ employs one feed-forward neural network; the corresponding networks are denoted by $\underline{N}$ and $\overline{N}$, with adjustable parameters $p$ and $\overline{p}$, respectively. The trial solutions $\underline{x}_T$ and $\overline{x}_T$ should satisfy the initial conditions, and the networks must be trained to satisfy the differential equations. Thus $\underline{x}_T$ and $\overline{x}_T$ can be chosen as follows:

$\underline{x}_T(t, r, p) = \underline{x}(t_0, r) + (t - t_0)\,\underline{N}(t, r, p),$
$\overline{x}_T(t, r, \overline{p}) = \overline{x}(t_0, r) + (t - t_0)\,\overline{N}(t, r, \overline{p}),$  (13)

where $\underline{N}$ and $\overline{N}$ are single-output feed-forward neural networks with adjustable parameters $p$ and $\overline{p}$, respectively. Here $t$ and $r$ are the network inputs. It is easy to see that in (13), $\underline{x}_T$ and $\overline{x}_T$ satisfy the initial conditions. From (13) it is straightforward to show that:

$\underline{x}'_T(t, r, p) = \underline{N}(t, r, p) + (t - t_0)\,\dfrac{\partial \underline{N}}{\partial t},$
$\overline{x}'_T(t, r, \overline{p}) = \overline{N}(t, r, \overline{p}) + (t - t_0)\,\dfrac{\partial \overline{N}}{\partial t}.$  (14)
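The key point of (13) is that the initial condition holds identically, whatever the network outputs. A tiny sketch of the lower branch (the placeholder network $N$ and the initial condition are assumptions, purely for illustration):

```python
import numpy as np

t0 = 0.0
x0_lower = lambda r: 0.75 + 0.25 * r          # example initial condition

def N(t, r):
    # Stand-in for the (untrained) network output; any function works here.
    return np.sin(t) + r

def x_trial(t, r):
    # Eq. (13), lower branch: the (t - t0) factor kills N at t = t0.
    return x0_lower(r) + (t - t0) * N(t, r)

for r in (0.0, 0.5, 1.0):
    print(x_trial(t0, r) == x0_lower(r))      # True: IC satisfied exactly
```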
Now consider a multilayer perceptron having one hidden layer with $H$ sigmoid units and a linear output unit (Fig. 1). Here we have:

$\underline{N} = \sum_{i=1}^{H} v_i\,\sigma(z_i), \quad z_i = w_{i1} t + w_{i2} r + u_i,$
$\overline{N} = \sum_{i=1}^{H} \overline{v}_i\,\sigma(\overline{z}_i), \quad \overline{z}_i = \overline{w}_{i1} t + \overline{w}_{i2} r + \overline{u}_i,$  (15)

where $\sigma(z)$ is the sigmoid transfer function. The following is obtained:

$\dfrac{\partial \underline{N}}{\partial t} = \sum_{i=1}^{H} v_i w_{i1}\,\sigma'(z_i), \quad \dfrac{\partial \overline{N}}{\partial t} = \sum_{i=1}^{H} \overline{v}_i \overline{w}_{i1}\,\sigma'(\overline{z}_i),$  (16)
Fig. 1. Architecture of the perceptron.
where $\sigma'(\overline{z}_i)$ is the first derivative of the sigmoid function. $\underline{N}$ and $\overline{N}$ have the same architecture, as shown in Fig. 1. In Fig. 1, $A = (t, r)^t$ is the input vector; $w, \overline{w}$ ($H \times 2$ matrices) are the weights of the input layer; $U, \overline{U}$ ($H \times 1$ matrices) are the bias vectors of the hidden units; $V$ and $\overline{V}$ are the weight vectors ($H \times 1$ matrices) of the output unit; Lin is the linear function; and Sig is the sigmoid transfer function, given by:

$\sigma(z) = \dfrac{1}{1 + e^{-z}}.$
In network $\overline{N}$: $\overline{w}_{i1}$ and $\overline{w}_{i2}$ denote the weights from the inputs $t$ and $r$ to hidden unit $i$, $\overline{v}_i$ denotes the weight from hidden unit $i$ to the output, and $\overline{u}_i$ denotes the bias of hidden unit $i$.
In network $\underline{N}$: $w_{i1}$ and $w_{i2}$ denote the weights from the inputs $t$ and $r$ to hidden unit $i$, $v_i$ denotes the weight from hidden unit $i$ to the output, and $u_i$ denotes the bias of hidden unit $i$.
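Equations (15) and (16) amount to a few lines of code for one network. The sketch below (illustrative; the random weights are assumptions) evaluates $N$ and its analytic $t$-derivative and checks the latter against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 10                                   # hidden units, as in the examples
w = rng.normal(size=(H, 2))              # input weights: columns for t and r
u = rng.normal(size=H)                   # hidden biases
v = rng.normal(size=H)                   # output weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def N(t, r):
    # Eq. (15): one hidden sigmoid layer, linear output.
    z = w[:, 0] * t + w[:, 1] * r + u
    return np.dot(v, sigmoid(z))

def dN_dt(t, r):
    # Eq. (16), using sigma'(z) = sigma(z) * (1 - sigma(z)).
    z = w[:, 0] * t + w[:, 1] * r + u
    s = sigmoid(z)
    return np.dot(v * w[:, 0], s * (1.0 - s))

# Check the analytic derivative against a central finite difference.
t, r, h = 0.3, 0.5, 1e-6
fd = (N(t + h, r) - N(t - h, r)) / (2 * h)
print(abs(fd - dN_dt(t, r)) < 1e-6)      # True
```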
For each iteration, $r$ is fixed, and the problem can be solved for an arbitrary $r \in [0,1]$. Substituting (13) and (14) into (12), the constrained optimization problem (12) is converted to the following unconstrained optimization problem:

$\min_{\tilde p} \sum_{i=1}^{m} \left\{ \left( \underline{N}(t_i, r, p) + (t_i - t_0)\dfrac{\partial \underline{N}}{\partial t} - F\big[t_i, \underline{x}_T(t_i, r, p), \overline{x}_T(t_i, r, \overline{p})\big] \right)^2 + \left( \overline{N}(t_i, r, \overline{p}) + (t_i - t_0)\dfrac{\partial \overline{N}}{\partial t} - G\big[t_i, \underline{x}_T(t_i, r, p), \overline{x}_T(t_i, r, \overline{p})\big] \right)^2 \right\}.$  (17)
Here $\tilde p = (p, \overline{p})$ contains all adjustable parameters (the weights of the input and output layers and the biases) of the two networks $\underline{N}$ and $\overline{N}$. To minimize this unconstrained problem, techniques such as the steepest descent method, conjugate gradient methods, or quasi-Newton methods can be employed. The Newton method is one of the most important algorithms in nonlinear optimization; its main disadvantage is that it requires evaluating the matrix of second derivatives (the Hessian).
Quasi-Newton methods were originally proposed by Davidon in 1959 and were later developed by Fletcher and Powell (1963). Their fundamental idea is to build up an approximation of the Hessian matrix rather than computing it exactly. Here the quasi-Newton BFGS (Broyden–Fletcher–Goldfarb–Shanno) method is used; it converges superlinearly (for more details see [18]).
After the optimization step, the optimal values of the weights are obtained. By substituting the optimal parameters $p$ and $\overline{p}$ into (13), the trial solution $x_T = (\underline{x}_T, \overline{x}_T)$ becomes the approximate solution of FDE (7).
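The whole training step can be sketched end-to-end for a single crisp level. The sketch below is illustrative, not the authors' code: SciPy's BFGS implementation stands in for the Matlab 7 optimization toolbox used in Section 5, and the model problem is $x'(t) = x(t)$, $x(0) = 1$ on $[0, 1]$:

```python
import numpy as np
from scipy.optimize import minimize

# Fit the trial solution x_T(t) = x0 + t * N(t) (Eq. (13)) to x' = x, x(0) = 1.
H, x0 = 5, 1.0
ts = np.linspace(0.0, 1.0, 11)                # m = 11 training points
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def unpack(p):
    # Flat parameter vector -> input weights, hidden biases, output weights.
    return p[:H], p[H:2*H], p[2*H:]

def loss(p):
    w, u, v = unpack(p)
    z = np.outer(ts, w) + u                   # (m, H) hidden pre-activations
    s = sigmoid(z)
    N = s @ v                                 # Eq. (15), single input t
    dN = (s * (1 - s) * w) @ v                # Eq. (16): dN/dt
    xT = x0 + ts * N                          # trial solution, Eq. (13)
    dxT = N + ts * dN                         # its derivative, Eq. (14)
    return np.sum((dxT - xT) ** 2)            # residual of x' - x = 0, Eq. (17)

p0 = np.random.default_rng(1).normal(scale=0.5, size=3 * H)
res = minimize(loss, p0, method='BFGS')       # quasi-Newton BFGS step

w, u, v = unpack(res.x)
xT1 = x0 + 1.0 * (sigmoid(np.outer([1.0], w) + u) @ v)[0]
print(res.fun, abs(xT1 - np.e))   # residual and error at t = 1; both should be small
```

The analytic derivative in `loss` mirrors (16), so no automatic differentiation is needed.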
Remark 4.1. The proposed method has two main advantages:
• First, the approximate solution is very close to the exact solution, because neural networks are very good function approximators. Comparing the results of existing numerical methods (e.g. [3]) with the results obtained in the next section shows that the proposed method attains very small errors.
• Second, once a problem has been solved with this method, the solution of the FDE is available at every point of the training interval (even between the training points). Indeed, solving the FDE yields an approximate function, so the solution can be evaluated at any point.
5. Numerical examples
To show the behavior and properties of the new method, six problems are solved in this section. To minimize the objective function in (17), the Matlab 7 optimization toolbox was employed, using the quasi-Newton BFGS method. For each example, the accuracy of the method is illustrated by computing the deviations $\underline{E}(t, r) = \underline{x}_T(t, r) - \underline{x}_a(t, r)$ and $\overline{E}(t, r) = \overline{x}_T(t, r) - \overline{x}_a(t, r)$ (for a fixed $t$ and various values of $r$), where $x_a(t, r) = (\underline{x}_a(t, r), \overline{x}_a(t, r))$ is the known exact solution and $x_T(t, r) = (\underline{x}_T(t, r), \overline{x}_T(t, r))$ is the approximate solution. Note that for all examples a multilayer perceptron consisting of one hidden layer with ten hidden units and one linear output unit is used. In order to obtain better results (especially in the nonlinear cases), more hidden layers or training points can be used. The weights computed using this method converge; for each example, the computed values of the weights are plotted over a number of iterations.
Example 5.1. Consider the following fuzzy initial value problem:
$x'(t) = x(t), \quad t \in [0,1],$
$x(0) = (0.75 + 0.25r,\ 1.125 - 0.125r).$  (18)
The exact solution at $t = 1$ is:

$x(1, r) = \big((0.75 + 0.25r)e,\ (1.125 - 0.125r)e\big), \quad r \in [0,1].$
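As a quick check of the tabulated exact values, the endpoint formulas can be evaluated directly (illustrative only):

```python
import numpy as np

# Exact endpoints of Example 5.1 at t = 1, on the same r-grid as Table 1.
r = np.linspace(0.0, 1.0, 11)
lower = (0.75 + 0.25 * r) * np.e
upper = (1.125 - 0.125 * r) * np.e

print(round(lower[0], 6), round(upper[0], 6))   # 2.038711 3.058067 (row r = 0)
print(round(lower[-1], 6))                      # 2.718282 (= e at r = 1)
```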
Here the trial solutions in the neural form are as follows:
$\underline{x}_T(t, r) = (0.75 + 0.25r) + t\,\underline{N}(t, r, p),$
$\overline{x}_T(t, r) = (1.125 - 0.125r) + t\,\overline{N}(t, r, \overline{p}),$  (19)
where $t \in [0,1]$. It is easy to show that the trial solutions in (19) satisfy the initial conditions. Fig. 2 shows the exact and approximated solutions for $t = 1$, and the numerical results can be seen in Table 1. Figs. 3 and 4 show the accuracy of the solution, $\underline{E}(1, r)$ and $\overline{E}(1, r)$ (for $t = 1$), respectively.
Figs. 5–8 show the convergence of the computed values of the weights (for $r = 0.5$ and $t = 1$).
Example 5.2. Consider the following fuzzy initial value problem:
$x'(t) = 3t^2 x(t), \quad t \in [0,1],$
$x(0) = \big(0.5\sqrt{r},\ 0.2\sqrt{1-r} + 0.5\big).$  (20)

The exact solution at $t = 1$ is $x(1, r) = \big(0.5\sqrt{r}\,e,\ (0.2\sqrt{1-r} + 0.5)e\big)$. Fig. 9 shows the exact and approximated solutions for $t = 1$. Numerical results can be found in Table 2. Figs. 10 and 11 show the accuracy of the solution $E(1, r) = (\underline{E}(1, r), \overline{E}(1, r))$.
Figs. 12–15 show the convergence behavior of the computed weight parameters $w_{i1}$ and $w_{i2}$, the bias $u$, and the output-layer weights $v$ over the iterations.
Example 5.3. Consider the following fuzzy initial value problem:
$x'(t) = -x(t) + t + 1, \quad t \in [0,1],$
$x(0) = (0.96 + 0.04r,\ 1.01 - 0.01r),$  (21)

where $r \in [0,1]$. The parametric form of the problem can be written as follows:
Fig. 2. The exact and approximated solutions for Example 5.1.
Table 1. Comparison of the exact $x_a(1, r)$ and approximated $x_T(1, r)$ solutions for Example 5.1.

| $r$ | $\underline{x}_a(1,r)$ | $\underline{x}_T(1,r)$ | $|\underline{E}(1,r)|$ | $\overline{x}_a(1,r)$ | $\overline{x}_T(1,r)$ | $|\overline{E}(1,r)|$ |
|---|---|---|---|---|---|---|
| 0 | 2.038711 | 2.038800 | 8.895270e-5 | 3.058067 | 3.058127 | 6.003329e-5 |
| 0.1 | 2.106668 | 2.106639 | 2.903725e-5 | 3.024089 | 3.024090 | 1.107843e-6 |
| 0.2 | 2.174625 | 2.174579 | 4.693243e-5 | 2.990110 | 2.990136 | 2.582699e-5 |
| 0.3 | 2.242583 | 2.242607 | 2.484654e-5 | 2.956131 | 2.956140 | 8.472236e-6 |
| 0.4 | 2.310540 | 2.310544 | 4.739291e-6 | 2.922153 | 2.922187 | 3.384699e-5 |
| 0.5 | 2.378497 | 2.378551 | 5.478406e-5 | 2.888174 | 2.888160 | 1.443122e-5 |
| 0.6 | 2.446454 | 2.446482 | 2.827934e-5 | 2.854196 | 2.854238 | 4.194040e-5 |
| 0.7 | 2.514411 | 2.514364 | 4.693161e-5 | 2.820217 | 2.820269 | 5.199824e-5 |
| 0.8 | 2.582368 | 2.582465 | 9.712388e-5 | 2.786239 | 2.786269 | 3.025500e-5 |
| 0.9 | 2.650325 | 2.650339 | 1.417971e-5 | 2.752260 | 2.752106 | 1.541195e-4 |
| 1 | 2.718282 | 2.718226 | 5.574679e-5 | 2.718282 | 2.718310 | 2.787937e-5 |
Fig. 3. $\underline{E}(1, r)$ for Example 5.1.
Fig. 4. $\overline{E}(1, r)$ for Example 5.1.
Fig. 5. Convergence of the weights $w_{i1}$ for Example 5.1.
Fig. 6. Convergence of the weights $w_{i2}$ for Example 5.1.
Fig. 7. Convergence of the bias $u$ for Example 5.1.
Fig. 8. Convergence of the weights $v$ for Example 5.1.
Fig. 9. The exact and approximated solutions for Example 5.2.
Table 2. Comparison of the exact $x_a(1, r)$ and approximated $x_T(1, r)$ solutions for Example 5.2.

| $r$ | $\underline{x}_a(1,r)$ | $\underline{x}_T(1,r)$ | $|\underline{E}(1,r)|$ | $\overline{x}_a(1,r)$ | $\overline{x}_T(1,r)$ | $|\overline{E}(1,r)|$ |
|---|---|---|---|---|---|---|
| 0 | 0 | 3.787978e-7 | 3.787978e-7 | 1.902797 | 1.902417 | 3.798493e-4 |
| 0.1 | 0.4297981 | 0.4296523 | 1.457549e-4 | 1.874899 | 1.872192 | 2.706702e-3 |
| 0.2 | 0.6078263 | 0.6077049 | 1.213900e-4 | 1.845402 | 1.843975 | 1.427181e-3 |
| 0.3 | 0.7444321 | 0.7443416 | 9.049358e-5 | 1.813996 | 1.814533 | 5.365383e-4 |
| 0.4 | 0.8595962 | 0.8603474 | 7.511922e-4 | 1.780255 | 1.780520 | 2.649743e-4 |
| 0.5 | 0.9610578 | 0.9605738 | 4.839134e-4 | 1.743564 | 1.743461 | 1.027169e-4 |
| 0.6 | 1.052786 | 1.052719 | 6.748295e-5 | 1.702979 | 1.703009 | 2.935789e-5 |
| 0.7 | 1.137139 | 1.137410 | 2.711069e-4 | 1.656914 | 1.657742 | 8.278025e-4 |
| 0.8 | 1.215653 | 1.215635 | 1.773549e-5 | 1.602271 | 1.602701 | 4.297191e-4 |
| 0.9 | 1.289394 | 1.289925 | 5.310672e-4 | 1.531060 | 1.532301 | 1.240992e-3 |
| 1 | 1.359141 | 1.359143 | 2.244555e-6 | 1.359141 | 1.359470 | 3.295712e-4 |
Fig. 10. $\underline{E}(1, r)$ for Example 5.2.
$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \underline{x}(0) = 0.96 + 0.04r,$
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \overline{x}(0) = 1.01 - 0.01r,$  (22)
Fig. 11. $\overline{E}(1, r)$ for Example 5.2.
Fig. 12. Convergence of the weights $w_{i1}$ for Example 5.2.
in which $t \in [0,1]$ and $F, G$ satisfy (9). The neural form of the trial solutions is as follows:

$\underline{x}_T(t, r) = (0.96 + 0.04r) + t\,\underline{N}(t, r, p),$
$\overline{x}_T(t, r) = (1.01 - 0.01r) + t\,\overline{N}(t, r, \overline{p}),$  (23)
where $r \in [0,1]$. The trial solutions in (23) satisfy the initial conditions. Substituting (22) and (23) into (17), the resulting unconstrained optimization problem is solved with the quasi-Newton BFGS method. Fig. 16 and Table 3 show the exact and approximated solutions for $t = 0.1$. Figs. 17 and 18 show the accuracy of the solution $E(0.1, r) = (\underline{E}(0.1, r), \overline{E}(0.1, r))$.
Figs. 19–22 show the convergence behavior of the computed weight parameters $w_{i1}$ and $w_{i2}$, the bias $u$, and the output-layer weights $v$ over the iterations.
Comparing Table 3 with the numerical solution of [3] shows that the proposed method attains small errors (moreover, more training points or more weights can be used to obtain better results). To further show the accuracy of the method, note the results at $t = 1$ shown in Table 4.
In the next example, this method is applied to solve a nonlinear fuzzy differential equation.
Fig. 13. Convergence of the weights $w_{i2}$ for Example 5.2.
Fig. 14. Convergence of the bias $u$ for Example 5.2.
Example 5.4. Consider the following nonlinear fuzzy initial value problem:
$x'(t) = t\,x(t)^2, \quad t \in [0,1],$
$x(0) = (1.1 + 0.1r,\ 1.3 - 0.1r),$  (24)
where $r \in [0,1]$. This problem is solved with the proposed method. The parametric form of the problem is:
$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \underline{x}(0) = 1.1 + 0.1r,$
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \overline{x}(0) = 1.3 - 0.1r,$  (25)
in which $t \in [0,1]$ and $F, G$ satisfy (9). The neural form of the trial solutions is as follows:

$\underline{x}_T(t, r) = (1.1 + 0.1r) + t\,\underline{N}(t, r, p),$
$\overline{x}_T(t, r) = (1.3 - 0.1r) + t\,\overline{N}(t, r, \overline{p}).$  (26)
Figs. 24 and 25 show the accuracy of the solution $E(0.2, r)$, and Fig. 23 and Table 5 show the exact and approximated solutions. Figs. 26–29 show the convergence behavior of the computed weight parameters $w_{i1}$ and $w_{i2}$, the bias $u$, and the output-layer weights $v$ over the iterations.
Fig. 15. Convergence of the weights $v$ for Example 5.2.
Fig. 16. The exact and approximated solutions for Example 5.3.
Table 3. Comparison of the exact $x_a(0.1, r)$ and approximated $x_T(0.1, r)$ solutions for Example 5.3.

| $r$ | $\underline{x}_a(0.1,r)$ | $\underline{x}_T(0.1,r)$ | $|\underline{E}(0.1,r)|$ | $\overline{x}_a(0.1,r)$ | $\overline{x}_T(0.1,r)$ | $|\overline{E}(0.1,r)|$ |
|---|---|---|---|---|---|---|
| 0 | 0.9636356 | 0.9636863 | 5.067853e-5 | 1.018894 | 1.018932 | 3.753089e-5 |
| 0.1 | 0.9677558 | 0.9677945 | 3.873561e-5 | 1.017488 | 1.017530 | 4.153197e-5 |
| 0.2 | 0.9718760 | 0.9718966 | 2.067133e-5 | 1.016083 | 1.016075 | 7.614799e-6 |
| 0.3 | 0.9759961 | 0.9760564 | 6.021995e-5 | 1.014677 | 1.014716 | 3.922612e-5 |
| 0.4 | 0.9801163 | 0.9801237 | 7.410150e-6 | 1.013271 | 1.013312 | 4.092031e-5 |
| 0.5 | 0.9842365 | 0.9842988 | 6.225859e-5 | 1.011866 | 1.011901 | 3.571873e-5 |
| 0.6 | 0.9883567 | 0.9883914 | 3.472099e-5 | 1.010460 | 1.010485 | 2.494235e-5 |
| 0.7 | 0.9924769 | 0.9925201 | 4.320305e-5 | 1.009054 | 1.009098 | 4.355865e-5 |
| 0.8 | 0.9965971 | 0.9966132 | 1.614909e-5 | 1.007649 | 1.007676 | 2.703282e-5 |
| 0.9 | 1.000717 | 1.000730 | 1.253534e-5 | 1.006243 | 1.006282 | 3.883441e-5 |
| 1 | 1.004837 | 1.004840 | 2.142460e-6 | 1.004837 | 1.004857 | 1.924814e-5 |
Example 5.5. Consider the following nonlinear FDE:
$x'(t) = 3A\,x(t)^2, \quad t \in [0, 0.1],$
$x(0) = \big(0.5\sqrt{r},\ 0.2\sqrt{1-r} + 0.5\big),$  (27)
where $A = (1 + r,\ 3 - r)$ is a fuzzy parameter and $0 \le r \le 1$. The parametric form of the problem is:
Fig. 17. $\underline{E}(0.1, r)$ for Example 5.3.
Fig. 18. $\overline{E}(0.1, r)$ for Example 5.3.
Fig. 19. Convergence of the weights $w_{i1}$ for Example 5.3.
Fig. 20. Convergence of the weights $w_{i2}$ for Example 5.3.
Fig. 21. Convergence of the bias $u$ for Example 5.3.
Fig. 22. Convergence of the weights $v$ for Example 5.3.
Fig. 23. The exact and approximated solutions for Example 5.4.
Table 4. Comparison of the exact $x_a(1, r)$ and approximated $x_T(1, r)$ solutions.

| $r$ | $\underline{x}_a(1,r)$ | $\underline{x}_T(1,r)$ | $|\underline{E}(1,r)|$ | $\overline{x}_a(1,r)$ | $\overline{x}_T(1,r)$ | $|\overline{E}(1,r)|$ |
|---|---|---|---|---|---|---|
| 0 | 1.294404 | 1.294390 | 1.419715e-5 | 1.430318 | 1.430333 | 1.451833e-5 |
| 0.2 | 1.309099 | 1.309114 | 1.428876e-5 | 1.417831 | 1.417820 | 1.051050e-5 |
| 0.4 | 1.323794 | 1.323811 | 1.666511e-5 | 1.405343 | 1.405329 | 1.340752e-5 |
| 0.6 | 1.338489 | 1.338471 | 1.861692e-5 | 1.392855 | 1.392876 | 2.060321e-5 |
| 0.8 | 1.353184 | 1.353199 | 1.458449e-5 | 1.380367 | 1.380366 | 1.289424e-6 |
| 1 | 1.367879 | 1.367859 | 2.022175e-5 | 1.367879 | 1.367907 | 2.768503e-5 |
Fig. 24. $\underline{E}(0.2, r)$ for Example 5.4.
$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \underline{x}(0) = 0.5\sqrt{r},$
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)], \quad \overline{x}(0) = 0.2\sqrt{1-r} + 0.5,$  (28)
where F; G satisfy (9). The neural form of the trial solution is as follows:
xTðt; rÞ ¼ ð0:5ffiffiffirpÞ þ tNðt; r; pÞ;
�xTðt; rÞ ¼ ð0:2ffiffiffiffiffiffiffiffiffiffiffi1� rp
þ 0:5Þ þ tNðt; r; �pÞ:
(ð29Þ
Fig. 25. $\overline{E}(0.2, r)$ for Example 5.4.
Table 5. Comparison of the exact $x_a(0.2, r)$ and approximated $x_T(0.2, r)$ solutions for Example 5.4.

| $r$ | $\underline{x}_a(0.2,r)$ | $\underline{x}_T(0.2,r)$ | $|\underline{E}(0.2,r)|$ | $\overline{x}_a(0.2,r)$ | $\overline{x}_T(0.2,r)$ | $|\overline{E}(0.2,r)|$ |
|---|---|---|---|---|---|---|
| 0 | 1.124744 | 1.124766 | 2.169210e-5 | 1.334702 | 1.334767 | 6.468769e-5 |
| 0.1 | 1.135201 | 1.135212 | 1.097921e-5 | 1.324163 | 1.324122 | 4.109317e-5 |
| 0.2 | 1.145663 | 1.145672 | 9.015843e-6 | 1.313629 | 1.313763 | 1.341154e-4 |
| 0.3 | 1.156129 | 1.156145 | 1.660173e-5 | 1.303099 | 1.303090 | 8.407460e-6 |
| 0.4 | 1.166598 | 1.166633 | 3.432918e-5 | 1.292573 | 1.291846 | 7.270456e-4 |
| 0.5 | 1.177073 | 1.177105 | 3.185205e-5 | 1.282051 | 1.282199 | 1.475989e-4 |
| 0.6 | 1.187551 | 1.187587 | 3.543493e-5 | 1.271534 | 1.271621 | 8.677992e-5 |
| 0.7 | 1.198034 | 1.198045 | 1.129344e-5 | 1.261021 | 1.261074 | 5.336968e-5 |
| 0.8 | 1.208521 | 1.208543 | 2.214695e-5 | 1.250513 | 1.250741 | 2.281000e-4 |
| 0.9 | 1.219012 | 1.219025 | 1.270053e-5 | 1.240008 | 1.239971 | 3.765595e-5 |
| 1 | 1.229508 | 1.229525 | 1.689782e-5 | 1.229508 | 1.229564 | 5.614828e-5 |
Fig. 26. Convergence of the weights $w_{i1}$ for Example 5.4.
Figs. 31 and 32 show the accuracy of the solution $E(0.1, r)$, and Fig. 30 and Table 6 show the exact and approximated solutions. Figs. 33–36 show the convergence behavior of the computed weight parameters $w_{i1}$ and $w_{i2}$, the bias $u$, and the output-layer weights $v$ over the iterations.
Fig. 27. Convergence of the weights $w_{i2}$ for Example 5.4.
Fig. 28. Convergence of the bias $u$ for Example 5.4.
Fig. 29. Convergence of the weights $v$ for Example 5.4.
Fig. 30. The exact and approximated solutions for Example 5.5.
Fig. 31. $\underline{E}(0.1, r)$ for Example 5.5.
Example 5.6. Consider an electrical circuit (RL circuit) with an AC source. The current equation of this circuit can be written as follows:

$i'(t) = -\dfrac{R}{L}\, i(t) + v(t), \quad t \in [0,1],$
$i(0) = (0.96 + 0.04r,\ 1.01 - 0.01r),$  (30)

where $R$ is the circuit resistance, $L$ is the coefficient of the solenoid, and $0 \le r \le 1$. Suppose that $v(t) = \sin(t)$, $R = 1\,\Omega$ and $L = 1\,\mathrm{H}$; then (30) can be written as:

$i'(t) = -i(t) + \sin(t), \quad t \in [0,1],$
$i(0) = (0.96 + 0.04r,\ 1.01 - 0.01r).$  (31)
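Since $f(t, i) = -i + \sin(t)$ is decreasing in $i$, Eq. (9) couples the two endpoint equations of the parametric form: $\underline{i}'(t) = -\overline{i}(t) + \sin(t)$ and $\overline{i}'(t) = -\underline{i}(t) + \sin(t)$. A quick numerical sanity check of this coupled system (illustrative only, not the authors' code):

```python
import numpy as np

# Coupled endpoint system for f(t, i) = -i + sin(t) at one fixed level r.
def rhs(t, x):
    il, iu = x
    return np.array([-iu + np.sin(t), -il + np.sin(t)])

def rk4(f, x0, t0, t1, n=400):
    """Classical RK4 on [t0, t1] with n steps."""
    t, x, h = t0, np.asarray(x0, float), (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return x

r = 0.5
il0, iu0 = 0.96 + 0.04*r, 1.01 - 0.01*r
num = rk4(rhs, [il0, iu0], 0.0, 1.0)

# Closed form via s = il + iu and d = iu - il, which satisfy
# s' = -s + 2*sin(t) and d' = d.
t = 1.0
s = (il0 + iu0 + 1.0) * np.exp(-t) + np.sin(t) - np.cos(t)
d = (iu0 - il0) * np.exp(t)
ref = np.array([(s - d) / 2.0, (s + d) / 2.0])
print(np.max(np.abs(num - ref)))   # small (RK4 discretization error)
```

Note that the difference $d$ grows like $e^t$, the growing-support behavior of the H-derivative approach mentioned in the introduction.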
The parametric form of the problem is:

$\underline{i}'(t) = F[t, \underline{i}(t, r), \overline{i}(t, r)], \quad \underline{i}(0) = 0.96 + 0.04r,$
$\overline{i}'(t) = G[t, \underline{i}(t, r), \overline{i}(t, r)], \quad \overline{i}(0) = 1.01 - 0.01r,$  (32)

where $F, G$ satisfy (9). The trial solutions are:
Fig. 32. $\overline{E}(0.1, r)$ for Example 5.5.
Table 6
Comparison of the exact x_a(0.1, r) and approximated x_T(0.1, r) solutions for Example 5.5.

r     x_a(0.1,r)   x_T(0.1,r)   |E(t=0.1,r)|   x̄_a(0.1,r)   x̄_T(0.1,r)   |E(t=0.1,r)|
0     0            1.521837e-8  1.521837e-8    1.891892     1.891879     1.325872e-5
0.1   0.1668180    0.1668180    9.006475e-8    1.724647     1.724595     5.245013e-5
0.2   0.2431826    0.2431828    1.333566e-7    1.579772     1.579803     3.013005e-5
0.3   0.3066089    0.3066094    4.564952e-7    1.452423     1.452412     1.173814e-5
0.4   0.3646604    0.3646614    1.037906e-6    1.338857     1.338843     1.477213e-5
0.5   0.4204459    0.4204467    7.377851e-7    1.236037     1.236061     2.468082e-5
0.6   0.4757399    0.4757396    2.985106e-7    1.141303     1.141312     9.430248e-6
0.7   0.5317856    0.5317851    5.020650e-7    1.052001     1.052007     5.657456e-6
0.8   0.5895990    0.5895982    8.304654e-7    0.9647689    0.9647744    5.578531e-6
0.9   0.6501168    0.6501146    2.172205e-6    0.8730387    0.8730405    1.815409e-6
1     0.7142857    0.7142825    8.217290e-7    0.7142857    0.7142830    2.720800e-6
Fig. 33. Convergence of the weights w_{i1} for Example 5.5.
Fig. 34. Convergence of the weights w_{i2} for Example 5.5.
Fig. 35. Convergence of the bias u for Example 5.5.
Fig. 36. Convergence of the weights v for Example 5.5.
Fig. 37. The exact and approximated solutions for Example 5.6.
Fig. 38. E(t, 0.5) for Example 5.6.
Fig. 39. E(t, 0.5) for Example 5.6.
Table 7
Comparison of the exact i_a(0.98, r) and approximated i_T(0.98, r) solutions for Example 5.6.

r     i_a(0.98,r)  i_T(0.98,r)  |E(t=0.98,r)|  ī_a(0.98,r)  ī_T(0.98,r)  |E(t=0.98,r)|
0     0.6274630    0.6274234    3.955934e-5    0.7606858    0.7607083    2.245913e-5
0.2   0.6419112    0.6418955    1.565424e-5    0.7484895    0.7484859    3.553948e-6
0.4   0.6563594    0.6563819    2.251957e-5    0.7362931    0.7362615    3.157580e-5
0.6   0.6708076    0.6709217    1.140533e-4    0.7240968    0.7239564    1.403410e-4
0.8   0.6852558    0.6852821    2.626915e-5    0.7119004    0.7118550    4.541480e-5
1     0.6997041    0.6997584    5.437859e-5    0.6997041    0.6996472    5.684037e-5
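The exact columns of Table 7 can be reproduced in closed form. The check below is a sketch added here, not the authors' code: it assumes the lower and upper equations of the parametric system couple as i' = -ī + sin t and ī' = -i + sin t (so the sum and difference of the endpoints decouple), and the helper name `exact_cut` is hypothetical:

```python
import math

def exact_cut(t, r):
    """Closed-form r-cut endpoints for i'(t) = -i(t) + sin(t) with fuzzy
    initial value (0.96 + 0.04r, 1.01 - 0.01r), via the substitution
    s = i_lower + i_upper (sum) and d = i_upper - i_lower (difference)."""
    s0 = 1.97 + 0.03 * r         # i_lower(0) + i_upper(0)
    d0 = 0.05 * (1.0 - r)        # i_upper(0) - i_lower(0)
    s = (s0 + 1.0) * math.exp(-t) + math.sin(t) - math.cos(t)  # solves s' = -s + 2 sin t
    d = d0 * math.exp(t)                                       # solves d' = d
    return (s - d) / 2.0, (s + d) / 2.0

lo, hi = exact_cut(0.98, 0.0)
print(lo, hi)  # close to row r = 0 of Table 7: 0.6274630 and 0.7606858
```

Note that the difference d(t) grows like e^t, so the two endpoints coincide only in the crisp case r = 1, where d(0) = 0.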
\[
\begin{cases}
i_T(t,r) = (0.96 + 0.04r) + t\,N(t,r;p),\\
\overline{i}_T(t,r) = (1.01 - 0.01r) + t\,N(t,r;\bar{p}).
\end{cases}
\tag{33}
\]
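By construction the trial solutions in (33) satisfy the initial conditions exactly, and only the residual of the differential equation is minimized during training. The loop below is a toy sketch added here, not the authors' implementation: it fixes r = 0.5, treats the lower equation as the crisp ODE i'(t) = -i(t) + sin(t), and uses a 4-neuron sigmoid hidden layer with central-difference gradients and plain gradient descent; the helper names `net`, `trial`, and `cost` are hypothetical:

```python
import math
import random

random.seed(1)
H = 4  # hidden sigmoid neurons

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(t, p):
    # One hidden layer: p packs input weights w, biases u, output weights v.
    return sum(p[2 * H + j] * sig(p[j] * t + p[H + j]) for j in range(H))

def trial(t, p, i0):
    # i_T(t) = i(0) + t * N(t; p): the initial condition holds by construction.
    return i0 + t * net(t, p)

def cost(p, i0):
    # Sum of squared residuals of i'(t) = -i(t) + sin(t) at the training points.
    e, h = 0.0, 1e-5
    for k in range(9):
        t = k / 8.0
        dT = (trial(t + h, p, i0) - trial(t - h, p, i0)) / (2.0 * h)
        e += (dT - (-trial(t, p, i0) + math.sin(t))) ** 2
    return e

i0 = 0.98  # crisp initial value: lower endpoint of i(0) at r = 0.5
p = [random.uniform(-0.5, 0.5) for _ in range(3 * H)]
c_start = cost(p, i0)
lr, h = 0.01, 1e-5
for _ in range(2000):  # plain gradient descent with central-difference gradients
    grad = []
    for j in range(len(p)):
        q = list(p)
        q[j] += h
        up = cost(q, i0)
        q[j] -= 2.0 * h
        down = cost(q, i0)
        grad.append((up - down) / (2.0 * h))
    p = [pj - lr * gj for pj, gj in zip(p, grad)]

# Reference: exact solution of the crisp problem i' = -i + sin t, i(0) = i0.
exact = lambda t: (i0 + 0.5) * math.exp(-t) + (math.sin(t) - math.cos(t)) / 2.0
print(c_start, cost(p, i0), abs(trial(0.5, p, i0) - exact(0.5)))
```

A quasi-Newton optimizer and analytic gradients would converge far faster; the point of the sketch is only the structure of the trial solution and the residual cost.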
Figs. 38 and 39 show the accuracy of the solution, E(t, r = 0.5), while Fig. 37 and Table 7 compare the exact and approximated solutions. Figs. 40–43 show the convergence behavior of the computed values of the weights w_{i1} and w_{i2}, the bias u, and the output-layer weights v as the number of iterations increases.
Fig. 40. Convergence of the weights w_{i1} for Example 5.6.
Fig. 41. Convergence of the weights w_{i2} for Example 5.6.
Fig. 42. Convergence of the bias u for Example 5.6.
Fig. 43. Convergence of the weights v for Example 5.6.
6. Concluding remarks
The use of FDEs is a natural way to model dynamical systems under possibilistic uncertainty. In this paper we presented a new method for solving fuzzy differential equations and demonstrated, for the first time, the ability of neural networks to approximate the solutions of FDEs. A comparison with the results obtained by numerical methods (e.g. [3]) shows that the proposed method yields more accurate approximations. Even better results (especially in nonlinear cases) may be obtained by using more neurons or more training points. Moreover, once an FDE has been solved, the solution is available at any point of the training interval (even between training points). The main reason for using neural networks is their capability as function approximators. Further research is in progress to apply and extend this new approach to nth-order FDEs as well as systems of FDEs.
Acknowledgments
The authors wish to thank the referees and the Editor-in-Chief for their kind comments and valuable remarks.
References
[1] S. Abbasbandy, Numerical methods for fuzzy differential inclusions, Computers and Mathematics with Applications 48 (2004) 1633–1641.
[2] S. Abbasbandy, T. Allahviranloo, Numerical solution of fuzzy differential equation by Runge–Kutta method, Nonlinear Studies 11 (1) (2004) 117–129.
[3] T. Allahviranloo, N. Ahmady, E. Ahmady, Numerical solution of fuzzy differential equations by predictor–corrector method, Information Sciences 177 (2007) 1633–1647.
[4] E. Babolian, H. Sadeghi, Sh. Javadi, Numerically solution of fuzzy differential equations by Adomian method, Applied Mathematics and Computation 149 (2004) 547–557.
[5] L.C. Barros, R.C. Bassanezi, P.A. Tonelli, Fuzzy modelling in population dynamics, Ecological Modelling 128 (2000) 27–33.
[6] B. Bede, S.G. Gal, Generalizations of the differentiability of fuzzy-number-valued functions with applications to fuzzy differential equations, Fuzzy Sets and Systems 151 (2005) 581–599.
[7] B. Bede, I.J. Rudas, A.L. Bencsik, First order linear fuzzy differential equations under generalized differentiability, Information Sciences 177 (2007) 1648–1662.
[8] J.J. Buckley, T. Feuring, Fuzzy differential equations, Fuzzy Sets and Systems 110 (2000) 43–54.
[9] Y.C. Cano, H.R. Flores, On new solutions of fuzzy differential equations, Chaos, Solitons and Fractals 38 (1) (2008) 112–119.
[10] S.S.L. Chang, L. Zadeh, On fuzzy mapping and control, IEEE Transactions on Systems, Man, and Cybernetics 2 (1972) 30–34.
[11] D. Dubois, H. Prade, Towards fuzzy differential calculus, Fuzzy Sets and Systems 8 (1982) 225–233.
[12] M. Friedman, M. Ma, A. Kandel, Numerical solutions of fuzzy differential and integral equations, Fuzzy Sets and Systems 106 (1999) 35–48.
[13] E. Hüllermeier, Numerical methods for fuzzy initial value problems, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 7 (1999) 439–461.
[14] R.M. Jafelice, L.C. Barros, F. Gomide, Fuzzy modeling in symptomatic HIV virus infected population, Bulletin of Mathematical Biology 66 (2004) 1597–1620.
[15] L.J. Jowers, J.J. Buckley, K.D. Reilly, Simulating continuous fuzzy systems, Information Sciences 177 (2007) 436–448.
[16] O. Kaleva, Fuzzy differential equations, Fuzzy Sets and Systems 24 (1987) 301–317.
[17] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (5) (1998) 987–1000.
[18] D.G. Luenberger, Linear and Nonlinear Programming, second ed., Addison-Wesley, 1984.
[19] M.T. Mizukoshi, L.C. Barros, Y. Chalco-Cano, H. Roman-Flores, R.C. Bassanezi, Fuzzy differential equations and the extension principle, Information Sciences 177 (2007) 3627–3635.
[20] J.J. Nieto, The Cauchy problem for continuous fuzzy differential equations, Fuzzy Sets and Systems 102 (1999) 259–262.
[21] P. Prakash, Existence of solutions of fuzzy neutral differential equations in Banach spaces, Dynamic Systems and Applications 14 (2005) 407–417.
[22] M.L. Puri, D. Ralescu, Fuzzy random variables, Journal of Mathematical Analysis and Applications 114 (1986) 409–422.
[23] M.L. Puri, D. Ralescu, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications 91 (1983) 552–558.
[24] S. Song, C. Wu, Existence and uniqueness of solutions to Cauchy problem of fuzzy differential equations, Fuzzy Sets and Systems 110 (2000) 55–67.
[25] L. Stefanini, L. Sorini, M.L. Guerra, Parametric representation of fuzzy numbers and application to fuzzy calculus, Fuzzy Sets and Systems 157 (2006) 2423–2455.
[26] C. Wu, S. Song, Existence theorem to the Cauchy problem of fuzzy differential equations under compactness-type conditions, Information Sciences 108 (1998) 123–134.
[27] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338–353.
[28] L.A. Zadeh, Toward a generalized theory of uncertainty (GTU) – an outline, Information Sciences 172 (2005) 1–40.