Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 212469, 11 pages
http://dx.doi.org/10.1155/2013/212469

Research Article
Stability Analysis of Stochastic Markovian Jump Neural Networks with Different Time Scales and Randomly Occurred Nonlinearities Based on Delay-Partitioning Projection Approach

Jianmin Duan,1 Manfeng Hu,1,2 Yongqing Yang,1,2 and Liuxiao Guo1

1 School of Science, Jiangnan University, Wuxi 214122, China
2 Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, Wuxi 214122, China

Correspondence should be addressed to Manfeng Hu; [email protected]

Received 27 June 2013; Accepted 2 October 2013

Academic Editor: Debora Amadori

Copyright © 2013 Jianmin Duan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, the mean square asymptotic stability of stochastic Markovian jump neural networks with different time scales and randomly occurred nonlinearities is investigated. In terms of the linear matrix inequality (LMI) approach and the delay-partitioning projection technique, delay-dependent stability criteria are derived for the considered neural networks for cases with or without the information of the delay rates via new Lyapunov-Krasovskii functionals. We also obtain that the thinner the delay is partitioned, the more obviously the conservatism can be reduced. An example with simulation results is given to show the effectiveness of the proposed approach.

1. Introduction

The human brain is made up of a large number of neurons and their interconnections. An artificial neural network is an information processing system that has certain characteristics in common with biological neural networks. During the past decades, neural networks have been used for a wide variety of applications, for example, associative memories, pattern recognition, signal processing, and other fields [1–4]. As is well known, the stability of neural networks plays an important role in modern control theories for these applications. However, time delays are often cited as a major source of instability in various engineering systems. Therefore, how to find sufficient conditions that guarantee the stability of neural networks with time delays is an important research topic [5–15].

The Markovian jump system is an important class of stochastic models, which can be described by a set of nonlinear systems with the transitions between modes determined by a Markovian chain in a finite mode set. This kind of system has been extensively applied to study the stability of neural networks. In real life, neural networks exhibit a phenomenon of information latching, and it is recognized that the best way to model this class of neural networks is the Markovian jump system [16–18]. Obviously, the Markovian jump system is more complex and challenging than the system without Markovian jump parameters, and many authors are interested in it [12, 19–24].

On the other hand, it should be pointed out that lots of practical systems are influenced by additive randomly occurred nonlinear disturbances which are caused by environmental circumstances. In today's networked environment, such nonlinear disturbances may be subject to random abrupt changes, which may result from abrupt phenomena such as random failures and repairs of the components, environmental disturbance, and so forth. In other words, the nonlinear disturbances may occur in a probabilistic way, but they are randomly changeable in terms of their types and/or intensity. These stochastic nonlinearities, which are named randomly occurred nonlinearities (RONs), have recently attracted much attention [25–28].


Recently, stability criteria have been obtained for some neural networks by introducing free-weighting matrices [29, 30], the model transformation method [31], and the linear matrix inequality (LMI) approach [32, 33], and by adopting the concept of delay partitioning [9, 11, 34]. Sufficient conditions for the robust stability of uncertain stochastic systems with interval time-varying delay were derived in [29] by employing the delay-partitioning approach. Delay-dependent conditions on the mean square asymptotic stability of stochastic neural networks with Markovian jumping parameters were presented in [12] by using a delay-partitioning method, which is different from the existing ones in the literature, together with the convex combination method. The RONs model and the sensor failure model were introduced in [26]. In [9], better delay-dependent stability criteria for continuous systems with multiple delay components were established by utilizing a delay-partitioning projection approach. On the other hand, it is worth mentioning that neural networks on time scales have been presented and studied [35–37], which can unify the continuous and discrete situations. To the best of the authors' knowledge, the delay-partitioning projection approach to stability analysis of stochastic Markovian jump neural networks with different time scales and randomly occurred nonlinearities has never been tackled in the previous literature. This motivates our research.

In this paper, the problem of stability analysis of stochastic Markovian jump neural networks with different time scales and RONs is considered. The paper is organised as follows. Section 2 introduces the model description and preliminaries. RONs are introduced to model a class of sector-like nonlinearities whose occurrence is governed by a Bernoulli distributed white sequence with a known conditional probability. In Section 3, we derive the stability results based on the delay-partitioning projection approach for stochastic Markovian jump neural networks with RONs. The results cover two cases, one with a specified delay rate and the other with an arbitrary delay rate. In addition to delay dependence, the obtained conditions are also dependent on the partitioning size; we verify that the conservatism of the conditions is a nonincreasing function of the number of partitions. A numerical example is presented in Section 4 to illustrate the effectiveness of the obtained criteria. Finally, conclusions are drawn in Section 5.

Notations. Throughout this paper, the notation is fairly standard. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n\times m}$ stands for the set of real matrices of size $n\times m$ (simply abbreviated $\mathbb{R}^n$ when $m = n$). $P > (<)\,0$ is used to denote a real symmetric positive definite (negative definite) matrix. For real symmetric matrices $X$ and $Y$, the notation $X \ge Y$ (resp., $X > Y$) means that the matrix $X - Y$ is positive semidefinite (resp., positive definite). The symmetric terms in a symmetric matrix are denoted by $*$, and $\mathrm{diag}\{\cdots\}$ denotes a block-diagonal matrix. The superscripts $A^T$ and $A^{-1}$ stand for the transpose and inverse of a matrix $A$. $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, \mathcal{P})$ denotes a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$, where $\Omega$ is a sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $E\{x\}$ stands for the expectation of the stochastic variable $x$.

2. Model Description and Preliminaries

In this paper, the following stochastic Markovian jump neural networks with different time scales and RONs are considered:

$$\varepsilon\dot{x}(t) = -A(r(t))x(t) + B(r(t))x(t-d(t)) + \xi(t)Ef(x(t)) + \big[C(r(t))x(t) + D(r(t))x(t-d(t))\big]W(t),$$
$$x(t) = \phi(t), \quad \forall t \in [-d, 0], \tag{1}$$

where $x(t)\in\mathbb{R}^n$ is the state vector, $W(t)$ is a scalar zero-mean Gaussian white noise process, $\phi(t)$ is a real-valued initial condition, and $\varepsilon > 0$ is the time scale. $\{r(t)\}$ is a right-continuous Markov chain on the probability space $(\Omega,\mathcal{F},\mathcal{P})$ taking values in a given finite set $S = \{1,2,\ldots,N\}$. The transition probabilities of system (1) are given by

$$P\{r(t+\Delta t) = j \mid r(t) = i\} = \begin{cases} q_{ij}\Delta t + o(\Delta t), & \text{if } i \ne j,\\ 1 + q_{ii}\Delta t + o(\Delta t), & \text{if } i = j, \end{cases} \tag{2}$$

where $\Delta t > 0$ and $\lim_{\Delta t\to 0} o(\Delta t)/\Delta t = 0$; $q_{ij} \ge 0$ ($i \ne j$) is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t+\Delta t$, and $q_{ii} = -\sum_{i\ne j} q_{ij}$. $A(r(t)) = \mathrm{diag}\{a_{i1}, a_{i2}, \ldots, a_{in}\}$ is a positive diagonal matrix, and $B(r(t))$, $C(r(t))$, $D(r(t))$, and $E$ are known matrices. For the sake of simplicity, in the sequel each possible value of $r(t)$ is denoted by $i$, $i \in S$, and $A(r(t))$, $B(r(t))$, $C(r(t))$, $D(r(t))$ are abbreviated as $A_i$, $B_i$, $C_i$, $D_i$, respectively. $d(t)$ denotes the time-varying delay satisfying

$$0 \le d(t) \le d, \qquad \dot{d}(t) \le \mu, \tag{3}$$
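The switching mechanism in (2) is straightforward to sample: over a small step $\Delta t$, a chain in mode $i$ jumps to mode $j \ne i$ with probability approximately $q_{ij}\Delta t$. The following sketch is our own illustration of this (the two-mode rate matrix is the one used later in the numerical example of Section 4; the step size and sample count are our choices):

```python
import numpy as np

# Empirical check of (2): the per-step switching frequency out of mode 0
# should approach q_01 * dt for small dt.
rng = np.random.default_rng(1)
Q = np.array([[-0.4, 0.4], [0.3, -0.3]])  # rate matrix from Section 4
dt, n = 0.01, 200_000

mode, switches_from_0, visits_0 = 0, 0, 0
for _ in range(n):
    if mode == 0:
        visits_0 += 1
        if rng.random() < Q[0, 1] * dt:   # jump 0 -> 1 with prob ~ q_01*dt
            mode = 1
            switches_from_0 += 1
    else:
        if rng.random() < Q[1, 0] * dt:   # jump 1 -> 0 with prob ~ q_10*dt
            mode = 0

p_hat = switches_from_0 / visits_0        # should be close to 0.4 * 0.01
```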

where $d$ and $\mu$ are constant real numbers. Finally, $f(x(t))$ stands for the mismatched external nonlinearity. The stochastic variable $\xi(t) \in \mathbb{R}$, which accounts for the phenomena of RONs, is a Bernoulli distributed white noise sequence specified by the following distribution laws:

$$\mathrm{Prob}\{\xi(t) = 1\} = E\{\xi(t)\} = \bar{\xi}, \qquad \mathrm{Prob}\{\xi(t) = 0\} = 1 - E\{\xi(t)\} = 1 - \bar{\xi}. \tag{4}$$

Furthermore, it follows that

$$E\{\xi(t) - \bar{\xi}\} = 0, \qquad E\{(\xi(t) - \bar{\xi})^2\} = \bar{\xi}(1 - \bar{\xi}), \tag{5}$$
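Property (5) is elementary to verify by simulation. The sketch below is our own illustration, using the value $\bar\xi = 0.56$ adopted later in Section 4:

```python
import numpy as np

# Empirical check of (5): a centered Bernoulli(xi_bar) sequence has
# mean 0 and variance xi_bar * (1 - xi_bar).
rng = np.random.default_rng(0)
xi_bar = 0.56
xi = (rng.random(1_000_000) < xi_bar).astype(float)  # Bernoulli samples

mean_c = np.mean(xi - xi_bar)   # should be close to 0
var_c = np.var(xi - xi_bar)     # should be close to 0.56 * 0.44 = 0.2464
```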

where $\bar\xi \in [0, 1]$ is a constant.

Before proceeding, we make the following assumption.

Assumption 1. For all $x \in \mathbb{R}^n$, the nonlinear function $f(x)$ is assumed to satisfy the following sector-bounded condition:

$$\big[f(x) - K_1 x\big]^T \big[f(x) - K_2 x\big] \le 0, \tag{6}$$

where $K_1$ and $K_2$ are known real matrices of appropriate dimensions and $K_1 - K_2 \ge 0$.
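One way to see what (6) requires is to note that any convex combination $f(x) = \lambda K_1 x + (1-\lambda)K_2 x$ with $\lambda \in [0,1]$ lies in the sector, since then $[f - K_1 x]^T[f - K_2 x] = \lambda(\lambda-1)\|(K_1 - K_2)x\|^2 \le 0$. The sketch below illustrates this numerically; the matrices $K_1$, $K_2$ here are random choices of our own (not those of Section 4):

```python
import numpy as np

# Sector-bounded condition (6): any convex combination of K1*x and K2*x
# satisfies it. K1, K2 and the test points are random (illustration only).
rng = np.random.default_rng(5)
n = 3
K2 = rng.standard_normal((n, n))
A = rng.standard_normal((n, n))
K1 = K2 + A @ A.T          # ensures K1 - K2 is positive semidefinite

worst = -np.inf
for _ in range(1000):
    x = rng.standard_normal(n)
    lam = rng.random()
    f = lam * (K1 @ x) + (1 - lam) * (K2 @ x)   # point inside the sector
    worst = max(worst, (f - K1 @ x) @ (f - K2 @ x))
# worst should never exceed 0 (up to rounding)
```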


Remark 2. The nonlinear function $f(x)$ satisfying (6) is customarily said to belong to the sector $[K_1, K_2]$. Because such a nonlinear condition is quite general and includes the usual Lipschitz condition as a special case, systems with sector-bounded nonlinearities have been intensively studied in [25, 26].

Recall that the time derivative of a Wiener process is a white noise [27, 28]. We set $dw(t) = W(t)\,dt$, where $w(t)$ is a scalar Wiener process on a probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},\mathcal{P})$, which is independent of the Markov chain $\{r(t), t \ge 0\}$. It is further assumed that $w(t)$ and the stochastic variable $\xi(t)$ are mutually independent. Besides, $w(t)$ satisfies

$$E\{dw(t)\} = 0, \qquad E\{dw^2(t)\} = dt. \tag{7}$$

Hence, the network (1) can be rewritten as the following stochastic differential equation:

$$dx(t) = \varepsilon^{-1}\big[-A(r(t))x(t) + B(r(t))x(t-d(t)) + \xi(t)Ef(x(t))\big]\,dt + \varepsilon^{-1}\big[C(r(t))x(t) + D(r(t))x(t-d(t))\big]\,dw(t),$$
$$x(t) = \phi(t), \quad \forall t \in [-d, 0]. \tag{8}$$

Remark 3. The stochastic Markovian neural network with different time scales and RONs (8) is general enough to include many models as special cases. For example, if we do not consider the RONs and time scales and remove the noise perturbations, then (8) reduces to the models studied in [9, 23, 38–41], where the authors in [9, 38–41] also ignore the Markovian jump parameters. If we do not consider the RONs and time scales, then (8) is the model discussed in [29] and its references. If we do not take the noise perturbations and time scales into account and let $\mathrm{Prob}\{\xi(t) = 1\} = 1$, $B(r(t)) = 0$, then (8) is just the one in [42]. If we do not consider the RONs, the Markovian jump parameters, and the noise perturbations, then (8) is the model studied in [37]. Furthermore, if we remove the time scales, (8) is the one in [43]. Thus, our model greatly generalizes and improves many previous works and is therefore very significant.

Remark 4. In [44], the delay-partitioning approach was introduced to study discrete-time recurrent neural networks. As an extension of that approach, we use two different partitions to deal with the continuous Markovian jump neural networks with different time scales in this paper.

Note that it is easy to prove that there exists at least one equilibrium point for (8) by employing the well-known Brouwer fixed-point theorem. To end this section, the lemmas necessary for the proof of our main results are introduced as follows.

Lemma 5. For any constant matrix $R > 0$, any scalars $a$ and $b$ with $a < b$, and a vector function $x(t)\colon [a,b] \to \mathbb{R}^n$ such that the integrations concerned are well defined, the following inequality holds:

$$\Big[\int_a^b x(s)\,ds\Big]^T R \Big[\int_a^b x(s)\,ds\Big] \le (b-a)\int_a^b x^T(s) R x(s)\,ds. \tag{9}$$
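Inequality (9) can be illustrated numerically with a Riemann-sum discretization, under which it reduces to a discrete Cauchy-Schwarz-type inequality that holds exactly. The sketch below is our own illustration; $R$ and the samples of $x(s)$ are random:

```python
import numpy as np

# Numeric illustration of Lemma 5 (a Jensen-type inequality) on a
# Riemann-sum discretization of [a, b].
rng = np.random.default_rng(2)
n, N = 3, 500
a, b = 0.0, 2.0
ds = (b - a) / N

M_ = rng.standard_normal((n, n))
R = M_ @ M_.T + n * np.eye(n)            # random positive definite R
xs = rng.standard_normal((N, n))         # samples of x(s) on [a, b]

I = xs.sum(axis=0) * ds                  # approximates the integral of x(s)
lhs = I @ R @ I
rhs = (b - a) * ds * np.einsum('ij,jk,ik->', xs, R, xs)  # (b-a)*sum x^T R x ds
# lhs <= rhs holds for every sample path
```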

Lemma 6 (see [9]). Let $Z \in \mathbb{R}^{n\times n}$ and let the bidiagonal upper triangular block matrix

$$J_K(Z) = \begin{pmatrix} I_n & -Z & & 0\\ & \ddots & \ddots & \\ & & \ddots & -Z\\ 0 & & & I_n \end{pmatrix} \in \mathbb{R}^{Kn \times Kn}. \tag{10}$$

If $Y = (J_K(Z)\ \ F) \in \mathbb{R}^{Kn\times(Kn+m)}$ with $F = \begin{pmatrix} F_1\\ \vdots\\ F_K\end{pmatrix} \in \mathbb{R}^{Kn\times m}$, $F_j \in \mathbb{R}^{n\times m}$ $(j = 1,\ldots,K)$, then

$$Y^{\perp} = \mathrm{col}\Big\{-\sum_{j=1}^{K} Z^{j-1}F_j,\ -\sum_{j=2}^{K} Z^{j-2}F_j,\ \ldots,\ -F_K,\ I_m\Big\}. \tag{11}$$
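Formula (11) can be sanity-checked numerically: stacking the blocks $-\sum_{l\ge j} Z^{l-j}F_l$ over $I_m$ gives a matrix annihilated by $Y$. Below is a small random instance ($n$, $m$, $K$ are our own choices):

```python
import numpy as np

# Sanity check of Lemma 6: Y @ Yperp should vanish for the Yperp of (11).
rng = np.random.default_rng(3)
n, m, K = 2, 3, 4
Z = rng.standard_normal((n, n))
Fs = [rng.standard_normal((n, m)) for _ in range(K)]

JK = np.eye(K * n)
for j in range(K - 1):                       # -Z on the block superdiagonal
    JK[j*n:(j+1)*n, (j+1)*n:(j+2)*n] = -Z
Y = np.hstack([JK, np.vstack(Fs)])           # Y = (J_K(Z)  F)

blocks = []
for j in range(K):                           # block j: -sum_{l>=j} Z^{l-j} F_l
    acc = sum(np.linalg.matrix_power(Z, l - j) @ Fs[l] for l in range(j, K))
    blocks.append(-acc)
Yperp = np.vstack(blocks + [np.eye(m)])      # col{...} of (11)
```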

Lemma 7 (Finsler's lemma). Suppose that $x \in \mathbb{R}^n$, $M = M^T \in \mathbb{R}^{n\times n}$, and $B \in \mathbb{R}^{m\times n}$ has full row rank. Then the following statements are equivalent:

(1) $x^T M x < 0$ for all $x \ne 0$ such that $Bx = 0$;
(2) there exists a scalar $\mu \in \mathbb{R}$ such that $\mu B^T B - M > 0$;
(3) there exists $X \in \mathbb{R}^{n\times m}$ such that $M + XB + B^T X^T < 0$;
(4) $B^{\perp T} M B^{\perp} < 0$, where $B^{\perp}$ is the orthogonal complement of $B$.
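The link between statements (2) and (4) can be illustrated on a small random instance, constructing $M$ so that (4) holds and then exhibiting a scalar $\mu$ for (2). This is a sketch with our own construction; $B^{\perp}$ is taken as an orthonormal basis of $\ker B$ from the SVD:

```python
import numpy as np

# Numeric illustration of Finsler's lemma: for an M built so that
# Bperp^T M Bperp < 0 (statement 4), a large enough mu makes
# mu * B^T B - M positive definite (statement 2).
rng = np.random.default_rng(4)
m_, n_ = 2, 5
B = rng.standard_normal((m_, n_))            # full row rank w.p. 1
_, _, Vt = np.linalg.svd(B)
Bperp = Vt[m_:].T                            # orthonormal basis of ker(B)

M = B.T @ B - Bperp @ Bperp.T                # chosen so that (4) holds
cond4 = np.linalg.eigvalsh(Bperp.T @ M @ Bperp).max() < 0

mu = 10.0
cond2 = np.linalg.eigvalsh(mu * B.T @ B - M).min() > 0
```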

Remark 8. It should be pointed out that various problems in control theory have been solved by combining the Lyapunov approach with Finsler's lemma. In many applications, Finsler's lemma is referred to as the Elimination lemma, since it serves to eliminate redundant variables in matrix inequalities [45].

3. Main Results

In this section, we establish the mean square asymptotic stability of system (8) in terms of the LMI approach. The main results are stated as follows.

Theorem 9. For given constants $d$ and $\mu$ and two positive integers $m$ and $M$, the stochastic system (8) is mean square asymptotically stable if there exist matrices $P_i > 0$, $Q_k > 0$ $(k = 1,\ldots,m)$, $R > 0$, $X_1 > 0$, $X_2 > 0$,

$$W = W^T = \begin{pmatrix} W_{11} & \cdots & W_{1M}\\ \vdots & \ddots & \vdots\\ W_{1M}^T & \cdots & W_{MM}\end{pmatrix} > 0, \tag{12}$$

and a positive scalar $l > 0$ such that the following LMI holds:

$$B^{\perp T}\begin{pmatrix} \Xi_1 + \Xi_2 + \Xi_3 & 0\\ * & \Xi_4 \end{pmatrix} B^{\perp} < 0, \tag{13}$$

where $B^{\perp} \in \mathbb{R}^{(2m+2M+3)n \times (m+M+3)n}$ is an arbitrary but fixed matrix whose columns form a basis of the kernel space of $B_{(m+M)n\times(2m+2M+3)n} = (J_{m+M}(I_n)\ \ F)$,

$$F = \begin{pmatrix} 0 & 0 & 0 & -I_n & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & & & \ddots & 0\\ -I_n & 0 & 0 & 0 & \cdots & 0 & -I_n \end{pmatrix} \in \mathbb{R}^{(m+M)n\times(m+M+3)n},$$

$$\Xi_1 = \begin{pmatrix}
\Omega_1 & 0 & \cdots & \varepsilon^{-1}P_i B_i & 0 & W_{1M} & \cdots & W_{12} & \Delta_1\\
* & \Omega_2 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
* & * & \cdots & \Omega_{m+1} & 0 & 0 & \cdots & 0 & 0\\
* & * & \cdots & * & \Gamma_{M+1} & -W^T_{(M-1)M} & \cdots & -W^T_{1M} & 0\\
* & * & \cdots & * & * & \Gamma_M & \cdots & W^T_{2M} - W^T_{1(M-1)} & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
* & * & \cdots & * & * & * & \cdots & \Gamma_2 & 0\\
* & * & \cdots & * & * & * & \cdots & * & \Delta_2
\end{pmatrix},$$

$$\Omega_1 = -2\varepsilon^{-1}P_i A_i + \sum_{j=1}^{N} q_{ij}P_j + W_{11} + \sum_{i=1}^{m} Q_i - l\tilde{K}_1,$$

$$\Delta_1 = -l\tilde{K}_2 + \bar\xi\varepsilon^{-1}P_i E, \qquad \Delta_2 = -lI + R + E^T\big(\varepsilon^{-2}(\bar\xi - \bar\xi^{\,2})h(d-h)X_1 + \varepsilon^{-2}(\bar\xi - \bar\xi^{\,2})d\sigma X_2\big)E,$$

$$\Omega_k = \Big(\frac{(k-1)\mu}{m} - 1\Big)Q_{k-1}, \quad (k = 2,\ldots,m+1),$$

$$\Gamma_k = W_{kk} - W_{(k-1)(k-1)}, \quad W_{(M+1)(M+1)} = 0, \quad (k = 2,\ldots,M+1),$$

$$\Xi_2 = \big(-A_i\ \ 0\ \cdots\ 0\ \ B_i\ \ 0\ \cdots\ 0\ \ \bar\xi E\big)^T\big(\varepsilon^{-2}h(d-h)X_1 + \varepsilon^{-2}d\sigma X_2\big)\big(-A_i\ \ 0\ \cdots\ 0\ \ B_i\ \ 0\ \cdots\ 0\ \ \bar\xi E\big),$$

$$\Xi_3 = \big(C_i\ \ 0\ \cdots\ 0\ \ D_i\ \ 0\ \cdots\ 0\big)^T\big(\varepsilon^{-2}(d-h)X_1 + \varepsilon^{-2}dX_2 + \varepsilon^{-2}P_i\big)\big(C_i\ \ 0\ \cdots\ 0\ \ D_i\ \ 0\ \cdots\ 0\big),$$

$$\Xi_4 = \mathrm{diag}\big\{(\mu-1)R,\ -X_2,\ \ldots,\ -X_2,\ -m^{-1}X_2,\ -X_1,\ \ldots,\ -X_1\big\},$$

$$\tilde{K}_1 = \frac{K_1^T K_2 + K_2^T K_1}{2}, \qquad \tilde{K}_2 = -\frac{K_1^T + K_2^T}{2}. \tag{14}$$

Proof. For presentation convenience, in the following we denote

$$y(t) = \varepsilon^{-1}\big[-A(r(t))x(t) + B(r(t))x(t-d(t)) + \xi(t)Ef(x(t))\big],$$
$$m(t) = \varepsilon^{-1}\big[C(r(t))x(t) + D(r(t))x(t-d(t))\big]; \tag{15}$$

then system (8) becomes

$$dx(t) = y(t)\,dt + m(t)\,dw(t). \tag{16}$$

By employing the idea of the delay-partitioning approach applied to the time delay $d(t)$, we choose the following Lyapunov-Krasovskii functional candidate:

$$\begin{aligned} V(x(t),t,i) ={}& x^T(t)P_i x(t) + \sum_{k=1}^{m}\int_{t-\alpha_k(t)}^{t} x^T(s)Q_k x(s)\,ds\\ &+ \int_{t-h}^{t}\gamma^T(s)W\gamma(s)\,ds + \int_{t-d(t)}^{t} f^T(x(s))Rf(x(s))\,ds\\ &+ h\int_{-d}^{-h}\int_{t+\theta}^{t} y^T(s)X_1 y(s)\,ds\,d\theta + \int_{-d}^{-h}\int_{t+\theta}^{t} m^T(s)X_1 m(s)\,ds\,d\theta\\ &+ \sigma\int_{-d}^{0}\int_{t+\theta}^{t} y^T(s)X_2 y(s)\,ds\,d\theta + \int_{-d}^{0}\int_{t+\theta}^{t} m^T(s)X_2 m(s)\,ds\,d\theta, \end{aligned} \tag{17}$$

where

$$\gamma^T(t) = \big(x^T(t)\ \ x^T(t-h)\ \cdots\ x^T(t-(M-1)h)\big),$$
$$\sigma(t) = \frac{d(t)}{m}, \qquad h = \frac{d}{M}, \qquad \sigma = \frac{d}{m}, \qquad \alpha_k(t) = k\sigma(t), \quad (k = 1,\ldots,m). \tag{18}$$

Let $\mathcal{L}$ be the weak infinitesimal generator of the random process $\{x(t), r(t), t \ge 0\}$ along the trajectory of system (8). Then, by the Itô differential formula, we have

$$\begin{aligned} \mathcal{L}V(x(t),t,i) ={}& 2x^T(t)P_i y(t) + x^T(t)\sum_{j=1}^{N}q_{ij}P_j x(t) + m^T(t)P_i m(t)\\ &+ \sum_{i=1}^{m} x^T(t)Q_i x(t) - \sum_{i=1}^{m}\big(1-\dot\alpha_i(t)\big)x^T(t-\alpha_i(t))Q_i x(t-\alpha_i(t))\\ &+ \gamma^T(t)W\gamma(t) - \gamma^T(t-h)W\gamma(t-h)\\ &+ f^T(x(t))Rf(x(t)) - \big(1-\dot d(t)\big)f^T(x(t-d(t)))Rf(x(t-d(t)))\\ &+ h(d-h)y^T(t)X_1 y(t) - h\sum_{k=2}^{M}\int_{t-kh}^{t-(k-1)h} y^T(s)X_1 y(s)\,ds\\ &+ (d-h)m^T(t)X_1 m(t) - \sum_{k=2}^{M}\int_{t-kh}^{t-(k-1)h} m^T(s)X_1 m(s)\,ds\\ &+ \sigma d\, y^T(t)X_2 y(t) - \sigma\sum_{i=1}^{m}\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} y^T(s)X_2 y(s)\,ds - \sigma\int_{t-d}^{t-d(t)} y^T(s)X_2 y(s)\,ds\\ &+ d\,m^T(t)X_2 m(t) - \sum_{i=1}^{m}\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} m^T(s)X_2 m(s)\,ds - \int_{t-d}^{t-d(t)} m^T(s)X_2 m(s)\,ds. \end{aligned} \tag{19}$$

By virtue of (3), (7), and (9), it can be verified that

$$\begin{aligned} E\{\mathcal{L}V(x(t),t,i)\} \le{}& E\Big\{ 2x^T(t)P_i y(t) + x^T(t)\sum_{j=1}^{N}q_{ij}P_j x(t) + m^T(t)P_i m(t)\\ &- \sum_{i=1}^{m}\Big(1-\frac{i}{m}\mu\Big)x^T(t-\alpha_i(t))Q_i x(t-\alpha_i(t)) + \sum_{i=1}^{m}x^T(t)Q_i x(t)\\ &+ \gamma^T(t)W\gamma(t) - \gamma^T(t-h)W\gamma(t-h)\\ &+ f^T(x(t))Rf(x(t)) - (1-\mu)f^T(x(t-d(t)))Rf(x(t-d(t)))\\ &+ h(d-h)y^T(t)X_1 y(t) + (d-h)m^T(t)X_1 m(t)\\ &+ \sigma d\,y^T(t)X_2 y(t) + d\,m^T(t)X_2 m(t)\\ &- \sum_{k=2}^{M}\Big(\int_{t-kh}^{t-(k-1)h} y(s)\,ds\Big)^T X_1\Big(\int_{t-kh}^{t-(k-1)h} y(s)\,ds\Big)\\ &- \sum_{k=2}^{M}\Big(\int_{t-kh}^{t-(k-1)h} m(s)\,d\omega(s)\Big)^T X_1\Big(\int_{t-kh}^{t-(k-1)h} m(s)\,d\omega(s)\Big)\\ &- \sum_{i=1}^{m}\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} y(s)\,ds\Big)^T X_2\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} y(s)\,ds\Big)\\ &- \frac{1}{m}\Big(\int_{t-d}^{t-d(t)} y(s)\,ds\Big)^T X_2\Big(\int_{t-d}^{t-d(t)} y(s)\,ds\Big)\\ &- \sum_{i=1}^{m}\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} m(s)\,d\omega(s)\Big)^T X_2\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)} m(s)\,d\omega(s)\Big)\\ &- \frac{1}{m}\Big(\int_{t-d}^{t-d(t)} m(s)\,d\omega(s)\Big)^T X_2\Big(\int_{t-d}^{t-d(t)} m(s)\,d\omega(s)\Big)\Big\}. \end{aligned} \tag{20}$$

On the other hand, note that (6) is equivalent to

$$\begin{pmatrix} x\\ f(x)\end{pmatrix}^T\begin{pmatrix} \tilde{K}_1 & \tilde{K}_2\\ * & I\end{pmatrix}\begin{pmatrix} x\\ f(x)\end{pmatrix} \le 0, \tag{21}$$

which implies

$$-l\big[f^T(x(t))f(x(t)) + f^T(x(t))\tilde{K}_2^T x(t) + x^T(t)\tilde{K}_2 f(x(t)) + x^T(t)\tilde{K}_1 x(t)\big] \ge 0, \tag{22}$$

where $l$ is a positive constant. Substituting (5), (7), (15), and (22) into (20), we can obtain

$$\begin{aligned} E\{\mathcal{L}V(x(t),t,i)\} \le{}& E\Big\{2\varepsilon^{-1}x^T(t)P_i\big(-A_i x(t) + B_i x(t-d(t))\big) - lx^T(t)\tilde K_1 x(t)\\ &+ 2x^T(t)\big(\varepsilon^{-1}\bar\xi P_i E - l\tilde K_2\big)f(x(t)) + x^T(t)\sum_{j=1}^N q_{ij}P_j x(t) + m^T(t)P_i m(t)\\ &- \sum_{i=1}^m\Big(1-\frac{i}{m}\mu\Big)x^T(t-\alpha_i(t))Q_i x(t-\alpha_i(t)) + \sum_{i=1}^m x^T(t)Q_i x(t)\\ &+ \gamma^T(t)W\gamma(t) - \gamma^T(t-h)W\gamma(t-h) + f^T(x(t))(R-lI)f(x(t))\\ &- (1-\mu)f^T(x(t-d(t)))Rf(x(t-d(t)))\\ &+ h(d-h)y^T(t)X_1y(t) + (d-h)m^T(t)X_1m(t) + \sigma d\,y^T(t)X_2y(t) + d\,m^T(t)X_2m(t)\\ &- \sum_{k=2}^M\Big(\int_{t-kh}^{t-(k-1)h}y(s)\,ds\Big)^T X_1\Big(\int_{t-kh}^{t-(k-1)h}y(s)\,ds\Big)\\ &- \sum_{k=2}^M\Big(\int_{t-kh}^{t-(k-1)h}m(s)\,d\omega(s)\Big)^T X_1\Big(\int_{t-kh}^{t-(k-1)h}m(s)\,d\omega(s)\Big)\\ &- \sum_{i=1}^m\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)}y(s)\,ds\Big)^T X_2\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)}y(s)\,ds\Big)\\ &- \frac{1}{m}\Big(\int_{t-d}^{t-d(t)}y(s)\,ds\Big)^T X_2\Big(\int_{t-d}^{t-d(t)}y(s)\,ds\Big)\\ &- \sum_{i=1}^m\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)}m(s)\,d\omega(s)\Big)^T X_2\Big(\int_{t-\alpha_i(t)}^{t-\alpha_{i-1}(t)}m(s)\,d\omega(s)\Big)\\ &- \frac{1}{m}\Big(\int_{t-d}^{t-d(t)}m(s)\,d\omega(s)\Big)^T X_2\Big(\int_{t-d}^{t-d(t)}m(s)\,d\omega(s)\Big)\Big\}\\ ={}& E\Big\{\xi^T(t)\begin{pmatrix}\Xi_1+\Xi_2+\Xi_3 & 0\\ * & \Xi_4\end{pmatrix}\xi(t)\Big\}, \end{aligned} \tag{23}$$


where

$$\xi(t) = \big(\zeta_1^T(t)\ \ \zeta_2^T(t)\ \ f^T(x(t))\ \ f^T(x(t-d(t)))\ \ \zeta^T(t)\big)^T,$$
$$\zeta_1(t) = \big(x^T(t)\ \ x^T(t-\alpha_1(t))\ \cdots\ x^T(t-\alpha_m(t))\big)^T,$$
$$\zeta_2(t) = \big(x^T(t-d)\ \ x^T(t-(M-1)h)\ \cdots\ x^T(t-h)\big)^T,$$
$$\zeta(t) = \Big(\delta_1\ \cdots\ \delta_m\ \ \int_{t-d}^{t-d(t)}y^T(s)\,ds + \int_{t-d}^{t-d(t)}m^T(s)\,d\omega(s)\ \ \theta_{M-1}\ \cdots\ \theta_1\Big)^T,$$
$$\delta_k = \int_{t-\alpha_k(t)}^{t-\alpha_{k-1}(t)} y^T(s)\,ds + \int_{t-\alpha_k(t)}^{t-\alpha_{k-1}(t)} m^T(s)\,d\omega(s), \quad (k = 1,\ldots,m),$$
$$\theta_k = \int_{t-(k+1)h}^{t-kh} y^T(s)\,ds + \int_{t-(k+1)h}^{t-kh} m^T(s)\,d\omega(s), \quad (k = 1,\ldots,M-1). \tag{24}$$

In addition, it follows from the Newton-Leibniz formula that

$$x(t) - x(t-\alpha_1(t)) - \int_{t-\alpha_1(t)}^{t} y(s)\,ds - \int_{t-\alpha_1(t)}^{t} m(s)\,d\omega(s) = 0,$$
$$\vdots$$
$$x(t-\alpha_{m-1}(t)) - x(t-\alpha_m(t)) - \int_{t-\alpha_m(t)}^{t-\alpha_{m-1}(t)} y(s)\,ds - \int_{t-\alpha_m(t)}^{t-\alpha_{m-1}(t)} m(s)\,d\omega(s) = 0,$$
$$x(t-\alpha_m(t)) - x(t-d) - \int_{t-d}^{t-\alpha_m(t)} y(s)\,ds - \int_{t-d}^{t-\alpha_m(t)} m(s)\,d\omega(s) = 0,$$
$$x(t-d) - x(t-(M-1)h) + \int_{t-d}^{t-(M-1)h} y(s)\,ds + \int_{t-d}^{t-(M-1)h} m(s)\,d\omega(s) = 0,$$
$$\vdots$$
$$x(t-2h) - x(t-h) + \int_{t-2h}^{t-h} y(s)\,ds + \int_{t-2h}^{t-h} m(s)\,d\omega(s) = 0, \tag{25}$$

which are equivalent to

$$B\xi(t) = (J_{m+M}(I_n)\ \ F)\,\xi(t) = \begin{pmatrix} I_n & -I_n & 0 & \cdots & 0 & 0 & 0 & -I_n & 0 & \cdots & 0\\ 0 & I_n & -I_n & \cdots & 0 & 0 & 0 & 0 & -I_n & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & I_n & -I_n & 0 & 0 & 0 & 0 & \cdots & -I_n \end{pmatrix}\xi(t) = 0. \tag{26}$$

The full column rank matrix representation of the right orthogonal complement of $B$ is denoted by $B^{\perp}$, and Lemma 6 offers a computation method with $Z = I_n$:

$$B^{\perp} = \mathrm{col}\Big\{-\sum_{j=1}^{K}F_j,\ -\sum_{j=2}^{K}F_j,\ \ldots,\ -F_K,\ I_m\Big\}.$$

According to Lemma 7, $E\{\mathcal{L}V(x(t),t,i)\}$ is negative as long as

$$\xi^T(t)\Big(\mu B^T B - \begin{pmatrix}\Xi_1+\Xi_2+\Xi_3 & 0\\ * & \Xi_4\end{pmatrix}\Big)\xi(t) > 0 \tag{27}$$

holds, which, in other words, can be expressed as (13). Hence, the stochastic system (8) is asymptotically stable in the mean square, and this completes the proof.

Remark 10. The delay-partitioning projection technique employed in this paper constitutes the major improvement over most existing results in the literature. Firstly, it should be pointed out that the technique is very rational, for two reasons. (1) The properties of the subinterval delays may be sharply different in many practical situations, so it is not reasonable to combine them together. (2) When $d(t)$ reaches its upper bound, the subinterval delays do not necessarily all reach their maxima at the same time. That is to say, if we use a single upper bound for the delay $d(t)$, we have to use the sum of the maxima of the subinterval delays; however, $d(t)$ usually does not achieve this maximum value. Therefore, by adopting the delay-partitioning projection approach, less conservative conditions can be proposed. Secondly, in [46], the central point of variation of the delay was introduced to study the stability of time-delay systems, which is called the DCP method. As an extension of that method, we divide the delay into several subintervals, which permits employing a slightly different function on each subinterval. This treatment lets us utilize more information on the time delay and thus may yield better criteria with less conservatism. Thirdly, two different partitions are used to deal with the time-varying delay in this paper, which differs from the approach in [43]. The parameters $m$ and $M$ refer to the numbers of delay partitions, and they indicate that the solution can be searched in a wider space, which leads to a reduction of conservatism. Finally, using an analysis similar to that of [7], it is easy to verify that the conservatism of the conditions is a nonincreasing function of the number of delay partitions. However, as is well known, the computational complexity increases as the partitioning becomes thinner. Therefore, the delay-partitioning projection approach provides the flexibility to trade off between complexity and performance of the stability analysis.


We now consider the case where the time-varying delay is nondifferentiable or the bound of its derivative is unknown, which means that the restriction on the delay rate is removed. For this goal, we modify the Lyapunov-Krasovskii functional as

$$\begin{aligned} \bar{V}(x(t),t,i) ={}& x^T(t)P_i x(t) + \int_{t-h}^{t}\gamma^T(s)W\gamma(s)\,ds + \int_{t-d}^{t} f^T(x(s))Rf(x(s))\,ds\\ &+ h\int_{-d}^{-h}\int_{t+\theta}^{t} y^T(s)X_1 y(s)\,ds\,d\theta + \int_{-d}^{-h}\int_{t+\theta}^{t} m^T(s)X_1 m(s)\,ds\,d\theta\\ &+ \sigma\int_{-d}^{0}\int_{t+\theta}^{t} y^T(s)X_2 y(s)\,ds\,d\theta + \int_{-d}^{0}\int_{t+\theta}^{t} m^T(s)X_2 m(s)\,ds\,d\theta. \end{aligned} \tag{28}$$

The proof can then be derived by following a line of arguments similar to that of Theorem 9. Besides, $f(x(t-d(t)))$ in $\xi(t)$ turns into $f(x(t-d))$.

Theorem 11. For a given constant $d$ and two positive integers $m$ and $M$, the stochastic system (8) is mean square asymptotically stable if there exist matrices $P_i > 0$, $R > 0$, $X_1 > 0$, $X_2 > 0$,

$$W = W^T = \begin{pmatrix} W_{11} & \cdots & W_{1M}\\ \vdots & \ddots & \vdots\\ W_{1M}^T & \cdots & W_{MM}\end{pmatrix} > 0, \tag{29}$$

and a positive scalar $l > 0$ such that the following LMI holds:

$$B^{\perp T}\begin{pmatrix}\Xi_1 + \Xi_2 + \Xi_3 & 0\\ * & \Xi_4\end{pmatrix}B^{\perp} < 0, \tag{30}$$

where

$$\Xi_1 = \begin{pmatrix}
\Omega_1 & 0 & \cdots & \varepsilon^{-1}P_i B_i & 0 & W_{1M} & \cdots & W_{12} & \Delta_1\\
* & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
* & * & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
* & * & \cdots & * & \Gamma_{M+1} & -W^T_{(M-1)M} & \cdots & -W^T_{1M} & 0\\
* & * & \cdots & * & * & \Gamma_M & \cdots & W^T_{2M} - W^T_{1(M-1)} & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
* & * & \cdots & * & * & * & \cdots & \Gamma_2 & 0\\
* & * & \cdots & * & * & * & \cdots & * & \Delta_2
\end{pmatrix},$$

$$\Omega_1 = -2\varepsilon^{-1}P_i A_i + \sum_{j=1}^{N}q_{ij}P_j + W_{11} - l\tilde{K}_1, \qquad \Xi_4 = \mathrm{diag}\big\{-R,\ -X_2,\ \ldots,\ -X_2,\ -m^{-1}X_2,\ -X_1,\ \ldots,\ -X_1\big\}, \tag{31}$$

and $B^{\perp}$, $\Delta_1$, $\Delta_2$, $\Gamma_k$, $\Xi_2$, $\Xi_3$, $\tilde{K}_1$, $\tilde{K}_2$ are the same as in Theorem 9.

Remark 12. Obviously, the delay rate is not specified in Theorem 11, which includes the constant time delay as a special case. Therefore, Theorem 11 can be applied to this special case to derive corresponding results.

Remark 13. In previous works such as [47], the authors investigated the stability of neutral-type neural networks with discrete and distributed delays. The reason why we do not take these neural networks into account is to keep our idea lucid and to avoid complicated notation. However, it is not difficult to extend our results to neutral-type neural networks with mixed time-varying delays. The results will appear in our future study.

4. A Numerical Example

One illustrative example is presented to show the effectiveness of the theoretical results. For simplicity, the system (8) with Markovian switching between two modes is taken into consideration. In addition, the state vector of each node is of dimension two; in other words, $N = 2$, $n = 2$. The mode switching is governed by the rate matrix $\begin{pmatrix} -0.4 & 0.4\\ 0.3 & -0.3\end{pmatrix}$, and the other parameters are assumed as

$$A_1 = \begin{pmatrix}3.3 & 0\\ 0 & 3.4\end{pmatrix}, \quad A_2 = \begin{pmatrix}4.4 & 0\\ 0 & 4.5\end{pmatrix}, \quad B_1 = \begin{pmatrix}-0.3 & -0.6\\ -0.5 & 0.5\end{pmatrix}, \quad B_2 = \begin{pmatrix}-2.2 & -0.5\\ -1.2 & 1.2\end{pmatrix},$$
$$E = \begin{pmatrix}-1 & 0\\ 0 & -0.1\end{pmatrix}, \quad C_1 = \begin{pmatrix}0.1 & 0.2\\ 0.1 & 0.5\end{pmatrix}, \quad C_2 = \begin{pmatrix}0.5 & 1\\ 0.2 & 0.1\end{pmatrix}, \quad D_1 = \begin{pmatrix}0.1 & -0.2\\ -0.5 & 0.4\end{pmatrix}, \quad D_2 = \begin{pmatrix}0.2 & -0.5\\ -0.1 & 0.2\end{pmatrix},$$
$$\bar\xi = 0.56, \qquad \varepsilon = 1, \qquad d(t) = 0.8 + 0.5\sin\frac{\pi t}{2}. \tag{32}$$

The sector-bounded nonlinear function f(x(t)) is given as follows:

f(x(t)) = (1/2) ( 0.3(x₁(t) + x₂(t)) / (1 + x₁²(t) + x₂²(t)) + 0.1x₁(t) + 0.1x₂(t),   0.3x₁(t) + 0.3x₂(t) )ᵀ,

(33)
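To make the nonlinearity concrete, (33) can be evaluated directly; the helper `f` below is a hypothetical implementation written only for illustration. Note that f(0) = 0, so the origin is an equilibrium of (8).

```python
def f(x1, x2):
    # Sector-bounded nonlinearity from Eq. (33): each component is half of
    # a bounded rational term plus a linear term.
    common = 0.3 * (x1 + x2) / (1.0 + x1**2 + x2**2)
    f1 = 0.5 * (common + 0.1 * x1 + 0.1 * x2)
    f2 = 0.5 * (0.3 * x1 + 0.3 * x2)
    return (f1, f2)

print(f(0.0, 0.0))  # (0.0, 0.0): the origin is an equilibrium
print(f(1.0, 0.0))  # ≈ (0.125, 0.15)
```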

which can be bounded by

K₁ = (0.2 0.1; 0 0.2),   K₂ = (−0.1 0; −0.1 0.1).   (34)

We choose the parameters m = 2 and M = 2. It is easy to see that d = 1.3 and μ = 0.5.
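The bound d = 1.3 follows directly from the delay in (32): for integer t the factor sin(tπ/2) cycles through {0, 1, 0, −1}, so d(t) peaks at 0.8 + 0.5 = 1.3. A one-line numerical confirmation (a sketch, assuming the sin(tπ/2) reading of (32)):

```python
import math

# Time-varying delay from Eq. (32), evaluated at integer t.
d = lambda t: 0.8 + 0.5 * math.sin(t * math.pi / 2.0)

values = [d(t) for t in range(8)]  # two full periods, t = 0..7
d_max, d_min = max(values), min(values)
print(d_max, d_min)  # ≈ 1.3 and ≈ 0.3
```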

By using the Matlab LMI Control Toolbox, we can find a feasible solution of the LMI (13) as follows:

P₁ = (1.8392 −0.8471; −0.8471 4.5444),   P₂ = (2.0534 −0.5930; −0.5930 3.9335),

Q₁ = (0.4422 −0.3769; −0.3769 1.8266),   Q₂ = (7.6178 −2.8997; −2.8997 11.9753),

X₁ = (0.0594 −0.1562; −0.1562 0.4677),   X₂ = (0.0663 −0.1841; −0.1841 0.5600),

R = (0.2487 −0.0084; −0.0084 0.5947),

W₁₁ = (1.1837 −0.4007; −0.4007 3.6472),   W₁₂ = (0.0111 −0.1045; −0.1045 0.4755),

W₂₂ = (0.7856 −0.4878; −0.4878 2.4023).

(35)
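An independent spot check on (35) (a minimal sketch, assuming only Sylvester's criterion, not the full LMI (13)): each symmetric Lyapunov block returned by the solver must be positive definite, i.e. positive (1,1) entry and positive determinant. The off-diagonal block W₁₂ is exempt, since only the assembled block matrix W needs to be definite, not W₁₂ on its own.

```python
def is_pos_def_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix: both leading
    # principal minors, m[0][0] and det(m), must be positive.
    (a, b), (c, d) = m
    return a > 0 and a * d - b * c > 0

# Diagonal blocks of the feasible solution (35).
solution = {
    "P1":  [[1.8392, -0.8471], [-0.8471, 4.5444]],
    "P2":  [[2.0534, -0.5930], [-0.5930, 3.9335]],
    "Q1":  [[0.4422, -0.3769], [-0.3769, 1.8266]],
    "Q2":  [[7.6178, -2.8997], [-2.8997, 11.9753]],
    "X1":  [[0.0594, -0.1562], [-0.1562, 0.4677]],
    "X2":  [[0.0663, -0.1841], [-0.1841, 0.5600]],
    "R":   [[0.2487, -0.0084], [-0.0084, 0.5947]],
    "W11": [[1.1837, -0.4007], [-0.4007, 3.6472]],
    "W22": [[0.7856, -0.4878], [-0.4878, 2.4023]],
}

for name, m in solution.items():
    assert is_pos_def_2x2(m), name
print("all listed blocks positive definite")
```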

The simulation result is shown in Figure 1, which implies that all the expected system performance requirements are well achieved.

Figure 1: State development x(t) (components x₁ and x₂) of system (8) with parameters (32), over t ∈ [0, 8].

Remark 14. It is worth pointing out that if Prob{ξ(t) = 1} = 1, then the function f(x(t)) in (8) reduces to the neuron activation function. According to Assumption 1, f(x(t)) satisfies the sector-bounded condition, which is more general than the Lipschitz condition. Therefore, the stability criteria in [42] cannot be applied to our example.

Remark 15. In our example, many factors, such as noise perturbations, Markovian jump parameters, RONs, and different time scales, are considered, and the delay-partitioning projection approach is employed. Therefore, the results reported in [9, 23, 38–41] do not hold in our example. Moreover, our results are expressed by LMIs, which can be easily checked by using the powerful Matlab LMI Toolbox. Thus, our stability criteria are more computationally efficient than those given in [13, 40, 41].

5. Conclusions

In this paper, we have dealt with the mean square asymptotic stability problem for stochastic Markovian jump neural networks with different time scales and RONs. By using new Lyapunov-Krasovskii functionals and the delay-partitioning projection approach, LMI-based criteria have been developed to guarantee the stability of such systems. It is also shown that the delay-partitioning projection approach can achieve the aim of reducing conservativeness. A numerical example is exploited to illustrate the effectiveness of the theoretical results.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (JUSRP51317B) and the National Natural Science Foundation of China under Grants 10901073 and 11002061.


References

[1] S. Arik, “Global asymptotic stability of a class of dynamical neural networks,” IEEE Transactions on Circuits and Systems I, vol. 47, no. 4, pp. 568–571, 2000.

[2] X. S. Yang, J. D. Cao, and J. Q. Lu, “Synchronization of delayed complex dynamical networks with impulsive and stochastic effects,” Nonlinear Analysis: Real World Applications, vol. 12, no. 4, pp. 2252–2266, 2011.

[3] J. D. Cao, G. R. Chen, and P. Li, “Global synchronization in an array of delayed neural networks with hybrid coupling,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 2, pp. 488–498, 2008.

[4] J. D. Cao, Z. D. Wang, and Y. H. Sun, “Synchronization in an array of linearly stochastically coupled networks with time delays,” Physica A, vol. 385, no. 2, pp. 718–728, 2007.

[5] Z. Orman and S. Arik, “An analysis of stability of a class of neutral-type neural networks with discrete time delays,” Abstract and Applied Analysis, vol. 2013, Article ID 143585, 9 pages, 2013.

[6] Q. Luo, X. J. Miao, Q. Wei, and Z. X. Zhou, “Stability of impulsive neural networks with time-varying and distributed delays,” Abstract and Applied Analysis, vol. 2013, Article ID 325310, 10 pages, 2013.

[7] X. Y. Meng, J. Lam, B. Z. Du, and H. J. Gao, “A delay-partitioning approach to the stability analysis of discrete-time systems,” Automatica, vol. 46, no. 3, pp. 610–614, 2010.

[8] O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, and E. J. Cha, “Improved criteria on delay-dependent stability for discrete-time neural networks with interval time-varying delays,” Abstract and Applied Analysis, vol. 2012, Article ID 285931, 16 pages, 2012.

[9] B. Du, J. Lam, Z. Shu, and Z. Wang, “A delay-partitioning projection approach to stability analysis of continuous systems with multiple delay components,” IET Control Theory and Applications, vol. 3, no. 4, pp. 383–390, 2009.

[10] H. B. Zeng, Y. He, M. Wu, and C. Zhang, “Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 22, no. 5, pp. 806–812, 2011.

[11] B. Z. Du and J. Lam, “Stability analysis of static recurrent neural networks using delay-partitioning and projection,” Neural Networks, vol. 22, no. 4, pp. 343–347, 2009.

[12] W. M. Chen, Q. Ma, G. Y. Miao, and Y. J. Zhang, “Stability analysis of stochastic neural networks with Markovian jump parameters using delay-partitioning approach,” Neurocomputing, vol. 103, pp. 22–28, 2013.

[13] L. Wan and J. H. Sun, “Mean square exponential stability of stochastic delayed Hopfield neural networks,” Physics Letters A, vol. 343, no. 4, pp. 306–318, 2005.

[14] H. Y. Shao and Q. L. Han, “New stability criteria for linear discrete-time systems with interval-like time-varying delays,” IEEE Transactions on Automatic Control, vol. 56, no. 3, pp. 619–625, 2011.

[15] S. Arik, “Stability analysis of delayed neural networks,” IEEE Transactions on Circuits and Systems I, vol. 47, no. 7, pp. 1089–1092, 2000.

[16] Y. Chen and W. X. Zheng, “Stochastic state estimation for neural networks with distributed delays and Markovian jump,” Neural Networks, vol. 25, pp. 14–20, 2012.

[17] S. P. Wen, Z. G. Zeng, and T. W. Huang, “Reliable H∞ filter design for a class of mixed-delay Markovian jump systems with stochastic nonlinearities and multiplicative noises via delay-partitioning method,” International Journal of Control, Automation and Systems, vol. 10, no. 4, pp. 711–720, 2012.

[18] Y. Kang, W. K. Shang, and H. S. Xi, “Estimating the delay-time for the stability of Markovian jump bilinear systems with saturating actuators,” Acta Automatica Sinica, vol. 36, no. 5, pp. 762–767, 2010.

[19] L. L. Xiong, J. K. Tian, and X. Z. Liu, “Stability analysis for neutral Markovian jump systems with partially unknown transition probabilities,” Journal of the Franklin Institute, vol. 349, no. 6, pp. 2193–2214, 2012.

[20] Q. X. Zhu and J. D. Cao, “Stability of Markovian jump neural networks with impulse control and time varying delays,” Nonlinear Analysis: Real World Applications, vol. 13, no. 5, pp. 2259–2270, 2012.

[21] Q. Ma, S. Y. Xu, Y. Zou, and J. J. Lu, “Stability of stochastic Markovian jump neural networks with mode-dependent delays,” Neurocomputing, vol. 74, no. 12-13, pp. 2157–2163, 2011.

[22] Q. X. Zhu and J. D. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467–479, 2012.

[23] H. Y. Li, Q. Zhou, B. Chen, and H. H. Liu, “Parameter-dependent robust stability for uncertain Markovian jump systems with time delay,” Journal of the Franklin Institute, vol. 348, no. 4, pp. 738–748, 2011.

[24] Y. F. Guo and Z. J. Wang, “Stability of Markovian jump systems with generally uncertain transition rates,” Journal of the Franklin Institute, vol. 350, pp. 2826–2836, 2013.

[25] Z. D. Wang, Y. Wang, and Y. R. Liu, “Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time delays,” IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 11–25, 2010.

[26] Y. S. Liu, Z. D. Wang, and W. Wang, “Reliable H∞ filtering for discrete time-delay systems with randomly occurred nonlinearities via delay-partitioning method,” Signal Processing, vol. 91, no. 4, pp. 713–727, 2011.

[27] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley-Interscience, London, UK, 1974.

[28] B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, Amsterdam, The Netherlands, 2006.

[29] C. Wang and Y. Shen, “Delay partitioning approach to robust stability analysis for uncertain stochastic systems with interval time-varying delay,” IET Control Theory and Applications, vol. 6, no. 7, pp. 875–883, 2012.

[30] J. Yu, K. Zhang, and S. Fei, “Exponential stability criteria for discrete-time recurrent neural networks with time-varying delay,” Nonlinear Analysis: Real World Applications, vol. 11, no. 1, pp. 207–216, 2010.

[31] Y. He, J. H. She, and M. Wu, “New delay-dependent stability criteria and stabilizing method for neutral systems,” IEEE Transactions on Automatic Control, vol. 49, no. 12, pp. 2266–2271, 2004.

[32] C. G. Li, L. N. Chen, and K. Aihara, “Stability of genetic networks with SUM regulatory logic: Lur’e system and LMI approach,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 11, pp. 2451–2458, 2006.

[33] H. Huang and G. Feng, “State estimation of recurrent neural networks with time-varying delay: a novel delay partition approach,” Neurocomputing, vol. 74, no. 5, pp. 792–796, 2011.


[34] G. F. Song and Z. Wang, “A delay partitioning approach to output feedback control for uncertain discrete time-delay systems with actuator saturation,” Nonlinear Dynamics, vol. 74, no. 1-2, pp. 189–202, 2013.

[35] Q. T. Gan, “Synchronization of competitive neural networks with different time scales and time-varying delay based on delay partitioning approach,” International Journal of Machine Learning and Cybernetics, vol. 4, no. 4, pp. 327–337, 2013.

[36] Q. X. Cheng and J. D. Cao, “Global synchronization of complex networks with discrete time delays on time scales,” Discrete Dynamics in Nature and Society, vol. 2011, Article ID 287670, 19 pages, 2011.

[37] Y. P. Ren and Y. K. Li, “Stability and existence of periodic solutions for cellular neural networks with state dependent delays on time scales,” Discrete Dynamics in Nature and Society, vol. 2012, Article ID 386706, 14 pages, 2012.

[38] Y. Zhao, H. J. Gao, J. Lam, and B. Z. Du, “Stability and stabilization of delayed T-S fuzzy systems: a delay partitioning approach,” IEEE Transactions on Fuzzy Systems, vol. 17, no. 4, pp. 750–762, 2009.

[39] J. Lam, H. J. Gao, and C. H. Wang, “Stability analysis for continuous systems with two additive time-varying delay components,” Systems and Control Letters, vol. 56, no. 1, pp. 16–24, 2007.

[40] Q. L. Han and K. Q. Gu, “Stability of linear systems with time-varying delay: a generalized discretized Lyapunov functional approach,” Asian Journal of Control, vol. 3, no. 3, pp. 170–180, 2001.

[41] Q. S. Liu, J. Wang, and J. D. Cao, “A delayed Lagrangian network for solving quadratic programming problems with equality constraints,” in Advances in Neural Networks—ISNN 2006, vol. 3971 of Lecture Notes in Computer Science, pp. 369–378, Springer, New York, NY, USA, 2006.

[42] H. Y. Liu, L. Zhao, Z. X. Zhang, and Y. Ou, “Stochastic stability of Markovian jumping Hopfield neural networks with constant and distributed delays,” Neurocomputing, vol. 72, no. 16–18, pp. 3669–3674, 2009.

[43] J. M. Duan, M. F. Hu, Y. Q. Yang, and L. X. Liu, “A delay partitioning projection approach to stability analysis of stochastic Markovian jump neural networks with randomly occurred nonlinearities,” Neurocomputing. In press.

[44] J. M. Duan, M. F. Hu, and Y. Q. Yang, “A delay-partitioning approach to stability analysis of discrete-time recurrent neural networks with randomly occurred nonlinearities,” in Advances in Neural Networks—ISNN 2013, vol. 7951, pp. 197–204, Springer, New York, NY, USA, 2013.

[45] Y. Q. Xia, M. Y. Fu, and P. Shi, Analysis and Synthesis of Dynamical Systems with Time-Delays, vol. 387 of Lecture Notes in Control and Information Sciences, Springer, New York, NY, USA, 2009.

[46] C. Peng and Y. C. Tian, “Improved delay-dependent robust stability criteria for uncertain systems with interval time-varying delay,” IET Control Theory and Applications, vol. 2, no. 9, pp. 752–761, 2008.

[47] S. Lakshmanan, J. H. Park, H. Y. Jung, O. M. Kwon, and R. Rakkiyappan, “A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays,” Neurocomputing, vol. 111, pp. 81–89, 2013.
