Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control

Neurocomputing 136 (2014) 136–151
Quanxin Zhu^a,*, R. Rakkiyappan^b, A. Chandrasekar^b

a School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Nanjing 210023, Jiangsu, China
b Department of Mathematics, Bharathiar University, Coimbatore 641046, India
Article history: Received 9 August 2013; received in revised form 30 October 2013; accepted 2 January 2014; available online 29 January 2014. Communicated by Y. Liu.

This work was jointly supported by the National Natural Science Foundation of China (61374080), the Natural Science Foundation of Zhejiang Province (LY12F03010), and the Natural Science Foundation of Ningbo (2012A610032).
* Corresponding author. E-mail address: [email protected] (Q. Zhu).
http://dx.doi.org/10.1016/j.neucom.2014.01.018

Abstract
This paper deals with the globally exponential stability of impulsive bidirectional associative memory
(BAM) neural networks with both Markovian jump parameters and mixed time delays. The jumping
parameters are determined by a continuous-time, discrete-state Markov chain. Different from the
previous literature, the mixed time delays considered here comprise discrete, distributed and leakage
time-varying delays. By using the Lyapunov–Krasovskii functional having triple integral terms and model
transformation technique, some novel sufficient delay-dependent conditions are derived to ensure the
globally exponential stability in the mean square of the suggested system. Moreover, the derivatives of
time delays are not necessarily zero or smaller than one since several free matrices are introduced in our
results. Finally, a numerical example and its simulations are provided to demonstrate the effectiveness of
the theoretical results.
© 2014 Elsevier B.V. All rights reserved.
Keywords:
Markovian jump BAM neural networks
Global exponential stability
Leakage delays
Lyapunov–Krasovskii functional
Impulse control
1. Introduction
As is well known, bidirectional associative memory (BAM) neural networks belong to a class of two-layer hetero-associative networks because
they generalize the single-layer auto-associative Hebbian correlator to a two-layer pattern-matched hetero-associative circuit. In fact, they are
composed of neurons arranged in two layers, the U-layer and the V-layer. Moreover, the neurons in one layer are fully interconnected to the
neurons in the other layer, but there may be no interconnection among neurons in the same layer. As a consequence, the addressable memories or
patterns of BAM neural networks can be stored with a two-way associative search. These advantages have drawn a great deal of attention. Indeed,
since Kosko first proposed BAM neural networks in [1–3], this class of neural systems has been extensively studied, and subsequently applied to
many areas such as pattern recognition, signal and image processing, automatic control, associative memory and artificial intelligence [4–9].
However, these applications heavily depend on the stability of the equilibrium point of BAM neural networks since the stability is the first
requirement in modern control theories. Therefore, it is important to discuss the stability issue of BAM neural networks.
In practical situations, time delays are often encountered and are inevitable in biological and artificial neural networks because of the finite switching speed of amplifiers; circuit diagrams and connection patterns implementing delayed BAM neural networks can be found in the literature. As is well known, the existence of time delays may cause oscillation or instability in neural networks, which is harmful to their applications. Thus, there is a need for stability analysis of neural networks with time delays, usually called delayed neural networks. Recently, various sufficient conditions for the stability of delayed neural networks have been proposed, either delay-independent or delay-dependent. Generally speaking, delay-independent criteria are more conservative than delay-dependent criteria, especially when the size of the delay is small, and so much attention has been paid to the latter.
However, besides delay effects, impulsive perturbations widely exist in many fields, such as biology and medicine, economics,
electronics and telecommunications [10–12]. Especially in real neural networks, impulsive perturbations are likely to emerge since the
states of neural networks are changed abruptly at certain moments of time in the above fields. Thus, impulsive perturbations should be
taken into account when investigating the stability of neural networks. It is worth pointing out that an impulsive neural network model
belongs to a new category of dynamical systems, which is neither purely continuous-time nor purely discrete-time. This makes the stability analysis of impulsive neural networks considerably more difficult.
On the other hand, systems with Markovian jumping parameters can be described by a set of linear systems with the transitions between models governed by a continuous-time, discrete-state homogeneous Markov process [13]. Neural networks in real life often exhibit information latching, as well as abrupt phenomena such as random failures or repairs of components, sudden environmental changes, and changing subsystem interconnections. To cope with such situations, neural networks with Markovian jumping parameters, also called Markovian jump neural networks, have been widely used to model these complex systems. Generally speaking, a Markovian jump neural network is a hybrid system whose state has two components x(t) and r(t), where the first component x(t) is referred to as the state, and the second component r(t) is a continuous-time Markov chain with a finite state space $S = \{1, 2, \dots, N\}$, usually regarded as the mode. In operation, this class of neural networks switches from one mode to another in a random way determined by the Markov chain r(t). Hence, it is interesting and challenging to study Markovian jump neural networks.
Recently, there have been a large number of results on the stability of Markovian jump neural networks reported in the literature, for
instance, see [14–27] and references therein. In [19], the authors studied the stability analysis for Markovian jump BAM neural networks
with impulse control and mixed time delays without leakage term. But the leakage term has a great impact on the dynamical behavior of
neural networks. In fact, leakage delays have a tendency to destabilize the neural networks and they are usually not easy to handle. So it is
necessary to consider the effect of leakage delays when studying the stability of neural networks. It is inspiring that the effect of leakage
delays in neural networks has become a new research topic in recent years [28–35]. For example, Gopalsamy [30] studied the dynamics of bidirectional associative memory (BAM) networks by using the contraction mapping theorem and a suitable degenerate Lyapunov–Krasovskii functional together with some differential inequalities and the linear matrix inequality (LMI) technique. In [32], the authors investigated the stability of BAM fuzzy neural networks with delays in the leakage terms. Peng [33] discussed the globally attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. In [34], the authors studied the stability of recurrent neural networks with time delay in the leakage term under impulsive perturbations by using the Lyapunov–Krasovskii
functional and LMI techniques. However, to the best of our knowledge, there were no published papers on the stability analysis of Markovian jump neural networks with leakage delays. This leads to our present research.
Motivated by the above discussion, in this paper we study the stability issue of Markovian jump BAM neural networks with mixed time
delays and impulse control. Different from the previous literature, the mixed time delays considered here comprise discrete, distributed
and leakage time-varying delays. Some novel sufficient conditions of globally exponential stability in the mean square are obtained by
using the Lyapunov–Krasovskii functional having triple integral terms and model transformation technique. In particular, our conditions
are delay-dependent and expressed in terms of LMIs, which can be calculated by Matlab LMI toolbox [36]. Finally, we use a numerical
example and its simulations to illustrate the effectiveness of the theoretical results.
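For readers without MATLAB, the feasibility of LMIs of the same shape as condition (6) in Section 3 can also be checked with an open-source solver. The snippet below is a minimal sketch of our own (the cvxpy package and the SCS solver are assumptions for illustration, not tools used in the paper):

```python
import cvxpy as cp
import numpy as np

n = 2
X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n))
Z = cp.Variable((n, n), symmetric=True)

# Feasibility of [X Y; Y^T Z] >= 0 with X, Z positive definite
# (shifted by a small eps to enforce strict definiteness numerically).
lmi = cp.bmat([[X, Y], [Y.T, Z]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [lmi >> 0, X >> eps * np.eye(n), Z >> eps * np.eye(n)])
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" indicates the LMI system is feasible
```

The full conditions (3)–(7) of Theorem 3.1 can be encoded the same way, with one set of variables per mode $i \in S$.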
The rest of this paper is organized as follows. In Section 2, we introduce a new class of Markovian jump BAM neural networks with both
impulsive perturbations and leakage time varying delays, and give some necessary assumptions and preliminaries. Our main results are
presented in Section 3. In Section 4, a numerical example and its simulations are given to show the effectiveness of the obtained results.
Finally, in Section 5, the paper is concluded with some general remarks.
Notations: Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices, respectively. The superscript "$T$" denotes the transpose of a matrix or vector. $\mathrm{Tr}(\cdot)$ denotes the trace of the corresponding matrix. For square matrices $X$ and $Y$, the notation $X > (\ge,<,\le)\,Y$ means that $X - Y$ is a positive-definite (positive-semidefinite, negative-definite, negative-semidefinite) matrix. Let $(\Omega,\mathcal{F},P)$ be a complete probability space with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$, and let $E[\cdot]$ stand for the expectation operator with respect to the given probability measure $P$. Also, let $\tau > 0$, $\delta > 0$, and let $C^2_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ and $C^2_{\mathcal{F}_0}([-\delta,0];\mathbb{R}^n)$ denote the families of continuously differentiable functions $\phi$ from $[-\tau,0]$ to $\mathbb{R}^n$ and $\psi$ from $[-\delta,0]$ to $\mathbb{R}^n$ with the uniform norms $\|\phi\| = \sup_{-\tau\le\theta\le 0}|\phi(\theta)|$ and $\|\psi\| = \sup_{-\delta\le\theta\le 0}|\psi(\theta)|$, respectively.
2. Problem description and preliminaries

Consider the following class of Markovian jump BAM neural networks with both impulsive perturbations and leakage time-varying delays:

$$
\left\{\begin{aligned}
\dot u(t) &= -C(r(t))\,u(t-s_1(t)) + A(r(t))\tilde f^{(1)}(v(t)) + B(r(t))\tilde f^{(2)}(v(t-\tau(t))) + I, && t \neq t_k,\\
\Delta u(t_k) &= u(t_k) - u(t_k^-) = -D_k(r(t))\Big\{u(t_k^-) - C(r(t))\int_{t_k-s_1(t_k)}^{t_k} u(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,\\
\dot v(t) &= -E(r(t))\,v(t-s_2(t)) + F(r(t))\tilde g^{(1)}(u(t)) + G(r(t))\tilde g^{(2)}(u(t-\delta(t))) + J, && t \neq t_k,\\
\Delta v(t_k) &= v(t_k) - v(t_k^-) = -H_k(r(t))\Big\{v(t_k^-) - E(r(t))\int_{t_k-s_2(t_k)}^{t_k} v(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,
\end{aligned}\right.\tag{1}
$$

for $t > 0$ and $k = 1,2,\dots$, where $u(t) = [u_1(t),u_2(t),\dots,u_n(t)]^T$ and $v(t) = [v_1(t),v_2(t),\dots,v_n(t)]^T$ are the state vectors associated with the $n$ neurons, and $I = [I_1,I_2,\dots,I_n]^T$ and $J = [J_1,J_2,\dots,J_n]^T$ denote the constant external inputs. The diagonal matrices $C(r(t)) = \mathrm{diag}(c_1(r(t)),\dots,c_n(r(t)))$ and $E(r(t)) = \mathrm{diag}(e_1(r(t)),\dots,e_n(r(t)))$ have positive entries $c_i(r(t)) > 0$, $e_i(r(t)) > 0$ ($i = 1,2,\dots,n$); the matrices $A(r(t)) = (a_{ij}(r(t)))_{n\times n}$, $B(r(t)) = (b_{ij}(r(t)))_{n\times n}$ and $F(r(t)) = (f_{ij}(r(t)))_{n\times n}$, $G(r(t)) = (g_{ij}(r(t)))_{n\times n}$ are the connection weight matrices associated with the discrete time-varying delays and the distributed delays, respectively; $f_{ij}$ and $g_{ij}$ denote the signal functions of the $i$th and $j$th neurons at time $t$; $\tau(t)$, $\delta(t)$ are time-varying delays and $s_1(t)$, $s_2(t)$ are time-varying leakage delays; and $\tilde f(\cdot)$, $\tilde g(\cdot)$ are the activation functions. Let $\{r(t), t\ge 0\}$ be a right-continuous Markov chain on the complete probability space $(\Omega,\mathcal{F},P)$ taking values in a finite state space $S = \{1,2,\dots,N\}$ with generator
$\Gamma = (\delta_{ij})_{N\times N}$ given by

$$
P\{r(t+\Delta t) = j \mid r(t) = i\} =
\begin{cases}
\delta_{ij}\,\Delta t + o(\Delta t), & i \neq j,\\
1 + \delta_{ii}\,\Delta t + o(\Delta t), & i = j,
\end{cases}
$$

where $\Delta t > 0$ and $\lim_{\Delta t\to 0} o(\Delta t)/\Delta t = 0$. Here $\delta_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \neq j$, while $\delta_{ii} = -\sum_{j\neq i}\delta_{ij}$. Also, $D_k(r(t))$ and $H_k(r(t))$ are the impulse gain matrices at the moments $t_k$. The discrete set $\{t_k\}$ satisfies $0 = t_0 < t_1 < \cdots < t_k < \cdots$, $\lim_{k\to\infty} t_k = \infty$; $u(t_k^-)$ and $v(t_k^-)$ denote the left-hand limits at $t_k$, and similarly $u(t_k^+)$, $v(t_k^+)$ denote the right-hand limits at $t_k$.
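A sample path of such a chain is easy to generate numerically. The sketch below is our own illustration (not code from the paper): holding times in state $i$ are exponential with rate $-\delta_{ii}$, and jumps go to $j \neq i$ with probability $\delta_{ij}/(-\delta_{ii})$.

```python
import numpy as np

def sample_markov_chain(gamma, r0, t_end, rng=np.random.default_rng(0)):
    """Sample a right-continuous Markov chain r(t) with generator gamma."""
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while t < t_end:
        rate = -gamma[r, r]
        if rate <= 0:                      # absorbing state: stop
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        probs = gamma[r].copy()
        probs[r] = 0.0
        probs /= rate                      # jump distribution over j != r
        r = int(rng.choice(len(gamma), p=probs))
        times.append(t), states.append(r)
    return np.array(times), np.array(states)

# Generator used later in Example 4.1: Gamma = [[-5, 5], [10, -10]].
gamma = np.array([[-5.0, 5.0], [10.0, -10.0]])
t_jump, r_path = sample_markov_chain(gamma, r0=0, t_end=5.0)
```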
In this paper, we assume that the processes $u(t)$ and $v(t)$ are right-continuous, i.e., $u(t_k^+) = u(t_k)$ and $v(t_k^+) = v(t_k)$. Moreover, we assume that (1) has a unique equilibrium point. In addition, to study the stability, we also need the following assumptions.
Assumption 2.1. There exist diagonal matrices $U_i^- = \mathrm{diag}(\tilde u_{i1}^-,\tilde u_{i2}^-,\dots,\tilde u_{in}^-)$, $U_i^+ = \mathrm{diag}(\tilde u_{i1}^+,\tilde u_{i2}^+,\dots,\tilde u_{in}^+)$, $V_i^- = \mathrm{diag}(\tilde v_{i1}^-,\tilde v_{i2}^-,\dots,\tilde v_{in}^-)$ and $V_i^+ = \mathrm{diag}(\tilde v_{i1}^+,\tilde v_{i2}^+,\dots,\tilde v_{in}^+)$, $i = 1,2$, satisfying

$$\tilde u_{ij}^- \le \frac{\tilde f_j^{(i)}(\gamma_1) - \tilde f_j^{(i)}(\gamma_2)}{\gamma_1-\gamma_2} \le \tilde u_{ij}^+, \qquad \tilde v_{ij}^- \le \frac{\tilde g_j^{(i)}(\gamma_1) - \tilde g_j^{(i)}(\gamma_2)}{\gamma_1-\gamma_2} \le \tilde v_{ij}^+ \qquad (i = 1,2)$$

for all $\gamma_1,\gamma_2 \in \mathbb{R}$, $\gamma_1 \neq \gamma_2$, $j = 1,2,\dots,n$.
Assumption 2.2. There exist constants $\tau_1$, $\tau_2$, $\delta_1$, $\delta_2$, $s_1$, $s_2$, $\tau_\mu$, $\delta_\mu$, $s_{\mu 1}$, $s_{\mu 2}$ such that $0 \le \tau_1 \le \tau(t) \le \tau_2$, $0 \le \delta_1 \le \delta(t) \le \delta_2$, $0 \le s_1(t) \le s_1$, $0 \le s_2(t) \le s_2$, $\dot\tau(t) \le \tau_\mu$, $\dot\delta(t) \le \delta_\mu$, $|\dot s_1(t)| \le s_{\mu 1}$, $|\dot s_2(t)| \le s_{\mu 2}$, and set $\tau = \max\{\tau_2, s_2, \tau_\mu, s_{\mu 2}\}$, $\delta = \max\{\delta_2, s_1, \delta_\mu, s_{\mu 1}\}$.
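As a quick worked instance of our own (anticipating the delays used in Example 4.1 below), the choice $\tau(t) = 0.6\cos t + 0.7$ satisfies Assumption 2.2 with

$$\tau_1 = 0.1, \qquad \tau_2 = 1.3, \qquad \tau_\mu = 0.6,$$

since $0.1 \le 0.6\cos t + 0.7 \le 1.3$ and $\dot\tau(t) = -0.6\sin t \le 0.6$; the bounds for $\delta(t)$, $s_1(t)$ and $s_2(t)$ follow in the same way.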
It is known from the stability theory of delayed neural networks that Assumption 2.1 implies the existence of an equilibrium point for system (1). Let $u^* = (u_1^*,u_2^*,\dots,u_n^*)^T \in \mathbb{R}^n$ and $v^* = (v_1^*,v_2^*,\dots,v_n^*)^T \in \mathbb{R}^n$ be the equilibrium points of the system; we shift them to the origin by the transformation $x(t) = u(t) - u^*$, $y(t) = v(t) - v^*$. Thus, we can rewrite system (1) as follows:
$$
\left\{\begin{aligned}
\dot x(t) &= -C(r(t))x(t-s_1(t)) + A(r(t))f^{(1)}(y(t)) + B(r(t))f^{(2)}(y(t-\tau(t))), && t \neq t_k,\\
\Delta x(t_k) &= x(t_k) - x(t_k^-) = -D_k(r(t))\Big\{x(t_k^-) - C(r(t))\int_{t_k-s_1(t_k)}^{t_k} x(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,\\
\dot y(t) &= -E(r(t))y(t-s_2(t)) + F(r(t))g^{(1)}(x(t)) + G(r(t))g^{(2)}(x(t-\delta(t))), && t \neq t_k,\\
\Delta y(t_k) &= y(t_k) - y(t_k^-) = -H_k(r(t))\Big\{y(t_k^-) - E(r(t))\int_{t_k-s_2(t_k)}^{t_k} y(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,
\end{aligned}\right.\tag{2}
$$
where $f^{(l)}(y) = [f_1^{(l)}(y),f_2^{(l)}(y),\dots,f_n^{(l)}(y)]^T$ and $g^{(l)}(x) = [g_1^{(l)}(x),g_2^{(l)}(x),\dots,g_n^{(l)}(x)]^T$ ($l = 1,2$), with $f_i^{(l)}(y(t)) = \tilde f_i^{(l)}(y(t)+v^*) - \tilde f_i^{(l)}(v^*)$ and $g_i^{(l)}(x(t)) = \tilde g_i^{(l)}(x(t)+u^*) - \tilde g_i^{(l)}(u^*)$ ($i = 1,2,\dots,n$); clearly $f^{(1)}(0) = f^{(2)}(0) = g^{(1)}(0) = g^{(2)}(0) = 0$.

Let $(x(t;\phi), y(t;\psi))$ be the state trajectory of system (2) from the initial data $\phi \in C^2_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$, $\psi \in C^2_{\mathcal{F}_0}([-\delta,0];\mathbb{R}^n)$. Clearly, (2) admits a trivial solution $(x(t;0), y(t;0)) \equiv 0$ corresponding to the initial data $\phi = 0$, $\psi = 0$. For simplicity, we write $(x(t;\phi), y(t;\psi)) \equiv (x(t), y(t))$.
Before giving the main results, the following definitions and lemmas are introduced.
Definition 2.3 (Zhu and Cao [19]). The trivial solution of Eq. (1) [or Eq. (2), equivalently] is said to be globally exponentially stable in the mean square if for every $\phi \in C^2_{\mathcal{F}_0}([-\delta,0];\mathbb{R}^n)$, $\psi \in C^2_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$, there exist scalars $\gamma_1 > 0$, $\gamma_2 > 0$, $\beta > 0$ and $\gamma > 0$ such that the following inequality holds:

$$E\big[|x(t;\phi)|^2 + |y(t;\psi)|^2\big] \le \gamma e^{-\beta t}\Big[\gamma_1 \sup_{-\delta\le\theta\le 0} E|\phi(\theta)|^2 + \gamma_2 \sup_{-\tau\le\theta\le 0} E|\psi(\theta)|^2\Big].$$
Definition 2.4 (Zhu and Cao [19]). The function $J : [t_0,\infty)\times\mathbb{R}^n\times\mathbb{R}^n\times S \to \mathbb{R}_+$ belongs to class $\Psi_0$ if: (1) the function $J$ is continuous on each of the sets $[t_{k-1},t_k)\times\mathbb{R}^n\times\mathbb{R}^n\times S$ and $J(t,0,0,i) \equiv 0$ for all $t \ge t_0$, $i \in S$; (2) $J(t,x,y,i)$ is locally Lipschitzian in $x, y \in \mathbb{R}^n$ for each $i \in S$; (3) for each $k = 1,2,\dots$ and $i_1, i_2 \in S$, there exist the finite limits

$$\lim_{(t,z_1,z_2)\to(t_k^-,x,y)} J(t,z_1,z_2,i_1) = J(t_k^-,x,y,i_1) \qquad\text{and}\qquad \lim_{(t,z_1,z_2)\to(t_k^+,x,y)} J(t,z_1,z_2,i_2) = J(t_k^+,x,y,i_2),$$

with $J(t_k^+,x,y,i_2) = J(t_k,x,y,i_2)$ satisfied.
For convenience, when $r(t) = i$, $i \in S$, the matrices $C(r(t))$, $A(r(t))$, $B(r(t))$, $F(r(t))$, $G(r(t))$, $D_k(r(t))$, $H_k(r(t))$ are denoted by $C_i$, $A_i$, $B_i$, $F_i$, $G_i$, $D_{ik}$, $H_{ik}$, respectively. Let $C^{2,1}(\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^n\times S;\mathbb{R}_+)$ denote the family of all nonnegative functions $J(t,x,y,i) \in \Psi_0$ on $\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^n\times S$ that are continuously twice differentiable in $x, y$ and once differentiable in $t$.
Lemma 2.5 (Moon et al. [37]). Assume that $p(s) \in \mathbb{R}^{n_a}$ and $q(s) \in \mathbb{R}^{n_b}$ are given for $s \in \Omega$, and $N \in \mathbb{R}^{n_a\times n_b}$. Then for any matrices $X \in \mathbb{R}^{n_a\times n_a}$, $Y \in \mathbb{R}^{n_a\times n_b}$ and $Z \in \mathbb{R}^{n_b\times n_b}$ satisfying

$$\begin{bmatrix} X & Y\\ Y^T & Z \end{bmatrix} \ge 0,$$

the following inequality holds:

$$-2\int_\Omega p^T(s)\,N\,q(s)\,\mathrm{d}s \le \int_\Omega \begin{bmatrix} p(s)\\ q(s) \end{bmatrix}^T \begin{bmatrix} X & Y-N\\ (Y-N)^T & Z \end{bmatrix} \begin{bmatrix} p(s)\\ q(s) \end{bmatrix}\mathrm{d}s.$$
Lemma 2.6 (Gu et al. [38]). Given any real matrix $M = M^T > 0$ of appropriate dimension and a vector function $\Phi(\cdot) : [a,b] \to \mathbb{R}^n$ such that the integrations concerned are well defined,

$$\Big(\int_a^b \Phi(s)\,\mathrm{d}s\Big)^T M \Big(\int_a^b \Phi(s)\,\mathrm{d}s\Big) \le (b-a)\int_a^b \Phi^T(s)\,M\,\Phi(s)\,\mathrm{d}s.$$
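For the reader's convenience, here is a one-line justification (our own sketch, not part of [38]): applying the Cauchy–Schwarz inequality to $M^{1/2}\Phi(s) = 1\cdot M^{1/2}\Phi(s)$ gives

$$\Big|\int_a^b M^{1/2}\Phi(s)\,\mathrm{d}s\Big|^2 \le \Big(\int_a^b 1^2\,\mathrm{d}s\Big)\int_a^b \big|M^{1/2}\Phi(s)\big|^2\,\mathrm{d}s = (b-a)\int_a^b \Phi^T(s)M\Phi(s)\,\mathrm{d}s,$$

and the left-hand side equals $\big(\int_a^b\Phi(s)\,\mathrm{d}s\big)^T M \big(\int_a^b\Phi(s)\,\mathrm{d}s\big)$.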
Lemma 2.7 (Schur complement). Given constant matrices $\Omega_1$, $\Omega_2$ and $\Omega_3$ of appropriate dimensions, where $\Omega_1^T = \Omega_1$ and $\Omega_2^T = \Omega_2 > 0$, then

$$\Omega_1 + \Omega_3^T\Omega_2^{-1}\Omega_3 < 0$$

if and only if

$$\begin{bmatrix} \Omega_1 & \Omega_3^T\\ * & -\Omega_2 \end{bmatrix} < 0 \qquad\text{or}\qquad \begin{bmatrix} -\Omega_2 & \Omega_3\\ * & \Omega_1 \end{bmatrix} < 0.$$
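As a minimal scalar illustration (our own example): with $\Omega_1 = -q$, $\Omega_2 = r > 0$ and $\Omega_3 = s$, the lemma states that $-q + s^2/r < 0$ if and only if

$$\begin{bmatrix} -q & s\\ s & -r \end{bmatrix} < 0.$$

This is exactly how the nonlinear terms $W_{1i}^T T_i^{-1} W_{1i}$ and $W_{2i}^T W_i^{-1} W_{2i}$ appearing in the proof of Theorem 3.1 are absorbed into the linear condition (3) below.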
3. Main results
In this section, under Assumptions 2.1 and 2.2, we will investigate the global exponential stability in the mean square of the
equilibrium point for system (2).
Theorem 3.1. Let $\beta_0$ be a fixed positive constant and assume that Assumptions 2.1 and 2.2 hold. Then the equilibrium point of Eq. (2) is globally exponentially stable in the mean square if there exist positive definite matrices $X_1$, $X_2$, $Q_1,\dots,Q_7$, $R_1,\dots,R_7$, $S_1,\dots,S_6$, $Z_1$, $Z_2$, $P_{1i}$, $P_{2i}$, $T_i$, $W_i$ ($i\in S$), matrices $\begin{pmatrix} Q_{11}&Q_{12}\\ *&Q_{22}\end{pmatrix}>0$, $\begin{pmatrix} R_{11}&R_{12}\\ *&R_{22}\end{pmatrix}>0$, positive diagonal matrices $M_1$, $M_2$, $M_3$, $M_4$, and matrices $Y_1$, $Y_2$, $N_1$, $N_2$, $N_3$, $O_1$, $O_2$, $O_3$ of appropriate dimensions such that the following linear matrix inequalities (LMIs) hold:

$$\begin{bmatrix} \Pi_i & W_{1i}^T & W_{2i}^T\\ * & -T_i & 0\\ * & * & -W_i \end{bmatrix} < 0,\tag{3}$$

$$\begin{bmatrix} P_{1i} & (I-D_{ik})^TP_{1l}\\ * & P_{1l} \end{bmatrix} \ge 0 \quad[\text{here } r(t_k)=l],\tag{4}\qquad \begin{bmatrix} P_{2i} & (I-H_{ik})^TP_{2l}\\ * & P_{2l} \end{bmatrix} \ge 0 \quad[\text{here } r(t_k)=l],\tag{5}$$

$$\begin{bmatrix} X_1 & Y_1\\ * & Z_1 \end{bmatrix} \ge 0,\tag{6}\qquad \begin{bmatrix} X_2 & Y_2\\ * & Z_2 \end{bmatrix} \ge 0,\tag{7}$$

where the symbol "$*$" denotes the symmetric term of the matrix, $\Pi_i = (\Lambda_{i,j})_{26\times 26}$, and

$$\begin{aligned}
\Lambda_{1,1} &= \beta_0P_{1i} + \textstyle\sum_{j=1}^{N}\delta_{ij}P_{1j} - P_{1i}C_i - C_i^TP_{1i} + \tau_2X_1 + Q_3 + R_1 + R_2 + R_{11} + s_1^2e^{\beta_0 s_1}Q_4 - Q_5 - (\delta_2-\delta_1)Q_6\\
&\quad - 4\delta_2^2e^{\beta_0\delta_2}S_2 - 4(\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}S_4 - 4s_1^2e^{\beta_0 s_1}S_5 + V_1M_3V_1 + (N_1+N_1^T),\\
\Lambda_{1,2} &= Q_5,\quad \Lambda_{1,4} = (\delta_2-\delta_1)Q_6,\quad \Lambda_{1,6} = -N_1+N_2^T,\quad \Lambda_{1,7} = R_{12},\quad \Lambda_{1,9} = 4\delta_2e^{\beta_0\delta_2}S_2,\quad \Lambda_{1,10} = 4(\delta_2-\delta_1)e^{\beta_0(\delta_2-\delta_1)}S_4,\\
\Lambda_{1,11} &= 4s_1e^{\beta_0 s_1}S_5,\quad \Lambda_{1,12} = -\beta_0P_{1i}C_i - \textstyle\sum_{j=1}^{N}\delta_{ij}P_{1j}C_j + C_i^TP_{1i}C_i,\quad \Lambda_{1,13} = Y_1+Y_2^T,\quad \Lambda_{1,18} = -Y_1,\\
\Lambda_{1,19} &= P_{1i}A_i,\quad \Lambda_{1,20} = P_{1i}B_i,\quad \Lambda_{1,25} = -N_1+N_3^T,\quad \Lambda_{2,2} = -Q_5,\\
\Lambda_{3,3} &= -e^{-\beta_0\delta_1}R_1 - e^{-\beta_0\delta_1}Q_7,\quad \Lambda_{3,4} = e^{-\beta_0\delta_1}Q_7,\quad \Lambda_{4,4} = -e^{-\beta_0\delta_2}R_2 - e^{-\beta_0\delta_1}Q_7 - (\delta_2-\delta_1)Q_6,\\
\Lambda_{5,5} &= s_{\mu 1}T_i - e^{-\beta_0 s_1}(1-s_{\mu 1})Q_3 + s_1^2e^{\beta_0 s_1}C_i^TQ_5C_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6C_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7C_i\\
&\quad - k_2C_i^TS_2C_i - k_4C_i^TS_4C_i - k_5C_i^TS_5C_i + \delta_2e^{\beta_0\delta_2}C_i^TZ_2C_i,\quad \Lambda_{5,12} = -s_{\mu 1}C_i^TP_{1i}C_i,\\
\Lambda_{5,19} &= -s_1^2e^{\beta_0 s_1}C_i^TQ_5A_i - \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6A_i - (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7A_i + k_2C_i^TS_2A_i + k_4C_i^TS_4A_i + k_5C_i^TS_5A_i - \delta_2e^{\beta_0\delta_2}C_i^TZ_2A_i,\\
\Lambda_{5,20} &= -s_1^2e^{\beta_0 s_1}C_i^TQ_5B_i - \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6B_i - (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7B_i + k_2C_i^TS_2B_i + k_4C_i^TS_4B_i + k_5C_i^TS_5B_i - \delta_2e^{\beta_0\delta_2}C_i^TZ_2B_i,\\
\Lambda_{6,6} &= -e^{-\beta_0\delta_2}R_{11} + V_2M_4V_2 - (N_2+N_2^T),\quad \Lambda_{6,8} = -e^{-\beta_0\delta_2}R_{12},\quad \Lambda_{6,13} = -Y_2^T,\quad \Lambda_{6,25} = -N_2-N_3^T,\\
\Lambda_{7,7} &= R_{22} + s_2^2e^{\beta_0 s_2}F_i^TR_5F_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6F_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7F_i - k_1F_i^TS_1F_i - k_3F_i^TS_3F_i - k_6F_i^TS_6F_i + \tau_2e^{\beta_0\tau_2}F_i^TZ_1F_i - M_3,\\
\Lambda_{7,8} &= s_2^2e^{\beta_0 s_2}F_i^TR_5G_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6G_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7G_i - k_1F_i^TS_1G_i - k_3F_i^TS_3G_i - k_6F_i^TS_6G_i + \tau_2e^{\beta_0\tau_2}F_i^TZ_1G_i,\\
\Lambda_{7,13} &= F_i^TP_{2i},\quad \Lambda_{7,17} = -s_2^2e^{\beta_0 s_2}F_i^TR_5E_i - \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6E_i - (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7E_i + k_1F_i^TS_1E_i + k_3F_i^TS_3E_i + k_6F_i^TS_6E_i - \tau_2e^{\beta_0\tau_2}F_i^TZ_1E_i,\\
\Lambda_{7,24} &= -F_i^TP_{2i}E_i,\quad \Lambda_{8,8} = -e^{-\beta_0\delta_2}R_{22} + s_2^2e^{\beta_0 s_2}G_i^TR_5G_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}G_i^TR_6G_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}G_i^TR_7G_i\\
&\quad - k_1G_i^TS_1G_i - k_3G_i^TS_3G_i - k_6G_i^TS_6G_i + \tau_2e^{\beta_0\tau_2}G_i^TZ_1G_i - M_4,\quad \Lambda_{8,13} = G_i^TP_{2i},\\
\Lambda_{8,17} &= -s_2^2e^{\beta_0 s_2}G_i^TR_5E_i - \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}G_i^TR_6E_i - (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}G_i^TR_7E_i + k_1G_i^TS_1E_i + k_3G_i^TS_3E_i + k_6G_i^TS_6E_i - \tau_2e^{\beta_0\tau_2}G_i^TZ_1E_i,\\
\Lambda_{8,24} &= -G_i^TP_{2i}E_i,\quad \Lambda_{9,9} = -4e^{\beta_0\delta_2}S_2,\quad \Lambda_{10,10} = -4e^{\beta_0(\delta_2-\delta_1)}S_4,\quad \Lambda_{11,11} = -4e^{\beta_0 s_1}S_5,\\
\Lambda_{12,12} &= \beta_0C_i^TP_{1i}C_i + \textstyle\sum_{j=1}^{N}\delta_{ij}C_j^TP_{1j}C_j - Q_4,\quad \Lambda_{12,19} = -C_i^TP_{1i}A_i,\quad \Lambda_{12,20} = -C_i^TP_{1i}B_i,\\
\Lambda_{13,13} &= \beta_0P_{2i} + \textstyle\sum_{j=1}^{N}\delta_{ij}P_{2j} - P_{2i}E_i - E_i^TP_{2i} + \delta_2X_2 + Q_1 + Q_2 + R_3 + Q_{11} + s_2^2e^{\beta_0 s_2}R_4 - R_5 - (\tau_2-\tau_1)R_6\\
&\quad - 4\tau_2^2e^{\beta_0\tau_2}S_1 - 4(\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}S_3 - 4s_2^2e^{\beta_0 s_2}S_6 + U_1M_1U_1 + (O_1+O_1^T),\\
\Lambda_{13,14} &= R_5,\quad \Lambda_{13,16} = (\tau_2-\tau_1)R_6,\quad \Lambda_{13,18} = -O_1+O_2^T,\quad \Lambda_{13,19} = Q_{12},\quad \Lambda_{13,21} = 4\tau_2e^{\beta_0\tau_2}S_1,\quad \Lambda_{13,22} = 4(\tau_2-\tau_1)e^{\beta_0(\tau_2-\tau_1)}S_3,\\
\Lambda_{13,23} &= 4s_2e^{\beta_0 s_2}S_6,\quad \Lambda_{13,24} = -\beta_0P_{2i}E_i - \textstyle\sum_{j=1}^{N}\delta_{ij}P_{2j}E_j + E_i^TP_{2i}E_i,\quad \Lambda_{13,26} = -O_1+O_3^T,\\
\Lambda_{14,14} &= -R_5,\quad \Lambda_{15,15} = -e^{-\beta_0\tau_1}Q_1 - e^{-\beta_0\tau_1}R_7,\quad \Lambda_{15,16} = e^{-\beta_0\tau_1}R_7,\quad \Lambda_{16,16} = -e^{-\beta_0\tau_2}Q_2 - e^{-\beta_0\tau_1}R_7 - (\tau_2-\tau_1)R_6,\\
\Lambda_{17,17} &= s_{\mu 2}W_i - e^{-\beta_0 s_2}(1-s_{\mu 2})R_3 + s_2^2e^{\beta_0 s_2}E_i^TR_5E_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}E_i^TR_6E_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}E_i^TR_7E_i\\
&\quad - k_1E_i^TS_1E_i - k_3E_i^TS_3E_i - k_6E_i^TS_6E_i + \tau_2e^{\beta_0\tau_2}E_i^TZ_1E_i,\quad \Lambda_{17,24} = -s_{\mu 2}E_i^TP_{2i}E_i,\\
\Lambda_{18,18} &= -e^{-\beta_0\tau_2}Q_{11} + U_2M_2U_2 - (O_2+O_2^T),\quad \Lambda_{18,20} = -e^{-\beta_0\tau_2}Q_{12},\quad \Lambda_{18,26} = -O_2-O_3^T,\\
\Lambda_{19,19} &= Q_{22} + s_1^2e^{\beta_0 s_1}A_i^TQ_5A_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}A_i^TQ_6A_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}A_i^TQ_7A_i - k_2A_i^TS_2A_i - k_4A_i^TS_4A_i - k_5A_i^TS_5A_i + \delta_2e^{\beta_0\delta_2}A_i^TZ_2A_i - M_1,\\
\Lambda_{19,20} &= s_1^2e^{\beta_0 s_1}A_i^TQ_5B_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}A_i^TQ_6B_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}A_i^TQ_7B_i - k_2A_i^TS_2B_i - k_4A_i^TS_4B_i - k_5A_i^TS_5B_i + \delta_2e^{\beta_0\delta_2}A_i^TZ_2B_i,\\
\Lambda_{20,20} &= -e^{-\beta_0\tau_2}Q_{22} + s_1^2e^{\beta_0 s_1}B_i^TQ_5B_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}B_i^TQ_6B_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}B_i^TQ_7B_i - k_2B_i^TS_2B_i - k_4B_i^TS_4B_i - k_5B_i^TS_5B_i + \delta_2e^{\beta_0\delta_2}B_i^TZ_2B_i - M_2,\\
\Lambda_{21,21} &= -4e^{\beta_0\tau_2}S_1,\quad \Lambda_{22,22} = -4e^{\beta_0(\tau_2-\tau_1)}S_3,\quad \Lambda_{23,23} = -4e^{\beta_0 s_2}S_6,\quad \Lambda_{24,24} = \beta_0E_i^TP_{2i}E_i + \textstyle\sum_{j=1}^{N}\delta_{ij}E_j^TP_{2j}E_j - R_4,\\
\Lambda_{25,25} &= -(N_3+N_3^T),\quad \Lambda_{26,26} = -(O_3+O_3^T),
\end{aligned}$$

with

$$\begin{aligned}
k_1 &= 2\tau_2^2e^{2\beta_0\tau_2}\Big(\frac{\tau_2}{\beta_0}-\frac{1}{\beta_0^2}\Big) + \frac{2\tau_2^2e^{\beta_0\tau_2}}{\beta_0^2}, \qquad k_2 = 2\delta_2^2e^{2\beta_0\delta_2}\Big(\frac{\delta_2}{\beta_0}-\frac{1}{\beta_0^2}\Big) + \frac{2\delta_2^2e^{\beta_0\delta_2}}{\beta_0^2},\\
k_3 &= 2(\tau_2^2-\tau_1^2)e^{2\beta_0\tau_2-\beta_0\tau_1}\Big(\frac{\tau_2}{\beta_0}-\frac{1}{\beta_0^2}\Big) + 2(\tau_2^2-\tau_1^2)e^{\beta_0\tau_2}\Big(\frac{1}{\beta_0^2}-\frac{\tau_1}{\beta_0}\Big), \qquad k_4 = 2(\delta_2^2-\delta_1^2)e^{2\beta_0\delta_2-\beta_0\delta_1}\Big(\frac{\delta_2}{\beta_0}-\frac{1}{\beta_0^2}\Big) + 2(\delta_2^2-\delta_1^2)e^{\beta_0\delta_2}\Big(\frac{1}{\beta_0^2}-\frac{\delta_1}{\beta_0}\Big),\\
k_5 &= 2s_1^2e^{2\beta_0 s_1}\Big(\frac{s_1}{\beta_0}-\frac{1}{\beta_0^2}\Big) + \frac{2s_1^2e^{\beta_0 s_1}}{\beta_0^2}, \qquad k_6 = 2s_2^2e^{2\beta_0 s_2}\Big(\frac{s_2}{\beta_0}-\frac{1}{\beta_0^2}\Big) + \frac{2s_2^2e^{\beta_0 s_2}}{\beta_0^2},
\end{aligned}$$

$$W_{1i} = \big[\sqrt{s_{\mu 1}}\,P_{1i}C_i\ \ \underbrace{0\ \cdots\ 0}_{25\ \text{blocks}}\big], \qquad W_{2i} = \big[\underbrace{0\ \cdots\ 0}_{12\ \text{blocks}}\ \ \sqrt{s_{\mu 2}}\,P_{2i}E_i\ \ \underbrace{0\ \cdots\ 0}_{13\ \text{blocks}}\big],$$

$$U_j = \mathrm{diag}(\tilde u_{j1},\tilde u_{j2},\dots,\tilde u_{jn}), \quad \tilde u_{ji} = \max\{|\tilde u_{ji}^-|,|\tilde u_{ji}^+|\}; \qquad V_j = \mathrm{diag}(\tilde v_{j1},\tilde v_{j2},\dots,\tilde v_{jn}), \quad \tilde v_{ji} = \max\{|\tilde v_{ji}^-|,|\tilde v_{ji}^+|\},$$

and the remaining terms are zero.
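Because $\Pi_i$ is a $26\times 26$ block matrix whose nonzero entries are easy to mistype, a small helper that assembles it from a dictionary of upper-triangular blocks can reduce errors when checking the LMIs numerically. The sketch below is our own illustration (plain NumPy; the block indices and sizes are assumptions matching the theorem, not code from the paper):

```python
import numpy as np

def assemble_blocks(blocks, n, m=26):
    """Assemble a symmetric (m*n x m*n) matrix from nonzero upper blocks.

    blocks: dict mapping (i, j) with i <= j (1-based block indices)
            to an (n x n) ndarray; lower blocks are filled by symmetry.
    """
    P = np.zeros((m * n, m * n))
    for (i, j), blk in blocks.items():
        r, c = (i - 1) * n, (j - 1) * n
        P[r:r + n, c:c + n] = blk
        if i != j:
            P[c:c + n, r:r + n] = blk.T
    return P

# Tiny illustration with n = 2 and two hypothetical blocks:
n = 2
Pi = assemble_blocks({(1, 1): -np.eye(n), (1, 19): 0.5 * np.eye(n)}, n)
assert np.allclose(Pi, Pi.T)  # the assembled matrix is symmetric
```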
Proof. We rewrite system (2) as

$$
\left\{\begin{aligned}
\dot x(t) &= -C(r(t))x(t-s_1(t)) + A(r(t))H^{(1)}(y(t))\,y(t) + B(r(t))H^{(2)}(y(t-\tau(t)))\,y(t-\tau(t)), && t \neq t_k,\\
\Delta x(t_k) &= x(t_k) - x(t_k^-) = -D_k(r(t))\Big\{x(t_k^-) - C(r(t))\int_{t_k-s_1(t_k)}^{t_k}x(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,\\
\dot y(t) &= -E(r(t))y(t-s_2(t)) + F(r(t))J^{(1)}(x(t))\,x(t) + G(r(t))J^{(2)}(x(t-\delta(t)))\,x(t-\delta(t)), && t \neq t_k,\\
\Delta y(t_k) &= y(t_k) - y(t_k^-) = -H_k(r(t))\Big\{y(t_k^-) - E(r(t))\int_{t_k-s_2(t_k)}^{t_k}y(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,
\end{aligned}\right.\tag{8}
$$

where $H^{(l)}(y(t)) = \mathrm{diag}(h_1^{(l)}(y_1(t)),h_2^{(l)}(y_2(t)),\dots,h_n^{(l)}(y_n(t)))$ and $J^{(l)}(x(t)) = \mathrm{diag}(j_1^{(l)}(x_1(t)),j_2^{(l)}(x_2(t)),\dots,j_n^{(l)}(x_n(t)))$ ($l = 1,2$) are such that, for $l = 1,2$ and $i = 1,2,\dots,n$,

$$U_i^{(l)-} \le H_i^{(l)}(y_i(t)) = \frac{f_i^{(l)}(y_i(t))}{y_i(t)} \le U_i^{(l)+},\tag{9}$$

$$V_i^{(l)-} \le J_i^{(l)}(x_i(t)) = \frac{g_i^{(l)}(x_i(t))}{x_i(t)} \le V_i^{(l)+}.\tag{10}$$
Since $y(t-\tau(t)) = y(t) - \int_{t-\tau(t)}^{t}\dot y(s)\,\mathrm{d}s$ and $x(t-\delta(t)) = x(t) - \int_{t-\delta(t)}^{t}\dot x(s)\,\mathrm{d}s$, (8) becomes

$$
\left\{\begin{aligned}
\dot x(t) &= -C(r(t))x(t-s_1(t)) + A(r(t))H^{(1)}(y(t))y(t) + B(r(t))H^{(2)}(y(t-\tau(t)))y(t)\\
&\quad - B(r(t))H^{(2)}(y(t-\tau(t)))\int_{t-\tau(t)}^{t}\dot y(s)\,\mathrm{d}s, && t \neq t_k,\\
\Delta x(t_k) &= x(t_k) - x(t_k^-) = -D_k(r(t))\Big\{x(t_k^-) - C(r(t))\int_{t_k-s_1(t_k)}^{t_k}x(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,\\
\dot y(t) &= -E(r(t))y(t-s_2(t)) + F(r(t))J^{(1)}(x(t))x(t) + G(r(t))J^{(2)}(x(t-\delta(t)))x(t)\\
&\quad - G(r(t))J^{(2)}(x(t-\delta(t)))\int_{t-\delta(t)}^{t}\dot x(s)\,\mathrm{d}s, && t \neq t_k,\\
\Delta y(t_k) &= y(t_k) - y(t_k^-) = -H_k(r(t))\Big\{y(t_k^-) - E(r(t))\int_{t_k-s_2(t_k)}^{t_k}y(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+.
\end{aligned}\right.\tag{11}
$$
Obviously, system (11) is equivalent to the following:

$$
\left\{\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\Big[x(t) - C(r(t))\int_{t-s_1(t)}^{t}x(s)\,\mathrm{d}s\Big] &= -C(r(t))x(t) - C(r(t))x(t-s_1(t))\dot s_1(t) + A(r(t))H^{(1)}(y(t))y(t)\\
&\quad + B(r(t))H^{(2)}(y(t-\tau(t)))y(t) - B(r(t))H^{(2)}(y(t-\tau(t)))\int_{t-\tau(t)}^{t}\dot y(s)\,\mathrm{d}s, && t \neq t_k,\\
\Delta x(t_k) = x(t_k) - x(t_k^-) &= -D_k(r(t))\Big\{x(t_k^-) - C(r(t))\int_{t_k-s_1(t_k)}^{t_k}x(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+,\\
\frac{\mathrm{d}}{\mathrm{d}t}\Big[y(t) - E(r(t))\int_{t-s_2(t)}^{t}y(s)\,\mathrm{d}s\Big] &= -E(r(t))y(t) - E(r(t))y(t-s_2(t))\dot s_2(t) + F(r(t))J^{(1)}(x(t))x(t)\\
&\quad + G(r(t))J^{(2)}(x(t-\delta(t)))x(t) - G(r(t))J^{(2)}(x(t-\delta(t)))\int_{t-\delta(t)}^{t}\dot x(s)\,\mathrm{d}s, && t \neq t_k,\\
\Delta y(t_k) = y(t_k) - y(t_k^-) &= -H_k(r(t))\Big\{y(t_k^-) - E(r(t))\int_{t_k-s_2(t_k)}^{t_k}y(s)\,\mathrm{d}s\Big\}, && k \in \mathbb{Z}_+.
\end{aligned}\right.\tag{12}
$$
Now, let us consider the following Lyapunov–Krasovskii functional:

$$V(t,x(t),y(t),i) = \sum_{m=1}^{5}V_m(t,x(t),y(t),i),\tag{13}$$

where

$$V_1 = e^{\beta_0 t}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big] + e^{\beta_0 t}\Big[y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big]^T\!P_{2i}\Big[y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big],$$

$$\begin{aligned}
V_2 &= \int_{t-\tau_1}^{t}e^{\beta_0 s}y^T(s)Q_1y(s)\,\mathrm{d}s + \int_{t-\tau_2}^{t}e^{\beta_0 s}y^T(s)Q_2y(s)\,\mathrm{d}s + \int_{t-s_1(t)}^{t}e^{\beta_0 s}x^T(s)Q_3x(s)\,\mathrm{d}s + \int_{t-\delta_1}^{t}e^{\beta_0 s}x^T(s)R_1x(s)\,\mathrm{d}s\\
&\quad + \int_{t-\delta_2}^{t}e^{\beta_0 s}x^T(s)R_2x(s)\,\mathrm{d}s + \int_{t-s_2(t)}^{t}e^{\beta_0 s}y^T(s)R_3y(s)\,\mathrm{d}s + \int_{t-\delta(t)}^{t}e^{\beta_0 s}\begin{bmatrix}x(s)\\ g(x(s))\end{bmatrix}^T\begin{bmatrix}R_{11}&R_{12}\\ *&R_{22}\end{bmatrix}\begin{bmatrix}x(s)\\ g(x(s))\end{bmatrix}\mathrm{d}s\\
&\quad + \int_{t-\tau(t)}^{t}e^{\beta_0 s}\begin{bmatrix}y(s)\\ f(y(s))\end{bmatrix}^T\begin{bmatrix}Q_{11}&Q_{12}\\ *&Q_{22}\end{bmatrix}\begin{bmatrix}y(s)\\ f(y(s))\end{bmatrix}\mathrm{d}s + s_1\int_{-s_1}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+s_1)}x^T(s)Q_4x(s)\,\mathrm{d}s\,\mathrm{d}\theta + s_2\int_{-s_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+s_2)}y^T(s)R_4y(s)\,\mathrm{d}s\,\mathrm{d}\theta,
\end{aligned}$$

$$\begin{aligned}
V_3 &= s_1\int_{-s_1}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+s_1)}\dot x^T(s)Q_5\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta + s_2\int_{-s_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+s_2)}\dot y^T(s)R_5\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta + \tau_2(\tau_2-\tau_1)\int_{-\tau_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+\tau_2)}\dot y^T(s)R_6\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta\\
&\quad + \delta_2(\delta_2-\delta_1)\int_{-\delta_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+\delta_2)}\dot x^T(s)Q_6\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta + (\tau_2-\tau_1)\int_{-\tau_2}^{-\tau_1}\!\int_{t+\theta}^{t}e^{\beta_0(s+\tau_2-\tau_1)}\dot y^T(s)R_7\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta + (\delta_2-\delta_1)\int_{-\delta_2}^{-\delta_1}\!\int_{t+\theta}^{t}e^{\beta_0(s+\delta_2-\delta_1)}\dot x^T(s)Q_7\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta,
\end{aligned}$$

$$\begin{aligned}
V_4 &= 2\tau_2^2\int_{-\tau_2}^{0}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+\tau_2)}\dot y^T(s)S_1\dot y(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta + 2\delta_2^2\int_{-\delta_2}^{0}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+\delta_2)}\dot x^T(s)S_2\dot x(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta\\
&\quad + 2(\tau_2^2-\tau_1^2)\int_{-\tau_2}^{-\tau_1}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+\tau_2-\tau_1)}\dot y^T(s)S_3\dot y(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta + 2(\delta_2^2-\delta_1^2)\int_{-\delta_2}^{-\delta_1}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+\delta_2-\delta_1)}\dot x^T(s)S_4\dot x(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta\\
&\quad + 2s_1^2\int_{-s_1}^{0}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+s_1)}\dot x^T(s)S_5\dot x(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta + 2s_2^2\int_{-s_2}^{0}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}e^{\beta_0(s-\theta+s_2)}\dot y^T(s)S_6\dot y(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta,
\end{aligned}$$

$$V_5 = \int_{-\tau_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+\tau_2)}\dot y^T(s)Z_1\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta + \int_{-\delta_2}^{0}\!\int_{t+\theta}^{t}e^{\beta_0(s+\delta_2)}\dot x^T(s)Z_2\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta.$$
By (13) and a direct computation, we obtain

$$\mathcal{L}V(t,x(t),y(t),i) = \sum_{m=1}^{5}\mathcal{L}V_m(t,x(t),y(t),i),\tag{14}$$

where
$$\begin{aligned}
\mathcal{L}V_1 &= \beta_0e^{\beta_0 t}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big] + \sum_{j=1}^{N}\delta_{ij}e^{\beta_0 t}\Big[x(t)-C_j\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]^T\!P_{1j}\Big[x(t)-C_j\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]\\
&\quad + 2e^{\beta_0 t}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\,\frac{\mathrm{d}}{\mathrm{d}t}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big] + \big(\text{the same three terms in }y,\text{ with }P_{2i},P_{2j},E_i,E_j,s_2(t)\big)\\[2pt]
&\le e^{\beta_0 t}\Big\{\beta_0x^T(t)P_{1i}x(t) - 2\beta_0x^T(t)P_{1i}C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s + \beta_0\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!C_i^TP_{1i}C_i\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)\\
&\quad + x^T(t)\sum_{j=1}^{N}\delta_{ij}P_{1j}x(t) - 2x^T(t)\sum_{j=1}^{N}\delta_{ij}P_{1j}C_j\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s + \Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\sum_{j=1}^{N}\delta_{ij}C_j^TP_{1j}C_j\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)\\
&\quad - 2x^T(t)P_{1i}C_ix(t) + s_{\mu 1}x^T(t)P_{1i}C_iT_i^{-1}C_i^TP_{1i}x(t) + s_{\mu 1}x^T(t-s_1(t))T_ix(t-s_1(t)) + 2x^T(t)P_{1i}A_if^{(1)}(y(t))\\
&\quad + 2x^T(t)P_{1i}B_iH^{(2)}(y(t-\tau(t)))y(t) - 2x^T(t)P_{1i}B_iH^{(2)}(y(t-\tau(t)))\!\int_{t-\tau(t)}^{t}\!\dot y(s)\mathrm{d}s + 2\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!C_i^TP_{1i}C_ix(t)\\
&\quad - 2\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!C_i^TP_{1i}A_if^{(1)}(y(t)) - 2\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!C_i^TP_{1i}B_if^{(2)}(y(t-\tau(t))) - s_{\mu 1}\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!C_i^TP_{1i}C_ix(t-s_1(t))\\
&\quad + \big(\text{the analogous }y\text{-terms, with }P_{2i},E_i,F_i,G_i,W_i,s_{\mu 2},g^{(1)},g^{(2)},J^{(1)},J^{(2)},\delta(t)\big)\Big\},
\end{aligned}\tag{15}$$
$$\begin{aligned}
\mathcal{L}V_2 &\le e^{\beta_0 t}\Big[y^T(t)Q_1y(t) - e^{-\beta_0\tau_1}y^T(t-\tau_1)Q_1y(t-\tau_1) + y^T(t)Q_2y(t) - e^{-\beta_0\tau_2}y^T(t-\tau_2)Q_2y(t-\tau_2)\\
&\quad + x^T(t)Q_3x(t) - e^{-\beta_0 s_1}(1-s_{\mu 1})x^T(t-s_1(t))Q_3x(t-s_1(t)) + x^T(t)R_1x(t) - e^{-\beta_0\delta_1}x^T(t-\delta_1)R_1x(t-\delta_1)\\
&\quad + x^T(t)R_2x(t) - e^{-\beta_0\delta_2}x^T(t-\delta_2)R_2x(t-\delta_2) + y^T(t)R_3y(t) - e^{-\beta_0 s_2}(1-s_{\mu 2})y^T(t-s_2(t))R_3y(t-s_2(t))\\
&\quad + y^T(t)Q_{11}y(t) + f^T(y(t))Q_{12}^Ty(t) + y^T(t)Q_{12}f(y(t)) + f^T(y(t))Q_{22}f(y(t))\\
&\quad - e^{-\beta_0\tau_2}\big[y^T(t-\tau(t))Q_{11}y(t-\tau(t)) + f^T(y(t-\tau(t)))Q_{12}^Ty(t-\tau(t)) + y^T(t-\tau(t))Q_{12}f(y(t-\tau(t))) + f^T(y(t-\tau(t)))Q_{22}f(y(t-\tau(t)))\big]\\
&\quad + x^T(t)R_{11}x(t) + g^T(x(t))R_{12}^Tx(t) + x^T(t)R_{12}g(x(t)) + g^T(x(t))R_{22}g(x(t))\\
&\quad - e^{-\beta_0\delta_2}\big[x^T(t-\delta(t))R_{11}x(t-\delta(t)) + g^T(x(t-\delta(t)))R_{12}^Tx(t-\delta(t)) + x^T(t-\delta(t))R_{12}g(x(t-\delta(t))) + g^T(x(t-\delta(t)))R_{22}g(x(t-\delta(t)))\big]\\
&\quad + s_1^2e^{\beta_0 s_1}x^T(t)Q_4x(t) - \Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\!Q_4\Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big) + s_2^2e^{\beta_0 s_2}y^T(t)R_4y(t) - \Big(\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big)^T\!R_4\Big(\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big)\Big],
\end{aligned}\tag{16}$$
$$\begin{aligned}
\mathcal{L}V_3 &= s_1^2e^{\beta_0(t+s_1)}\dot x^T(t)Q_5\dot x(t) - s_1\!\int_{t-s_1}^{t}\!e^{\beta_0(s+s_1)}\dot x^T(s)Q_5\dot x(s)\,\mathrm{d}s + s_2^2e^{\beta_0(t+s_2)}\dot y^T(t)R_5\dot y(t) - s_2\!\int_{t-s_2}^{t}\!e^{\beta_0(s+s_2)}\dot y^T(s)R_5\dot y(s)\,\mathrm{d}s\\
&\quad + \tau_2^2(\tau_2-\tau_1)e^{\beta_0(t+\tau_2)}\dot y^T(t)R_6\dot y(t) - \tau_2(\tau_2-\tau_1)\!\int_{t-\tau_2}^{t}\!e^{\beta_0(s+\tau_2)}\dot y^T(s)R_6\dot y(s)\,\mathrm{d}s + \delta_2^2(\delta_2-\delta_1)e^{\beta_0(t+\delta_2)}\dot x^T(t)Q_6\dot x(t)\\
&\quad - \delta_2(\delta_2-\delta_1)\!\int_{t-\delta_2}^{t}\!e^{\beta_0(s+\delta_2)}\dot x^T(s)Q_6\dot x(s)\,\mathrm{d}s + (\tau_2-\tau_1)^2e^{\beta_0(t+\tau_2-\tau_1)}\dot y^T(t)R_7\dot y(t) - (\tau_2-\tau_1)\!\int_{t-\tau_2}^{t-\tau_1}\!e^{\beta_0(s+\tau_2-\tau_1)}\dot y^T(s)R_7\dot y(s)\,\mathrm{d}s\\
&\quad + (\delta_2-\delta_1)^2e^{\beta_0(t+\delta_2-\delta_1)}\dot x^T(t)Q_7\dot x(t) - (\delta_2-\delta_1)\!\int_{t-\delta_2}^{t-\delta_1}\!e^{\beta_0(s+\delta_2-\delta_1)}\dot x^T(s)Q_7\dot x(s)\,\mathrm{d}s,
\end{aligned}\tag{17}$$

$$\begin{aligned}
\mathcal{L}V_4 &\le e^{\beta_0 t}k_1\dot y^T(t)S_1\dot y(t) - 2\tau_2^2e^{\beta_0(t+\tau_2)}\!\int_{-\tau_2}^{0}\!\int_{t+\theta}^{t}\!\dot y^T(s)S_1\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta + e^{\beta_0 t}k_2\dot x^T(t)S_2\dot x(t) - 2\delta_2^2e^{\beta_0(t+\delta_2)}\!\int_{-\delta_2}^{0}\!\int_{t+\theta}^{t}\!\dot x^T(s)S_2\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta\\
&\quad + e^{\beta_0 t}k_3\dot y^T(t)S_3\dot y(t) - 2(\tau_2^2-\tau_1^2)e^{\beta_0(t+\tau_2-\tau_1)}\!\int_{-\tau_2}^{-\tau_1}\!\int_{t+\theta}^{t}\!\dot y^T(s)S_3\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta + e^{\beta_0 t}k_4\dot x^T(t)S_4\dot x(t) - 2(\delta_2^2-\delta_1^2)e^{\beta_0(t+\delta_2-\delta_1)}\!\int_{-\delta_2}^{-\delta_1}\!\int_{t+\theta}^{t}\!\dot x^T(s)S_4\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta\\
&\quad + e^{\beta_0 t}k_5\dot x^T(t)S_5\dot x(t) - 2s_1^2e^{\beta_0(t+s_1)}\!\int_{-s_1}^{0}\!\int_{t+\theta}^{t}\!\dot x^T(s)S_5\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta + e^{\beta_0 t}k_6\dot y^T(t)S_6\dot y(t) - 2s_2^2e^{\beta_0(t+s_2)}\!\int_{-s_2}^{0}\!\int_{t+\theta}^{t}\!\dot y^T(s)S_6\dot y(s)\,\mathrm{d}s\,\mathrm{d}\theta,
\end{aligned}\tag{18}$$

$$\mathcal{L}V_5 = \tau_2e^{\beta_0(t+\tau_2)}\dot y^T(t)Z_1\dot y(t) - e^{\beta_0 t}\!\int_{t-\tau_2}^{t}\!\dot y^T(s)Z_1\dot y(s)\,\mathrm{d}s + \delta_2e^{\beta_0(t+\delta_2)}\dot x^T(t)Z_2\dot x(t) - e^{\beta_0 t}\!\int_{t-\delta_2}^{t}\!\dot x^T(s)Z_2\dot x(s)\,\mathrm{d}s.\tag{19}$$

On the other hand, it follows from Lemma 2.5 that

$$\begin{aligned}
-2x^T(t)P_{1i}B_iH^{(2)}(y(t-\tau(t)))\!\int_{t-\tau(t)}^{t}\!\dot y(s)\,\mathrm{d}s &\le \int_{t-\tau(t)}^{t}\begin{bmatrix}x(t)\\ \dot y(s)\end{bmatrix}^T\begin{bmatrix}X_1 & Y_1-P_{1i}B_iH^{(2)}(y(t-\tau(t)))\\ * & Z_1\end{bmatrix}\begin{bmatrix}x(t)\\ \dot y(s)\end{bmatrix}\mathrm{d}s\\
&\le x^T(t)\tau_2X_1x(t) + x^T(t)(Y_1^T+Y_1)y(t) - x^T(t)(Y_1^T+Y_1)y(t-\tau(t))\\
&\quad - 2x^T(t)P_{1i}B_iH^{(2)}(y(t-\tau(t)))y(t) + 2x^T(t)P_{1i}B_if^{(2)}(y(t-\tau(t))) + \int_{t-\tau_2}^{t}\!\dot y^T(s)Z_1\dot y(s)\,\mathrm{d}s,
\end{aligned}\tag{20}$$

$$\begin{aligned}
-2y^T(t)P_{2i}G_iJ^{(2)}(x(t-\delta(t)))\!\int_{t-\delta(t)}^{t}\!\dot x(s)\,\mathrm{d}s &\le \int_{t-\delta(t)}^{t}\begin{bmatrix}y(t)\\ \dot x(s)\end{bmatrix}^T\begin{bmatrix}X_2 & Y_2-P_{2i}G_iJ^{(2)}(x(t-\delta(t)))\\ * & Z_2\end{bmatrix}\begin{bmatrix}y(t)\\ \dot x(s)\end{bmatrix}\mathrm{d}s\\
&\le y^T(t)\delta_2X_2y(t) + y^T(t)(Y_2^T+Y_2)x(t) - y^T(t)(Y_2^T+Y_2)x(t-\delta(t))\\
&\quad - 2y^T(t)P_{2i}G_iJ^{(2)}(x(t-\delta(t)))x(t) + 2y^T(t)P_{2i}G_ig^{(2)}(x(t-\delta(t))) + \int_{t-\delta_2}^{t}\!\dot x^T(s)Z_2\dot x(s)\,\mathrm{d}s.
\end{aligned}\tag{21}$$
By Assumption 2.1, we get

$$\begin{aligned}
&f^{(1)T}(y(t))M_1f^{(1)}(y(t)) \le y^T(t)U_1M_1U_1y(t), \qquad f^{(2)T}(y(t-\tau(t)))M_2f^{(2)}(y(t-\tau(t))) \le y^T(t-\tau(t))U_2M_2U_2y(t-\tau(t)),\\
&g^{(1)T}(x(t))M_3g^{(1)}(x(t)) \le x^T(t)V_1M_3V_1x(t), \qquad g^{(2)T}(x(t-\delta(t)))M_4g^{(2)}(x(t-\delta(t))) \le x^T(t-\delta(t))V_2M_4V_2x(t-\delta(t)).
\end{aligned}\tag{22}$$
Letting

$$z_1(t) = -C(r(t))x(t-s_1(t)) + A(r(t))H^{(1)}(y(t))y(t) + B(r(t))H^{(2)}(y(t-\tau(t)))y(t-\tau(t)),$$
$$z_2(t) = -E(r(t))y(t-s_2(t)) + F(r(t))J^{(1)}(x(t))x(t) + G(r(t))J^{(2)}(x(t-\delta(t)))x(t-\delta(t)),$$

it follows from (2) that $\dot x(t) = z_1(t)$ and $\dot y(t) = z_2(t)$, and so

$$x(t) - x(t-\delta(t)) - \int_{t-\delta(t)}^{t}z_1(s)\,\mathrm{d}s = 0, \qquad y(t) - y(t-\tau(t)) - \int_{t-\tau(t)}^{t}z_2(s)\,\mathrm{d}s = 0.$$

Hence, for any matrices $N_1$, $N_2$, $N_3$, $O_1$, $O_2$, $O_3$ with appropriate dimensions, we have

$$0 = 2\Big[x^T(t)N_1 + x^T(t-\delta(t))N_2 + \Big(\int_{t-\delta(t)}^{t}z_1(s)\,\mathrm{d}s\Big)^T\!N_3\Big]\Big[x(t) - x(t-\delta(t)) - \int_{t-\delta(t)}^{t}z_1(s)\,\mathrm{d}s\Big],\tag{23}$$

$$0 = 2\Big[y^T(t)O_1 + y^T(t-\tau(t))O_2 + \Big(\int_{t-\tau(t)}^{t}z_2(s)\,\mathrm{d}s\Big)^T\!O_3\Big]\Big[y(t) - y(t-\tau(t)) - \int_{t-\tau(t)}^{t}z_2(s)\,\mathrm{d}s\Big].\tag{24}$$

By (14)–(24) and applying Lemma 2.7, we easily obtain

$$E\,\mathcal{L}V(t,x(t),y(t),i) \le E\,e^{\beta_0 t}\,\Psi^T(t)\big[\Pi_i + W_{1i}^TT_i^{-1}W_{1i} + W_{2i}^TW_i^{-1}W_{2i}\big]\Psi(t),\tag{25}$$
where $\Psi(t) = [\Psi_1^T(t)\ \ \Psi_2^T(t)\ \ \Psi_3^T(t)]^T$ and

$$\Psi_1(t) = \Big[x^T(t)\ \ x^T(t-s_1)\ \ x^T(t-\delta_1)\ \ x^T(t-\delta_2)\ \ x^T(t-s_1(t))\ \ x^T(t-\delta(t))\ \ g^{(1)T}(x(t))\ \ g^{(2)T}(x(t-\delta(t)))\ \ \Big(\int_{t-\delta_2}^{t}\!x(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-\delta_2}^{t-\delta_1}\!x(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-s_1}^{t}\!x(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big)^T\Big]^T,$$

$$\Psi_2(t) = \Big[y^T(t)\ \ y^T(t-s_2)\ \ y^T(t-\tau_1)\ \ y^T(t-\tau_2)\ \ y^T(t-s_2(t))\ \ y^T(t-\tau(t))\ \ f^{(1)T}(y(t))\ \ f^{(2)T}(y(t-\tau(t)))\ \ \Big(\int_{t-\tau_2}^{t}\!y(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-\tau_2}^{t-\tau_1}\!y(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-s_2}^{t}\!y(s)\mathrm{d}s\Big)^T\ \Big(\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big)^T\Big]^T,$$

$$\Psi_3(t) = \Big[\Big(\int_{t-\delta(t)}^{t}\!z_1(s)\mathrm{d}s\Big)^T\ \ \Big(\int_{t-\tau(t)}^{t}\!z_2(s)\mathrm{d}s\Big)^T\Big]^T.$$
Clearly, it follows from (3) that

$$E\,\mathcal{L}V(t,x(t),y(t),i) \le 0, \qquad t\in[t_{k-1},t_k).\tag{26}$$

For any $t, t'\in[t_{k-1},t_k)$ with $t > t'$, by using Dynkin's formula and (26), we obtain

$$EV(t,x(t),y(t),i) - EV(t',x(t'),y(t'),r(t')) = E\int_{t'}^{t}\mathcal{L}V(s,x(s),y(s),r(s))\,\mathrm{d}s \le 0,\tag{27}$$

which yields

$$EV(t,x(t),y(t),i) \le EV(t',x(t'),y(t'),r(t')).\tag{28}$$
Furthermore, pre- and post-multiplying (4) by $\mathrm{diag}\{I, P_{1l}^{-1}\}$ gives

$$\begin{bmatrix}P_{1i} & (I-D_{ik})^TP_{1l}\\ * & P_{1l}\end{bmatrix} \ge 0 \;\Longleftrightarrow\; \begin{bmatrix}P_{1i} & (I-D_{ik})^T\\ * & P_{1l}^{-1}\end{bmatrix} \ge 0 \;\Longleftrightarrow\; P_{1i} - (I-D_{ik})^TP_{1l}(I-D_{ik}) \ge 0.\tag{29}$$

It should be noted that the last equivalence in (29) follows from Lemma 2.7. Similarly, pre- and post-multiplying (5) by $\mathrm{diag}\{I, P_{2l}^{-1}\}$, we have

$$\begin{bmatrix}P_{2i} & (I-H_{ik})^TP_{2l}\\ * & P_{2l}\end{bmatrix} \ge 0 \;\Longleftrightarrow\; P_{2i} - (I-H_{ik})^TP_{2l}(I-H_{ik}) \ge 0.\tag{30}$$

On the other hand, from system (2) it follows that

$$x(t_k) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s = (I-D_{ik})\Big[x(t_k^-) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big], \qquad y(t_k) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s = (I-H_{ik})\Big[y(t_k^-) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big].$$

Combining (29) and (30), we have

$$\begin{aligned}
V_1(t_k,x(t_k),y(t_k),i) &= e^{\beta_0 t_k}\Big[x(t_k) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\Big[x(t_k) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big] + e^{\beta_0 t_k}\Big[y(t_k) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]^T\!P_{2i}\Big[y(t_k) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]\\
&= e^{\beta_0 t_k}\Big[x(t_k^-) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big]^T(I-D_{ik})^TP_{1i}(I-D_{ik})\Big[x(t_k^-) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big]\\
&\quad + e^{\beta_0 t_k}\Big[y(t_k^-) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]^T(I-H_{ik})^TP_{2i}(I-H_{ik})\Big[y(t_k^-) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]\\
&\le e^{\beta_0 t_k}\Big[x(t_k^-) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\Big[x(t_k^-) - C_i\!\int_{t_k-s_1(t_k)}^{t_k}\!x(s)\mathrm{d}s\Big] + e^{\beta_0 t_k}\Big[y(t_k^-) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]^T\!P_{2i}\Big[y(t_k^-) - E_i\!\int_{t_k-s_2(t_k)}^{t_k}\!y(s)\mathrm{d}s\Big]\\
&= V_1(t_k^-,x(t_k^-),y(t_k^-),i),
\end{aligned}$$

which verifies that

$$V(t_k,x(t_k),y(t_k),i) \le V(t_k^-,x(t_k^-),y(t_k^-),i), \qquad k\in\mathbb{Z}_+.\tag{31}$$
By (28), (31) and mathematical induction (see, for instance, [19,26] for details), we easily obtain

$$V(t,x(t),y(t),i) \le V(0,x(0),y(0),r(0)), \qquad t \ge 0.$$

Then, it follows from Lemma 2.6 that

$$\begin{aligned}
E\Big\{\Big|C_i\!\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big|^2\Big\} &= E\Big\{\Big(C_i\!\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)^T\Big(C_i\!\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)\Big\} \le \lambda_{\max}(C_i^2)\,E\Big\{\Big(\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)^T\Big(\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)\Big\}\\
&\le \frac{\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}\,E\Big\{\Big(\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)^T\!Q_3\Big(\int_{t-s_1(t)}^{t}\!x(s)\,\mathrm{d}s\Big)\Big\} \le \frac{\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}\,s_1(t)\,E\int_{t-s_1(t)}^{t}\!x^T(s)Q_3x(s)\,\mathrm{d}s\\
&\le s_1\frac{\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}\,E\int_{t-s_1}^{t}\!x^T(s)Q_3x(s)\,\mathrm{d}s \le s_1\frac{\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}\,e^{-\beta_0(t-s_1)}\,E\int_{t-s_1}^{t}\!e^{\beta_0 s}x^T(s)Q_3x(s)\,\mathrm{d}s\\
&\le s_1\frac{\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}\,e^{-\beta_0(t-s_1)}\,EV(0,x(0),y(0),r(0)), \qquad t \ge 0.
\end{aligned}\tag{32}$$
Similarly, we have

$$E\Big\{\Big|E_i\!\int_{t-s_2(t)}^{t}\!y(s)\,\mathrm{d}s\Big|^2\Big\} \le s_2\frac{\lambda_{\max}(E_i^2)}{\lambda_{\min}(R_3)}\,e^{-\beta_0(t-s_2)}\,EV(0,x(0),y(0),r(0)), \qquad t \ge 0.\tag{33}$$
A direct computation yields

$$EV(0,x(0),y(0),r(0)) \le \Theta_1\sup_{-\delta\le\theta\le 0}E|\phi(\theta)|^2 + \Theta_2\sup_{-\tau\le\theta\le 0}E|\psi(\theta)|^2,\tag{34}$$

where $\Theta_1$ and $\Theta_2$ are the constants written out explicitly below (35).
On the other hand, it follows from the definition of $V(t,x(t),y(t),i)$ that

$$\begin{aligned}
V(t,x(t),y(t),i) &\ge e^{\beta_0 t}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big]^T\!P_{1i}\Big[x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big] + e^{\beta_0 t}\Big[y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big]^T\!P_{2i}\Big[y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big]\\
&\ge e^{\beta_0 t}\min\{\lambda_{\min}(P_{1i}),\lambda_{\min}(P_{2i})\}\Big[\Big|x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big|^2 + \Big|y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big|^2\Big],
\end{aligned}$$

which together with the fact $EV(0,x(0),y(0),r(0)) \ge EV(t,x(t),y(t),i)$ yields

$$E\Big[\Big|x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big|^2 + \Big|y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big|^2\Big] \le \frac{e^{-\beta_0 t}}{\min\{\lambda_{\min}(P_{1i}),\lambda_{\min}(P_{2i})\}}\Big[\Theta_1\sup_{-\delta\le\theta\le 0}E|\phi(\theta)|^2 + \Theta_2\sup_{-\tau\le\theta\le 0}E|\psi(\theta)|^2\Big],\tag{35}$$

where

$$\begin{aligned}
\Theta_1 &= 2\max_{i\in S}\lambda_{\max}(P_{1i})\Big(1 + s_1^2\max_{i\in S}\lambda_{\max}(C_i^2)\Big) + s_1\lambda_{\max}(Q_3) + \delta_1\lambda_{\max}(R_1) + \delta_2\lambda_{\max}(R_2) + \delta_2\big(\lambda_{\max}(R_{11}) + \lambda_{\max}(R_{12}) + \lambda_{\max}(R_{12}^T) + \lambda_{\max}(R_{22})\big)\\
&\quad + s_1^2e^{\beta_0 s_1}\lambda_{\max}(Q_4) + s_1^2e^{\beta_0 s_1}\lambda_{\max}(Q_5) + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}\lambda_{\max}(Q_6) + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}\lambda_{\max}(Q_7)\\
&\quad + k_2\lambda_{\max}(S_2) + k_4\lambda_{\max}(S_4) + k_5\lambda_{\max}(S_5) + \delta_2e^{\beta_0\delta_2}\lambda_{\max}(Z_2),\\
\Theta_2 &= 2\max_{i\in S}\lambda_{\max}(P_{2i})\Big(1 + s_2^2\max_{i\in S}\lambda_{\max}(E_i^2)\Big) + \tau_1\lambda_{\max}(Q_1) + \tau_2\lambda_{\max}(Q_2) + s_2\lambda_{\max}(R_3) + \tau_2\big(\lambda_{\max}(Q_{11}) + \lambda_{\max}(Q_{12}) + \lambda_{\max}(Q_{12}^T) + \lambda_{\max}(Q_{22})\big)\\
&\quad + s_2^2e^{\beta_0 s_2}\lambda_{\max}(R_4) + s_2^2e^{\beta_0 s_2}\lambda_{\max}(R_5) + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}\lambda_{\max}(R_6) + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}\lambda_{\max}(R_7)\\
&\quad + k_1\lambda_{\max}(S_1) + k_3\lambda_{\max}(S_3) + k_6\lambda_{\max}(S_6) + \tau_2e^{\beta_0\tau_2}\lambda_{\max}(Z_1).
\end{aligned}$$
By (32)–(35), we have

$$\begin{aligned}
E\big\{|x(t)|^2 + |y(t)|^2\big\} &= E\Big\{\Big|C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s + x(t) - C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big|^2 + \Big|E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s + y(t) - E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big|^2\Big\}\\
&\le 2E\Big\{\Big|C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big|^2\Big\} + 2E\Big\{\Big|x(t)-C_i\!\int_{t-s_1(t)}^{t}\!x(s)\mathrm{d}s\Big|^2\Big\} + 2E\Big\{\Big|E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big|^2\Big\} + 2E\Big\{\Big|y(t)-E_i\!\int_{t-s_2(t)}^{t}\!y(s)\mathrm{d}s\Big|^2\Big\}\\
&\le e^{-\beta_0 t}\Big[\frac{2s_1\lambda_{\max}(C_i^2)}{\lambda_{\min}(Q_3)}e^{\beta_0 s_1} + \frac{2}{\min\{\lambda_{\min}(P_{1i}),\lambda_{\min}(P_{2i})\}}\Big]\Theta_1\sup_{-\delta\le\theta\le 0}E|\phi(\theta)|^2\\
&\quad + e^{-\beta_0 t}\Big[\frac{2s_2\lambda_{\max}(E_i^2)}{\lambda_{\min}(R_3)}e^{\beta_0 s_2} + \frac{2}{\min\{\lambda_{\min}(P_{1i}),\lambda_{\min}(P_{2i})\}}\Big]\Theta_2\sup_{-\tau\le\theta\le 0}E|\psi(\theta)|^2.
\end{aligned}\tag{36}$$
Therefore, by Definition 2.3 we see that the trivial solution of Eq. (2) is globally exponentially stable in the mean square. This completes
the proof of Theorem 3.1. □
Remark 3.2. In particular, suppose $s_1(t) \equiv s_1$, $s_2(t) \equiv s_2$ and $0 \le \tau_1 \le \tau(t) \le \tau_2$, $\dot\tau(t) \le \tau_\mu$, $0 \le \delta_1 \le \delta(t) \le \delta_2$, $\dot\delta(t) \le \delta_\mu$, where $s_1$, $s_2$, $\tau_1$, $\tau_2$, $\tau_\mu$, $\delta_1$, $\delta_2$, $\delta_\mu$ are known constants. Then system (2) reduces to the following delayed BAM neural network with Markovian jumping parameters and leakage delays:

$$
\left\{\begin{aligned}
\dot x(t) &= -C(r(t))x(t-s_1) + A(r(t))f^{(1)}(y(t)) + B(r(t))f^{(2)}(y(t-\tau(t))), && t \neq t_k,\\
\Delta x(t_k) &= x(t_k) - x(t_k^-) = -D_k(r(t))\Big\{x(t_k^-) - C(r(t))\int_{t_k-s_1}^{t_k}x(s)\,\mathrm{d}s\Big\}, && k\in\mathbb{Z}_+,\\
\dot y(t) &= -E(r(t))y(t-s_2) + F(r(t))g^{(1)}(x(t)) + G(r(t))g^{(2)}(x(t-\delta(t))), && t \neq t_k,\\
\Delta y(t_k) &= y(t_k) - y(t_k^-) = -H_k(r(t))\Big\{y(t_k^-) - E(r(t))\int_{t_k-s_2}^{t_k}y(s)\,\mathrm{d}s\Big\}, && k\in\mathbb{Z}_+,
\end{aligned}\right.\tag{37}
$$

where $C(r(t))$, $A(r(t))$, $B(r(t))$, $F(r(t))$, $G(r(t))$, $D_k(r(t))$ and $H_k(r(t))$ are written as $C_i$, $A_i$, $B_i$, $F_i$, $G_i$, $D_{ik}$ and $H_{ik}$, respectively.
Corollary 3.3. Let $\beta_0$ be a fixed positive constant and assume that Assumptions 2.1 and 2.2 hold. Then the equilibrium point of Eq. (37) is globally exponentially stable in the mean square if there exist positive definite matrices $X_1$, $X_2$, $Q_1,\dots,Q_7$, $R_1,\dots,R_7$, $S_1,\dots,S_6$, $Z_1$, $Z_2$, $P_{1i}$, $P_{2i}$ ($i\in S$), matrices $\begin{pmatrix} Q_{11}&Q_{12}\\ *&Q_{22}\end{pmatrix}>0$, $\begin{pmatrix} R_{11}&R_{12}\\ *&R_{22}\end{pmatrix}>0$, positive diagonal matrices $M_1$, $M_2$, $M_3$, $M_4$, and matrices $Y_1$, $Y_2$, $N_1$, $N_2$, $N_3$, $O_1$, $O_2$, $O_3$ of appropriate dimensions such that the following LMIs hold:

$$\Pi_i = (\Lambda_{i,j})_{22\times 22} < 0,\tag{38}$$

$$\begin{bmatrix}P_{1i} & (I-D_{ik})^TP_{1l}\\ * & P_{1l}\end{bmatrix} \ge 0 \quad[\text{here } r(t_k)=l],\tag{39}\qquad \begin{bmatrix}P_{2i} & (I-H_{ik})^TP_{2l}\\ * & P_{2l}\end{bmatrix} \ge 0 \quad[\text{here } r(t_k)=l],\tag{40}$$

$$\begin{bmatrix}X_1 & Y_1\\ * & Z_1\end{bmatrix} \ge 0,\tag{41}\qquad \begin{bmatrix}X_2 & Y_2\\ * & Z_2\end{bmatrix} \ge 0,\tag{42}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Lambda_{1,1} &= \beta_0P_{1i} + \textstyle\sum_{j=1}^{N}\delta_{ij}P_{1j} - P_{1i}C_i - C_i^TP_{1i} + \tau_2X_1 + Q_3 + R_1 + R_2 + R_{11} + s_1^2e^{\beta_0 s_1}Q_4 - Q_5 - (\delta_2-\delta_1)Q_6\\
&\quad - 4\delta_2^2e^{\beta_0\delta_2}S_2 - 4(\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}S_4 - 4s_1^2e^{\beta_0 s_1}S_5 + V_1M_3V_1 + (N_1+N_1^T),\\
\Lambda_{1,2} &= Q_5,\quad \Lambda_{1,4} = (\delta_2-\delta_1)Q_6,\quad \Lambda_{1,5} = -N_1+N_2^T,\quad \Lambda_{1,6} = R_{12},\quad \Lambda_{1,8} = 4\delta_2e^{\beta_0\delta_2}S_2,\quad \Lambda_{1,9} = 4(\delta_2-\delta_1)e^{\beta_0(\delta_2-\delta_1)}S_4,\\
\Lambda_{1,10} &= -\beta_0P_{1i}C_i - \textstyle\sum_{j=1}^{N}\delta_{ij}P_{1j}C_j + C_i^TP_{1i}C_i + 4s_1e^{\beta_0 s_1}S_5,\quad \Lambda_{1,11} = Y_1+Y_2^T,\quad \Lambda_{1,15} = -Y_1,\quad \Lambda_{1,16} = P_{1i}A_i,\quad \Lambda_{1,17} = P_{1i}B_i,\quad \Lambda_{1,21} = -N_1+N_3^T,\\
\Lambda_{2,2} &= -Q_5 - e^{-\beta_0 s_1}Q_3 + s_1^2e^{\beta_0 s_1}C_i^TQ_5C_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6C_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7C_i - k_2C_i^TS_2C_i - k_4C_i^TS_4C_i - k_5C_i^TS_5C_i + \delta_2e^{\beta_0\delta_2}C_i^TZ_2C_i,\\
\Lambda_{2,16} &= -s_1^2e^{\beta_0 s_1}C_i^TQ_5A_i - \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6A_i - (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7A_i + k_2C_i^TS_2A_i + k_4C_i^TS_4A_i + k_5C_i^TS_5A_i - \delta_2e^{\beta_0\delta_2}C_i^TZ_2A_i,\\
\Lambda_{2,17} &= -s_1^2e^{\beta_0 s_1}C_i^TQ_5B_i - \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}C_i^TQ_6B_i - (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}C_i^TQ_7B_i + k_2C_i^TS_2B_i + k_4C_i^TS_4B_i + k_5C_i^TS_5B_i - \delta_2e^{\beta_0\delta_2}C_i^TZ_2B_i,\\
\Lambda_{3,3} &= -e^{-\beta_0\delta_1}R_1 - e^{-\beta_0\delta_1}Q_7,\quad \Lambda_{3,4} = e^{-\beta_0\delta_1}Q_7,\quad \Lambda_{4,4} = -e^{-\beta_0\delta_2}R_2 - e^{-\beta_0\delta_1}Q_7 - (\delta_2-\delta_1)Q_6,\\
\Lambda_{5,5} &= -e^{-\beta_0\delta_2}R_{11} + V_2M_4V_2 - (N_2+N_2^T),\quad \Lambda_{5,7} = -e^{-\beta_0\delta_2}R_{12},\quad \Lambda_{5,11} = -Y_2^T,\quad \Lambda_{5,21} = -N_2-N_3^T,\\
\Lambda_{6,6} &= R_{22} + s_2^2e^{\beta_0 s_2}F_i^TR_5F_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6F_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7F_i - k_1F_i^TS_1F_i - k_3F_i^TS_3F_i - k_6F_i^TS_6F_i + \tau_2e^{\beta_0\tau_2}F_i^TZ_1F_i - M_3,\\
\Lambda_{6,7} &= s_2^2e^{\beta_0 s_2}F_i^TR_5G_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6G_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7G_i - k_1F_i^TS_1G_i - k_3F_i^TS_3G_i - k_6F_i^TS_6G_i + \tau_2e^{\beta_0\tau_2}F_i^TZ_1G_i,\\
\Lambda_{6,11} &= F_i^TP_{2i},\quad \Lambda_{6,12} = -s_2^2e^{\beta_0 s_2}F_i^TR_5E_i - \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}F_i^TR_6E_i - (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}F_i^TR_7E_i + k_1F_i^TS_1E_i + k_3F_i^TS_3E_i + k_6F_i^TS_6E_i - \tau_2e^{\beta_0\tau_2}F_i^TZ_1E_i,\\
\Lambda_{6,20} &= -F_i^TP_{2i}E_i,\quad \Lambda_{7,7} = -e^{-\beta_0\delta_2}R_{22} + s_2^2e^{\beta_0 s_2}G_i^TR_5G_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}G_i^TR_6G_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}G_i^TR_7G_i\\
&\quad - k_1G_i^TS_1G_i - k_3G_i^TS_3G_i - k_6G_i^TS_6G_i + \tau_2e^{\beta_0\tau_2}G_i^TZ_1G_i - M_4,\quad \Lambda_{7,11} = G_i^TP_{2i},\\
\Lambda_{7,12} &= -s_2^2e^{\beta_0 s_2}G_i^TR_5E_i - \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}G_i^TR_6E_i - (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}G_i^TR_7E_i + k_1G_i^TS_1E_i + k_3G_i^TS_3E_i + k_6G_i^TS_6E_i - \tau_2e^{\beta_0\tau_2}G_i^TZ_1E_i,\\
\Lambda_{7,20} &= -G_i^TP_{2i}E_i,\quad \Lambda_{8,8} = -4e^{\beta_0\delta_2}S_2,\quad \Lambda_{9,9} = -4e^{\beta_0(\delta_2-\delta_1)}S_4,\quad \Lambda_{10,10} = \beta_0C_i^TP_{1i}C_i + \textstyle\sum_{j=1}^{N}\delta_{ij}C_j^TP_{1j}C_j - Q_4 - 4e^{\beta_0 s_1}S_5,\\
\Lambda_{10,16} &= -C_i^TP_{1i}A_i,\quad \Lambda_{10,17} = -C_i^TP_{1i}B_i,\\
\Lambda_{11,11} &= \beta_0P_{2i} + \textstyle\sum_{j=1}^{N}\delta_{ij}P_{2j} - P_{2i}E_i - E_i^TP_{2i} + \delta_2X_2 + Q_1 + Q_2 + R_3 + Q_{11} + s_2^2e^{\beta_0 s_2}R_4 - R_5 - (\tau_2-\tau_1)R_6\\
&\quad - 4\tau_2^2e^{\beta_0\tau_2}S_1 - 4(\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}S_3 - 4s_2^2e^{\beta_0 s_2}S_6 + U_1M_1U_1 + (O_1+O_1^T),\\
\Lambda_{11,12} &= R_5,\quad \Lambda_{11,14} = (\tau_2-\tau_1)R_6,\quad \Lambda_{11,15} = -O_1+O_2^T,\quad \Lambda_{11,16} = Q_{12},\quad \Lambda_{11,18} = 4\tau_2e^{\beta_0\tau_2}S_1,\quad \Lambda_{11,19} = 4(\tau_2-\tau_1)e^{\beta_0(\tau_2-\tau_1)}S_3,\\
\Lambda_{11,20} &= -\beta_0P_{2i}E_i - \textstyle\sum_{j=1}^{N}\delta_{ij}P_{2j}E_j + E_i^TP_{2i}E_i + 4s_2e^{\beta_0 s_2}S_6,\quad \Lambda_{11,22} = -O_1+O_3^T,\\
\Lambda_{12,12} &= -R_5 - e^{-\beta_0 s_2}R_3 + s_2^2e^{\beta_0 s_2}E_i^TR_5E_i + \tau_2^2(\tau_2-\tau_1)e^{\beta_0\tau_2}E_i^TR_6E_i + (\tau_2-\tau_1)^2e^{\beta_0(\tau_2-\tau_1)}E_i^TR_7E_i - k_1E_i^TS_1E_i - k_3E_i^TS_3E_i - k_6E_i^TS_6E_i + \tau_2e^{\beta_0\tau_2}E_i^TZ_1E_i,\\
\Lambda_{13,13} &= -e^{-\beta_0\tau_1}Q_1 - e^{-\beta_0\tau_1}R_7,\quad \Lambda_{13,14} = e^{-\beta_0\tau_1}R_7,\quad \Lambda_{14,14} = -e^{-\beta_0\tau_2}Q_2 - e^{-\beta_0\tau_1}R_7 - (\tau_2-\tau_1)R_6,\\
\Lambda_{15,15} &= -e^{-\beta_0\tau_2}Q_{11} + U_2M_2U_2 - (O_2+O_2^T),\quad \Lambda_{15,17} = -e^{-\beta_0\tau_2}Q_{12},\quad \Lambda_{15,22} = -O_2-O_3^T,\\
\Lambda_{16,16} &= Q_{22} + s_1^2e^{\beta_0 s_1}A_i^TQ_5A_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}A_i^TQ_6A_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}A_i^TQ_7A_i - k_2A_i^TS_2A_i - k_4A_i^TS_4A_i - k_5A_i^TS_5A_i + \delta_2e^{\beta_0\delta_2}A_i^TZ_2A_i - M_1,\\
\Lambda_{16,17} &= s_1^2e^{\beta_0 s_1}A_i^TQ_5B_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}A_i^TQ_6B_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}A_i^TQ_7B_i - k_2A_i^TS_2B_i - k_4A_i^TS_4B_i - k_5A_i^TS_5B_i + \delta_2e^{\beta_0\delta_2}A_i^TZ_2B_i,\\
\Lambda_{17,17} &= -e^{-\beta_0\tau_2}Q_{22} + s_1^2e^{\beta_0 s_1}B_i^TQ_5B_i + \delta_2^2(\delta_2-\delta_1)e^{\beta_0\delta_2}B_i^TQ_6B_i + (\delta_2-\delta_1)^2e^{\beta_0(\delta_2-\delta_1)}B_i^TQ_7B_i - k_2B_i^TS_2B_i - k_4B_i^TS_4B_i - k_5B_i^TS_5B_i + \delta_2e^{\beta_0\delta_2}B_i^TZ_2B_i - M_2,\\
\Lambda_{18,18} &= -4e^{\beta_0\tau_2}S_1,\quad \Lambda_{19,19} = -4e^{\beta_0(\tau_2-\tau_1)}S_3,\quad \Lambda_{20,20} = \beta_0E_i^TP_{2i}E_i + \textstyle\sum_{j=1}^{N}\delta_{ij}E_j^TP_{2j}E_j - 4e^{\beta_0 s_2}S_6 - R_4,\\
\Lambda_{21,21} &= -(N_3+N_3^T),\quad \Lambda_{22,22} = -(O_3+O_3^T);
\end{aligned}$$

the constants $k_1,\dots,k_6$ and the matrices $U_j = \mathrm{diag}(\tilde u_{j1},\dots,\tilde u_{jn})$, $V_j = \mathrm{diag}(\tilde v_{j1},\dots,\tilde v_{jn})$ with $\tilde u_{ji} = \max\{|\tilde u_{ji}^-|,|\tilde u_{ji}^+|\}$, $\tilde v_{ji} = \max\{|\tilde v_{ji}^-|,|\tilde v_{ji}^+|\}$ are defined as in Theorem 3.1, and the remaining terms are zero.
Remark 3.4. In this paper, Assumption 2.1 is weaker than those given in [24,25,39,40], since the constants $\tilde u_{ij}^-$, $\tilde u_{ij}^+$, $\tilde v_{ij}^-$, $\tilde v_{ij}^+$ ($j = 1,2,\dots,n$) in Assumption 2.1 are allowed to be positive, negative, or zero. Thus, the neuron activation functions in this paper may be nonmonotonic and are more general than sigmoid activation functions with Lipschitz conditions, which makes it possible to obtain less conservative results.
Remark 3.5. In [8,9,19,39], the authors discussed the stability issue of BAM delayed neural networks with/without impulse control. But in
this paper, we have studied the mean square exponential stability of Markovian jump BAM neural networks with leakage time-varying
delays and impulse control. Moreover, different from the previous literature, our results are derived by constructing a new Lyapunov–
Krasovskii functional with triple integral terms. In addition, we introduce some free weighting matrices in Theorem 3.1 to obtain feasible LMIs; these LMIs constitute sufficient conditions for the stability of the proposed Markovian jump BAM neural networks.
Remark 3.6. Recently, some authors have discussed the stability problem for BAM neural networks by introducing suitable Lyapunov–Krasovskii functionals and LMI techniques. Various methods such as the delay partitioning approach, the delay decomposition approach, and several inequality techniques were explored and developed in the literature [41–43], which improved the performance of delay-dependent stability results, and some of them [44] can be applied to time delay systems with unknown delays. However, those methods cannot be applied to systems with leakage time-varying delays due to the presence of the derivative terms $\dot s_i(t)$ in the systems. In this paper, we consider for the first time the leakage time-varying delays for a class of Markovian jump BAM neural networks by using the model transformation technique. Hence, the results presented in this paper are essentially new.
4. Numerical example

Example 4.1. Consider the two-dimensional Markovian jump neural network (2) with $t_0 = 0$, $t_k = t_{k-1} + 0.2k$, $k = 1,2,\dots$, $x(t) = (x_1(t), x_2(t))^T$, $y(t) = (y_1(t), y_2(t))^T$, where $r(t)$ is a right-continuous Markov chain taking values in $S = \{1,2\}$ with generator

$$\Gamma = \begin{bmatrix} -5 & 5\\ 10 & -10 \end{bmatrix}.$$

Take $\tau(t) = 0.6\cos t + 0.7$, $\delta(t) = 0.6\sin t + 0.7$, $s_1(t) = 0.1\cos t + 0.1$, $s_2(t) = 0.1\sin t + 0.1$, and

$$f_i^{(j)}(x) = g_i^{(j)}(x) = \begin{cases} 0.04\sin x, & x \ge 0,\\ 0.07x, & x < 0,\end{cases} \qquad (i = 1,2;\ j = 1,2).$$

Obviously, Assumptions 2.1 and 2.2 hold.
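To see why (a verification of our own, not spelled out in the paper), note that the difference quotients of $f_i^{(j)}$ lie in $[-0.04, 0.07]$: on $[0,\infty)$ the mean value theorem gives $0.04\cos\xi \in [-0.04, 0.04]$; on $(-\infty,0)$ the quotient equals $0.07$; and for $\gamma_2 < 0 \le \gamma_1$,

$$-0.04 \le \frac{0.04\sin\gamma_1 - 0.07\gamma_2}{\gamma_1-\gamma_2} \le 0.07,$$

using $-\gamma_1 \le \sin\gamma_1 \le \gamma_1$. Hence Assumption 2.1 holds with $\tilde u_{ij}^- = \tilde v_{ij}^- = -0.04$ and $\tilde u_{ij}^+ = \tilde v_{ij}^+ = 0.07$, and the delay bounds of Assumption 2.2 follow as in the worked instance given after that assumption.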
We now choose the parameters of (2) as follows:

$$A_1 = \begin{bmatrix} 0.6 & 1.2\\ 1.2 & 1.5\end{bmatrix},\quad B_1 = \begin{bmatrix} 0.5 & 0.4\\ 0.5 & 0.3\end{bmatrix},\quad A_2 = \begin{bmatrix} 0.7 & 1.4\\ 1.4 & 0.5\end{bmatrix},\quad B_2 = \begin{bmatrix} 0.4 & 1.3\\ 0.5 & 0.4\end{bmatrix},$$

$$F_1 = \begin{bmatrix} 0.6 & 1.2\\ 0.5 & 1.5\end{bmatrix},\quad G_1 = \begin{bmatrix} 0.4 & 1.2\\ 1.3 & 0.4\end{bmatrix},\quad F_2 = \begin{bmatrix} 1.4 & 1.2\\ 0.3 & 0.3\end{bmatrix},\quad G_2 = \begin{bmatrix} 0.5 & 0.4\\ 1.2 & 1.3\end{bmatrix},$$

$$C_1 = \begin{bmatrix} 1.2 & 0\\ 0 & 1.2\end{bmatrix},\quad D_1 = \begin{bmatrix} 0.2 & 0\\ 0 & 0.2\end{bmatrix},\quad C_2 = \begin{bmatrix} 1.1 & 0\\ 0 & 1.1\end{bmatrix},\quad D_2 = \begin{bmatrix} 0.3 & 0\\ 0 & 0.3\end{bmatrix},$$

$$E_1 = \begin{bmatrix} 1.2 & 0\\ 0 & 1.2\end{bmatrix},\quad H_1 = \begin{bmatrix} 0.2 & 0\\ 0 & 0.2\end{bmatrix},\quad E_2 = \begin{bmatrix} 1.3 & 0\\ 0 & 1.3\end{bmatrix},\quad H_2 = \begin{bmatrix} 0.3 & 0\\ 0 & 0.3\end{bmatrix}.$$
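The stability claim can also be probed numerically. Below is a rough simulation sketch of our own (fixed-step Euler with the state reset applied at each impulse instant $t_k$, reusing `sample_markov_chain` from the earlier sketch; the step size, initial data, and matrix sign conventions are assumptions, not code from the paper):

```python
import numpy as np

def act(z):
    # Activation of Example 4.1: 0.04*sin(z) for z >= 0, 0.07*z otherwise.
    return np.where(z >= 0.0, 0.04 * np.sin(z), 0.07 * z)

def simulate(C, A, B, E, F, G, D, H, t_jump, r_path, T=5.0, h=1e-3):
    tau = lambda t: 0.6 * np.cos(t) + 0.7
    dlt = lambda t: 0.6 * np.sin(t) + 0.7
    s1f = lambda t: 0.1 * np.cos(t) + 0.1
    s2f = lambda t: 0.1 * np.sin(t) + 0.1
    t_imp = np.cumsum(0.2 * np.arange(1, 20))      # impulse instants t_k
    x, y = np.array([1.0, -2.0]), np.array([2.0, -1.0])
    hx, hy = [x.copy()], [y.copy()]                # crude delay buffers
    for n in range(int(T / h)):
        t = n * h
        i = r_path[np.searchsorted(t_jump, t, side="right") - 1]
        lag = lambda buf, d: buf[max(0, len(buf) - 1 - int(d / h))]
        dx = -C[i] @ lag(hx, s1f(t)) + A[i] @ act(y) + B[i] @ act(lag(hy, tau(t)))
        dy = -E[i] @ lag(hy, s2f(t)) + F[i] @ act(x) + G[i] @ act(lag(hx, dlt(t)))
        x, y = x + h * dx, y + h * dy
        if np.any(np.abs(t_imp - t) < h / 2):      # apply the state reset
            k1 = max(int(s1f(t) / h), 1)
            k2 = max(int(s2f(t) / h), 1)
            ix = np.trapz(np.array(hx[-k1 - 1:]), dx=h, axis=0)
            iy = np.trapz(np.array(hy[-k2 - 1:]), dx=h, axis=0)
            x = x - D[i] @ (x - C[i] @ ix)
            y = y - H[i] @ (y - E[i] @ iy)
        hx.append(x.copy()), hy.append(y.copy())
    return np.array(hx), np.array(hy)
```

Here the per-mode matrices are passed as two-element lists, e.g. `C = [np.diag([1.2, 1.2]), np.diag([1.1, 1.1])]`; the leakage integrals in the impulse terms are approximated by the trapezoidal rule over the delay buffers.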
Taking $\beta_0 = 0.2$, and using the MATLAB LMI toolbox, we obtain the following feasible solution of the LMIs (3)–(7):

$$P_{11} = \begin{bmatrix} 0.0019 & 0.0006\\ 0.0006 & 0.0018\end{bmatrix},\quad P_{12} = \begin{bmatrix} 0.0019 & 0.0006\\ 0.0006 & 0.0017\end{bmatrix},\quad P_{21} = \begin{bmatrix} 0.0079 & 0.0026\\ 0.0026 & 0.0072\end{bmatrix},\quad P_{22} = \begin{bmatrix} 0.0075 & 0.0022\\ 0.0022 & 0.0067\end{bmatrix},$$

$$X_1 = 10^{-4}\begin{bmatrix} 0.5756 & 0.1844\\ 0.1844 & 0.5566\end{bmatrix},\quad X_2 = 10^{-3}\begin{bmatrix} 0.1461 & 0.0770\\ 0.0770 & 0.1284\end{bmatrix},\quad Q_1 = 10^{-3}\begin{bmatrix} 0.5098 & 0.2587\\ 0.2587 & 0.4506\end{bmatrix},\quad Q_2 = 10^{-3}\begin{bmatrix} 0.4858 & 0.2490\\ 0.2490 & 0.4292\end{bmatrix},$$

$$Q_3 = \begin{bmatrix} 0.0013 & 0.0003\\ 0.0003 & 0.0012\end{bmatrix},\quad Q_4 = \begin{bmatrix} 0.0099 & 0.0023\\ 0.0023 & 0.0090\end{bmatrix},\quad Q_5 = \begin{bmatrix} 0.0015 & 0.0002\\ 0.0002 & 0.0014\end{bmatrix},\quad Q_6 = 10^{-4}\begin{bmatrix} 0.1658 & 0.0347\\ 0.0347 & 0.1481\end{bmatrix},$$

$$Q_7 = 10^{-4}\begin{bmatrix} 0.1999 & 0.0424\\ 0.0424 & 0.1798\end{bmatrix},\quad R_1 = 10^{-3}\begin{bmatrix} 0.2063 & 0.0661\\ 0.0661 & 0.2003\end{bmatrix},\quad R_2 = 10^{-3}\begin{bmatrix} 0.1885 & 0.0629\\ 0.0629 & 0.1840\end{bmatrix},\quad R_3 = \begin{bmatrix} 0.0032 & 0.0013\\ 0.0013 & 0.0028\end{bmatrix},$$

$$R_4 = \begin{bmatrix} 0.0364 & 0.0098\\ 0.0098 & 0.0343\end{bmatrix},\quad R_5 = \begin{bmatrix} 0.0018 & 0.0007\\ 0.0007 & 0.0017\end{bmatrix},\quad R_6 = 10^{-4}\begin{bmatrix} 0.2465 & 0.1402\\ 0.1402 & 0.2109\end{bmatrix},\quad R_7 = 10^{-4}\begin{bmatrix} 0.3041 & 0.1733\\ 0.1733 & 0.2603\end{bmatrix},$$

$$Q_{11} = 10^{-3}\begin{bmatrix} 0.2099 & 0.1085\\ 0.1085 & 0.1860\end{bmatrix},\quad Q_{12} = \begin{bmatrix} 0 & 0\\ 0 & 0\end{bmatrix},\quad Q_{22} = \begin{bmatrix} 0.0107 & 0.0002\\ 0.0002 & 0.0175\end{bmatrix},\quad R_{11} = 10^{-3}\begin{bmatrix} 0.2893 & 0.0682\\ 0.0682 & 0.2799\end{bmatrix},$$

$$R_{12} = \begin{bmatrix} 0 & 0\\ 0 & 0\end{bmatrix},\quad R_{22} = \begin{bmatrix} 0.0321 & 0.0067\\ 0.0067 & 0.0483\end{bmatrix},\quad S_1 = 10^{-5}\begin{bmatrix} 0.3592 & 0.2309\\ 0.2309 & 0.3028\end{bmatrix},\quad S_2 = 10^{-5}\begin{bmatrix} 0.1843 & 0.0549\\ 0.0549 & 0.1660\end{bmatrix},$$

$$S_3 = 10^{-4}\begin{bmatrix} 0.3417 & 0.1993\\ 0.1993 & 0.2923\end{bmatrix},\quad S_4 = 10^{-4}\begin{bmatrix} 0.2066 & 0.0515\\ 0.0515 & 0.1880\end{bmatrix},\quad S_5 = 10^{-3}\begin{bmatrix} 0.7324 & 0.1168\\ 0.1168 & 0.6910\end{bmatrix},\quad S_6 = 10^{-3}\begin{bmatrix} 0.9174 & 0.3554\\ 0.3554 & 0.8291\end{bmatrix},$$

$$Z_1 = 10^{-4}\begin{bmatrix} 0.3255 & 0.1833\\ 0.1833 & 0.2797\end{bmatrix},\quad Z_2 = 10^{-4}\begin{bmatrix} 0.1942 & 0.0450\\ 0.0450 & 0.1779\end{bmatrix},\quad T_1 = \begin{bmatrix} 0.0033 & 0.0009\\ 0.0009 & 0.0027\end{bmatrix},\quad T_2 = \begin{bmatrix} 0.0027 & 0.0007\\ 0.0007 & 0.0030\end{bmatrix},$$

$$W_1 = \begin{bmatrix} 0.0048 & 0.0008\\ 0.0008 & 0.0044\end{bmatrix},\quad W_2 = \begin{bmatrix} 0.0108 & 0.0032\\ 0.0032 & 0.0104\end{bmatrix},\quad M_1 = \begin{bmatrix} 0.0269 & 0\\ 0 & 0.0386\end{bmatrix},\quad M_2 = \begin{bmatrix} 0.0042 & 0\\ 0 & 0.0042\end{bmatrix},$$

$$M_3 = \begin{bmatrix} 0.0812 & 0\\ 0 & 0.0881\end{bmatrix},\quad M_4 = 10^{-3}\begin{bmatrix} 0.2844 & 0\\ 0 & 0.2755\end{bmatrix},\quad Y_1 = Y_2 = \begin{bmatrix} 0 & 0\\ 0 & 0\end{bmatrix},$$

$$N_1 = N_2 = \begin{bmatrix} 0.0011 & 0.0000\\ 0.0000 & 0.0011\end{bmatrix},\quad N_3 = \begin{bmatrix} 0.0012 & 0.0000\\ 0.0000 & 0.0012\end{bmatrix},\quad O_1 = 10^{-3}\begin{bmatrix} 0.6802 & 0.2806\\ 0.2810 & 0.5960\end{bmatrix},\quad O_2 = 10^{-3}\begin{bmatrix} 0.3920 & 0.1373\\ 0.1382 & 0.3423\end{bmatrix},\quad O_3 = 10^{-3}\begin{bmatrix} 0.7328 & 0.3082\\ 0.3083 & 0.6437\end{bmatrix}.$$

Hence, all of the conditions in Theorem 3.1 are satisfied, and so the network (2) is globally exponentially stable in the mean square.
The simulation results are as follows: Fig. 1 shows the state response of model 1 (i.e., the network (2) when $r(t) = 1$) with the initial condition $[x_1(s), x_2(s)]^T = [1, -2]^T$, $[y_1(s), y_2(s)]^T = [2, -1]^T$ for $-1.3 \le t \le 0$, and Fig. 2 shows the state response of model 2 (i.e., the network (2) when $r(t) = 2$) with the same initial condition.
Remark 4.2. As discussed in Remarks 3.4–3.6, the criteria existing in [8,9,19,24,25,39–44] fail in Example 4.1.
Fig. 1. The state response of model 1 in Example 1.

Fig. 2. The state response of model 2 in Example 1.
5. Conclusion
In this paper, we have investigated the stability analysis problem for a class of Markovian jump BAM neural networks with impulse
control and leakage time-varying delays. As we know, the stability of BAM neural networks has been widely studied by many authors, but
they have seldom considered the effects of impulse control, leakage time varying delays and Markovian jump parameters. Moreover, a
new type of Lyapunov–Krasovskii functional with triple integral terms, combined with a model transformation technique, has also been discussed in detail. In addition, the stability criteria depend on the upper bounds of the leakage time-varying delays, and our results are presented in terms of LMIs, which can be efficiently solved via a standard numerical package. Finally, an example has been given to illustrate the usefulness of the obtained results.
References
[1] B. Kosko, Bidirectional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1988) 49–60.
[2] B. Kosko, Adaptive bidirectional associative memories, Appl. Opt. 26 (1987) 4947–4960.
[3] B. Kosko, Neural Networks and Fuzzy Systems—A Dynamical Systems Approach to Machine Intelligence, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[4] Y. Wang, L. Xie, C.E. de Souza, Robust control for a class of uncertain nonlinear systems, Syst. Control Lett. 19 (1992) 1339–1353.
[5] J.H. Park, O.M. Kwon, On improved delay-dependent criterion for global stability of bidirectional associative memory neural networks with time-varying delays, Appl. Math. Comput. 199 (2008) 435–446.
[6] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 17 (2006) 816–820.
[7] Q. Zhu, C. Huang, X. Yang, Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays, Nonlinear Anal. Hybrid Syst. 5 (2011) 52–77.
[8] H. Zhao, N. Ding, Dynamic analysis of stochastic bidirectional associative memory neural networks with delays, Chaos Solitons Fractals 32 (2007) 1692–1702.
[9] J.H. Park, Robust stability of bidirectional associative memory neural networks with time delays, Phys. Lett. A 349 (2006) 494–499.
[10] V. Lakshmikantham, D. Bainov, P. Simeonov, Theory of Impulsive Differential Equations, World Scientific, Singapore, 1989.
[11] Q. Wang, X. Liu, Exponential stability for impulsive delay differential equations by Razumikhin method, J. Math. Anal. Appl. 309 (2005) 462–473.
[12] Y. Zhang, J. Sun, Stability of impulsive infinite delay differential equations, Appl. Math. Lett. 19 (2006) 1100–1106.
[13] Y. Ji, H.J. Chizeck, Controllability, stabilizability and continuous-time Markovian jump linear quadratic control, IEEE Trans. Autom. Control 35 (1990) 777–788.
[14] H. Zhang, M. Dong, Y. Wang, N. Sun, Stochastic stability analysis of neutral-type impulsive neural networks with mixed time-varying delays and Markovian jumping, Neurocomputing 73 (2010) 2689–2695.
[15] G. Wang, J. Cao, J. Liang, Exponential stability in the mean square for stochastic neural networks with mixed time-delays and Markovian jumping parameters, Nonlinear Dyn. 57 (2009) 209–218.
[16] J. Yu, G. Sun, Robust stabilization of stochastic Markovian jumping dynamical networks with mixed delays, Neurocomputing 86 (2012) 107–115.
[17] Z. Wu, J.H. Park, H. Su, J. Chu, Passivity analysis of Markov jump neural networks with mixed time-delays and piecewise-constant transition rates, Nonlinear Anal. Real World Appl. 13 (2012) 2423–2431.
[18] H. Bao, J. Cao, Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters, Commun. Nonlinear Sci. Numer. Simul. 16 (2011) 3786–3791.
[19] Q. Zhu, J. Cao, Stability analysis of Markovian jump stochastic BAM neural networks with impulsive control and mixed time delays, IEEE Trans. Neural Netw. Learn. Syst. 23 (2012) 467–479.
[20] Y. Liu, Z. Wang, X. Liu, On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching, Nonlinear Dyn. 54 (2008) 199–212.
[21] Q. Zhu, J. Cao, Stability of Markovian jump neural networks with impulse control and time varying delays, Nonlinear Anal. Real World Appl. 13 (2012) 2259–2270.
[22] Z. Wang, Y. Liu, L. Yu, X. Liu, Exponential stability of delayed recurrent neural networks with Markovian jumping parameters, Phys. Lett. A 356 (2006) 346–352.
[23] R. Rakkiyappan, P. Balasubramaniam, Dynamic analysis of Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays, Nonlinear Anal. Hybrid Syst. 3 (2009) 408–417.
[24] Y. Liu, Z. Wang, X. Liu, On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching, Nonlinear Dyn. 54 (2008) 199–212.
[25] R. Rakkiyappan, P. Balasubramaniam, Dynamic analysis of Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays, Nonlinear Anal. Hybrid Syst. 3 (2009) 408–417.
[26] Q. Zhu, J. Cao, Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 21 (2010) 1314–1325.
[27] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst. Man Cybern. B 41 (2011) 341–353.
[28] X. Li, R. Rakkiyappan, Stability results for Takagi–Sugeno fuzzy uncertain BAM neural networks with time delays in the leakage term, Neural Comput. Appl. 22 (2013) 203–219.
[29] S. Lakshmanan, Ju H. Park, Tae H. Lee, H.Y. Jung, R. Rakkiyappan, Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays, Appl. Math. Comput. 219 (2013) 9408–9423.
[30] K. Gopalsamy, Leakage delays in BAM, J. Math. Anal. Appl. 325 (2007) 1117–1132.
[31] C. Li, T. Huang, On the stability of nonlinear systems with leakage delay, J. Frankl. Inst. 346 (2009) 366–377.
[32] P. Balasubramaniam, M. Kalpana, R. Rakkiyappan, Global asymptotic stability of BAM fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays, Math. Comput. Model. 53 (2011) 839–853.
[33] P. Peng, Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms, Nonlinear Anal. Real World Appl. 11 (2010) 2141–2151.
[34] X. Li, X. Fu, P. Balasubramaniam, R. Rakkiyappan, Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations, Nonlinear Anal. Real World Appl. 11 (2010) 4092–4108.
[35] P. Balasubramaniam, V. Vembarasan, Existence, asymptotic stability of BAM neural networks of neutral-type with impulsive effects and time delay in the leakage term, Int. J. Comput. Math. 88 (2011) 3271–3291.
[36] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[37] Y.S. Moon, P. Park, W.H. Kwon, Y.S. Lee, Delay-dependent robust stabilization of uncertain state-delayed systems, Int. J. Control 74 (2001) 1447–1455.
[38] K. Gu, V. Kharitonov, J. Chen, Stability of Time-Delay Systems, Birkhäuser, Boston, MA, 2003.
[39] C. Li, W. Hu, S. Wu, Stochastic stability of impulsive BAM neural networks with time delays, Comput. Math. Appl. 16 (2011) 2313–2316.
[40] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, Phys. Lett. A 387 (2008) 3314–3326.
[41] W. Han, Y. Kao, L. Wang, Global exponential robust stability of static interval neural networks with S-type distributed delays, J. Frankl. Inst. 348 (2011) 2072–2081.
[42] J. Park, O. Kwon, Further results on state estimation for neural networks of neutral-type with time-varying delay, Appl. Math. Comput. 208 (2009) 69–75.
[43] Q. Song, J. Cao, Global robust stability of interval neural networks with multiple time-varying delays, Math. Comput. Simul. 74 (2007) 38–46.
[44] Q. Song, Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach, Neurocomputing 71 (2008) 2823–2830.
Quanxin Zhu received the Ph.D. degree in probability and statistics from Sun Yat-sen (Zhongshan) University, Guangzhou, China. From July 2005 to May 2009 he was with South China Normal University, and from May 2009 to August 2012 with Ningbo University. He is currently a professor at Nanjing Normal University. Prof. Zhu is an associate editor of the Transnational Journal of Mathematical Analysis and Applications and a reviewer for Mathematical Reviews and Zentralblatt-Math. He is a reviewer for more than 40 other journals and is the author or coauthor of more than 50 journal papers. His research interests include random processes, stochastic control, stochastic differential equations, stochastic partial differential equations, stochastic stability, nonlinear systems, Markovian jump systems and stochastic complex networks.
R. Rakkiyappan received his undergraduate degree in Mathematics (1999–2002) from Sri Ramakrishna Mission Vidyalaya College of Arts and Science, and his postgraduate degree in Mathematics (2002–2004) from PSG College of Arts and Science, affiliated to Bharathiar University, Coimbatore, Tamil Nadu, India. He was awarded the Doctor of Philosophy in 2011 by the Department of Mathematics, Gandhigram Rural University, Gandhigram, Tamil Nadu, India. His research interests are in the qualitative theory of stochastic and impulsive systems, neural networks, complex systems and fractional-order systems. He has published more than 50 papers in international journals. He is currently working as an assistant professor in the Department of Mathematics, Bharathiar University, Coimbatore.
A. Chandrasekar was born in 1989. He received the B.Sc. degree in Mathematics from Thiruvalluvar Government Arts College, affiliated to Periyar University, Salem, in 2009, and the M.Sc. degree in Mathematics from Bharathiar University, Coimbatore, Tamil Nadu, India, in 2011. He received the M.Phil. degree from the Department of Mathematics, Bharathiar University, Coimbatore, Tamil Nadu, India, in 2012. He is currently pursuing the Ph.D. degree in Mathematics at Bharathiar University, Coimbatore, Tamil Nadu, India. His research interests include neural networks, and stochastic and impulsive systems.