Computer Science and Application
Vol. 12, No. 12 (2022), Article ID: 59104, 9 pages
DOI: 10.12677/CSA.2022.1212279


Stability Analysis for Neural Networks with Time-Varying Delays Based on Lyapunov-Krasovskii Functional Approach

Xinying Liu1,2

1School of Computer Science and Technology, Tiangong University, Tianjin

2Tianjin Key Laboratory of Autonomous Intelligence Technology and Systems, Tianjin

Received: Nov. 9th, 2022; accepted: Dec. 8th, 2022; published: Dec. 15th, 2022

ABSTRACT

This paper is concerned with the problem of global asymptotic stability for a class of neural networks with time-varying delays. The Lyapunov stability theory provides a powerful tool for analyzing the robust performance of neural networks with time-varying delays. Based on this theory, first, by introducing some delay-product-type terms and relaxation matrices, a new augmented LKF is constructed, which contains more information on time-varying delay and system states, so that the correlation between all kinds of information is closer, the admissible delay upper bounds is increased, and the conservatism of the stability criteria of the system is reduced. Second, the quadratic integral term appearing after the derivative of LKF is scaled appropriately to reduce the conservatism of the stability criterion. A new integral inequality is used to estimate the quadratic integral term in the derivative of LKF, and a new criterion of global asymptotic stability of neural networks with time-varying delays is established. Finally, numerical examples are employed to illustrate the effectiveness of the proposed method.

Keywords: Lyapunov-Krasovskii Functional, Admissible Delay Upper Bounds, Global Asymptotic Stability, Neural Networks

Copyright © 2022 by author(s) and Hans Publishers Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

1. Introduction

Neural networks have been widely applied in many scientific fields, such as signal and image processing, intelligent control, information processing, and optimization problems [1] [2]. The practical applications of neural networks depend largely on their desirable dynamic properties [3]. However, in the implementation of (large-scale) neural networks, time delays are unavoidable because of the finite switching speed of amplifiers and the finite communication speed between neurons; such delays may degrade the system performance and even cause chaotic behavior. It is therefore of great significance to determine the admissible delay upper bound for the global asymptotic stability of neural networks. Over the past two decades, the stability analysis of delayed neural networks has been a hot research topic, and a large number of stability results have been reported; see [4] - [9].

Lyapunov stability theory provides a powerful tool for analyzing the robust performance of neural networks with time-varying delays. Within this framework, the key points related to system stability depend on two aspects: one is to construct an appropriate LKF (Lyapunov-Krasovskii Functional, LKF); the other is to bound the quadratic integral terms in the derivative of the LKF as tightly as possible. So far, many researchers have focused on the construction of the LKF, and many different types have been proposed: augmented LKFs [10], delay-partitioning LKFs [11], multiple-integral LKFs [12], delay-product-type LKFs [13], and so on. By introducing delay-product terms, a new augmented LKF can be constructed that contains more information on the delayed states and the integral states, which is important for obtaining less conservative results. For handling the quadratic integral terms, the most commonly used techniques are integral inequality methods and the free-weighting-matrix approach. Commonly used integral inequalities include the Wirtinger-based (WB) inequality, the free-matrix-based (FMB) inequality, the auxiliary-function-based (AFB) inequality, and the Bessel-Legendre (BL) inequality.

Based on the above analysis, this paper studies the stability problem for a class of generalized neural networks with time-varying delays. First, by introducing some delay-product terms, a new augmented LKF is constructed that contains more delay-related information and system-state information. Second, a new integral inequality is used to estimate the quadratic integral terms in the derivative of the LKF, and a new stability criterion for neural networks with time-varying delays is established. Finally, numerical examples illustrate the effectiveness of the proposed method.

2. Preliminaries

Consider the following continuous-time neural network with a time-varying delay (with the equilibrium point shifted to the origin):

$\dot{x}(t) = -Ax(t) + W_0 g(W_2 x(t)) + W_1 g(W_2 x(t-d(t)))$ (1)

where $x(t) = \mathrm{col}\{x_1(t), x_2(t), \ldots, x_n(t)\}$ is the neuron state vector; $g(\cdot) = \mathrm{col}\{g_1(\cdot), g_2(\cdot), \ldots, g_n(\cdot)\}$ denotes the neuron activation functions with $g(0) = 0$; $A = \mathrm{diag}\{a_1, a_2, \ldots, a_n\}$ is a diagonal matrix with $a_i > 0$; and $W_2$, $W_0$ and $W_1$ are the connection weight matrices, where $W_1$ acts on the delayed state.

The following assumptions are adopted:

Assumption 1: The time-varying delay $d(t)$ is differentiable and satisfies:

$0 \le d(t) \le h, \quad \mu_m \le \dot{d}(t) \le \mu_M$ (2)

where $h$, $\mu_m$ and $\mu_M$ are real constants.

Assumption 2: Each neuron activation function $g_i(\cdot)$ is bounded and satisfies:

$l_i^- \le \dfrac{g_i(n) - g_i(m)}{n - m} \le l_i^+, \quad \forall n \ne m, \ (i = 1, 2, \ldots, n)$ (3)

$l_i^- \le \dfrac{g_i(s)}{s} \le l_i^+, \quad \forall s \ne 0, \ (i = 1, 2, \ldots, n)$ (4)

where $l_i^-$ and $l_i^+$ are real constants.
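The sector condition (3) can be checked numerically for a concrete activation. A minimal sketch, assuming $g_i(s) = \tanh(s)$ (a standard activation that satisfies (3)-(4) with $l_i^- = 0$, $l_i^+ = 1$); the sample pairs are illustrative:

```python
import math

def sector_ratio(g, n, m):
    """Secant slope (g(n) - g(m)) / (n - m) appearing in condition (3)."""
    return (g(n) - g(m)) / (n - m)

# tanh satisfies (3)-(4) with l- = 0 and l+ = 1
l_minus, l_plus = 0.0, 1.0
pairs = [(-3.0, 1.5), (0.2, 2.7), (-0.4, -0.1), (5.0, -5.0)]
ratios = [sector_ratio(math.tanh, n, m) for n, m in pairs]
ok = all(l_minus <= r <= l_plus for r in ratios)
```

Any activation whose secant slopes stay inside $[l_i^-, l_i^+]$ for all pairs of arguments qualifies; saturation-type functions are handled the same way.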

The aim of this paper is to analyze the global asymptotic stability of the delayed neural network (1). To this end, we first introduce the following lemmas:

Lemma 1 [14]: For a continuously differentiable function $x: [\alpha, \beta] \to \mathbb{R}^n$ and a positive definite matrix $R \in \mathbb{R}^{n \times n}$, the following inequality holds:

$\int_\alpha^\beta \dot{x}^T(s) R \dot{x}(s)\, ds \ge \dfrac{1}{\beta - \alpha} \chi_1^T R \chi_1 + \dfrac{3}{\beta - \alpha} \chi_2^T R \chi_2$

where $\chi_1 = x(\beta) - x(\alpha)$, $\chi_2 = x(\beta) + x(\alpha) - \dfrac{2}{\beta - \alpha} \displaystyle\int_\alpha^\beta x(s)\, ds$.
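Lemma 1 can be sanity-checked numerically. A scalar sketch assuming $n = 1$, $R = 1$, and the illustrative test function $x(s) = \sin s$ on $[\alpha, \beta] = [0, 1]$, using trapezoidal quadrature:

```python
import math

alpha, beta, R = 0.0, 1.0, 1.0
N = 100000
step = (beta - alpha) / N
s = [alpha + i * step for i in range(N + 1)]
x = [math.sin(v) for v in s]      # test function x(s) = sin(s)
xdot = [math.cos(v) for v in s]   # its derivative

def trapz(y, dx):
    """Trapezoidal quadrature of samples y with uniform spacing dx."""
    return dx * (sum(y) - 0.5 * (y[0] + y[-1]))

lhs = trapz([R * v * v for v in xdot], step)   # integral of xdot^T R xdot
chi1 = x[-1] - x[0]                            # x(beta) - x(alpha)
chi2 = x[-1] + x[0] - 2.0 / (beta - alpha) * trapz(x, step)
rhs = (R * chi1 ** 2 + 3.0 * R * chi2 ** 2) / (beta - alpha)
```

For this particular function the bound is nearly tight (roughly 0.7263 against 0.7273), which illustrates why the Wirtinger-based inequality is much less conservative than the plain Jensen bound.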

Lemma 2 [5]: For a continuously differentiable function $x: [\alpha, \beta] \to \mathbb{R}^n$, an integer $k \in \mathbb{N}$, a positive definite matrix $R \in \mathbb{R}^{n \times n}$, any vector $\xi \in \mathbb{R}^{kn}$, and any matrices $N_i \in \mathbb{R}^{kn \times n}$ $(i = 1, 2)$, the following inequality holds:

$-\displaystyle\int_\alpha^\beta \dot{x}^T(s) R \dot{x}(s)\, ds \le \xi^T \Big[ (\beta - \alpha) \big( N_1 R^{-1} N_1^T + \tfrac{1}{3} N_2 R^{-1} N_2^T \big) + \mathrm{Sym}(N_1 E_1 + N_2 E_2) \Big] \xi$

where $E_1 \xi = x(\beta) - x(\alpha)$, $E_2 \xi = x(\beta) + x(\alpha) - \dfrac{2}{\beta - \alpha} \displaystyle\int_\alpha^\beta x(s)\, ds$.
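Lemma 2 holds for any choice of the free matrices $N_1$, $N_2$ (the bound is tight only at one optimal choice). A scalar numerical sketch with the same illustrative $x(s) = \sin s$, $k = 3$, $\xi = \mathrm{col}\{x(\beta), x(\alpha), \frac{1}{\beta-\alpha}\int x\}$, and arbitrarily fixed $N_1, N_2 \in \mathbb{R}^{3}$:

```python
import math

alpha, beta, R = 0.0, 1.0, 1.0
L = beta - alpha
N = 100000
step = L / N
s = [alpha + i * step for i in range(N + 1)]
x = [math.sin(v) for v in s]
xdot = [math.cos(v) for v in s]

def trapz(y, dx):
    return dx * (sum(y) - 0.5 * (y[0] + y[-1]))

# xi = col{x(beta), x(alpha), (1/L) * integral of x}; E1, E2 as in the lemma
xi = [x[-1], x[0], trapz(x, step) / L]
E1 = [1.0, -1.0, 0.0]    # E1 @ xi = x(beta) - x(alpha)
E2 = [1.0, 1.0, -2.0]    # E2 @ xi = x(beta) + x(alpha) - (2/L) * integral of x
N1 = [0.3, -1.1, 0.7]    # arbitrary free "matrices" (vectors in the scalar case)
N2 = [-0.5, 0.2, 0.9]

dot = lambda a, b: sum(u * v for u, v in zip(a, b))
lhs = -trapz([R * v * v for v in xdot], step)          # minus the integral term
rhs = (L * (dot(xi, N1) ** 2 / R + dot(xi, N2) ** 2 / (3.0 * R))
       + 2.0 * dot(xi, N1) * dot(E1, xi) + 2.0 * dot(xi, N2) * dot(E2, xi))
```

Completing the square in $N_1$, $N_2$ shows that the minimum of the right-hand side over the free matrices recovers exactly the Wirtinger-based bound of Lemma 1; the freedom in $N_1$, $N_2$ is what allows the delay-dependent terms to enter the LMIs affinely.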

3. Main Results

This section studies the stability problem of neural networks with time-varying delays. First, a new LKF is introduced, which contains more delay information, delay-product information and other system information; then Lemma 2 is applied to estimate the quadratic integral terms in the derivative of the LKF, which leads to a less conservative stability criterion.

For brevity, the following notation is defined:

$h_d(t) = h - d(t), \quad v_1(t) = \dfrac{1}{d(t)} \displaystyle\int_{t-d(t)}^{t} x(s)\, ds, \quad v_2(t) = \dfrac{1}{h_d(t)} \displaystyle\int_{t-h}^{t-d(t)} x(s)\, ds$

$\xi(t) = \mathrm{col}\{x(t), x(t-d(t)), x(t-h), g(W_2 x(t)), g(W_2 x(t-d(t))), g(W_2 x(t-h)), v_1(t), v_2(t), \dot{x}(t-d(t)), \dot{x}(t-h)\}$ (5)

First, a delay-product-type LKF is introduced as follows:

$V_{MU}(x_t, t) = V_M(x_t, t) + V_U(x_t, t)$ (6)

where:

$V_M(x_t, t) = d(t) \displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) M_1 \dot{x}(s)\, ds + h_d(t) \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) M_2 \dot{x}(s)\, ds$

$V_U(x_t, t) = -\dfrac{d(t)}{h} \zeta_1^T(t) U_1 \zeta_1(t) - \dfrac{h_d(t)}{h} \zeta_2^T(t) U_2 \zeta_2(t)$

where $\zeta_1(t) = \mathrm{col}\{x(t), x(t-d(t)), v_1(t)\}$, $\zeta_2(t) = \mathrm{col}\{x(t-d(t)), x(t-h), v_2(t)\}$, and $M_1$, $M_2$, $U_1$, $U_2$ are positive definite matrices with appropriate dimensions.

If we choose

$U_i = \begin{bmatrix} 4M_i & 2M_i & -6M_i \\ * & 4M_i & -6M_i \\ * & * & 12M_i \end{bmatrix}, \quad i = 1, 2$

then, by Lemma 1, the following relations hold:

$\displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) M_1 \dot{x}(s)\, ds \ge \dfrac{1}{d(t)} \zeta_1^T(t) U_1 \zeta_1(t) \ge \dfrac{1}{h} \zeta_1^T(t) U_1 \zeta_1(t)$

$\displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) M_2 \dot{x}(s)\, ds \ge \dfrac{1}{h_d(t)} \zeta_2^T(t) U_2 \zeta_2(t) \ge \dfrac{1}{h} \zeta_2^T(t) U_2 \zeta_2(t)$

and hence the LKF (6) is positive definite.
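The specific block structure of $U_i$ is exactly what makes $\zeta^T U_i \zeta$ coincide with $\chi_1^T M_i \chi_1 + 3\chi_2^T M_i \chi_2$ from Lemma 1, with $\zeta = \mathrm{col}\{x(\beta), x(\alpha), v\}$, $\chi_1 = x(\beta) - x(\alpha)$ and $\chi_2 = x(\beta) + x(\alpha) - 2v$. A scalar check ($M_i = 1$; the sample triples are illustrative):

```python
# Scalar sketch (n = 1, M = 1): zeta = col{x_b, x_a, v}, v = (1/len)*integral of x
M = 1.0
U = [[4 * M, 2 * M, -6 * M],
     [2 * M, 4 * M, -6 * M],
     [-6 * M, -6 * M, 12 * M]]

def quad(A, z):
    """Quadratic form z^T A z."""
    return sum(A[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

checks = []
for xb, xa, v in [(1.0, -0.5, 0.25), (0.3, 0.7, -1.2), (2.0, 2.0, 2.0)]:
    chi1 = xb - xa
    chi2 = xb + xa - 2.0 * v
    lhs = quad(U, [xb, xa, v])
    rhs = M * chi1 ** 2 + 3.0 * M * chi2 ** 2
    checks.append(abs(lhs - rhs) < 1e-12)
```

Expanding $\chi_1^T M \chi_1 + 3\chi_2^T M \chi_2$ term by term reproduces the $4M$, $2M$, $-6M$, $12M$ blocks, so positivity of $V_{MU}$ follows directly from Lemma 1.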

The following theorem provides a stability criterion for the delayed neural network (1).

Theorem 1: For given scalars $h$, $\mu_m$ and $\mu_M$, under Assumptions 1 and 2, system (1) is globally asymptotically stable if there exist positive definite matrices $P \in \mathbb{R}^{3n \times 3n}$, $Q_1, Q_2 \in \mathbb{R}^{3n \times 3n}$, $R, M_1, M_2 \in \mathbb{R}^{n \times n}$, any matrices $N_1, N_2 \in \mathbb{R}^{10n \times n}$, diagonal matrices $\Lambda_j = \mathrm{diag}\{\lambda_{j1}, \lambda_{j2}, \ldots, \lambda_{jn}\} \ge 0$ and $\Delta_j = \mathrm{diag}\{\delta_{j1}, \delta_{j2}, \ldots, \delta_{jn}\} \ge 0$, and diagonal matrices $H_i = \mathrm{diag}\{\hbar_{i1}, \hbar_{i2}, \ldots, \hbar_{in}\} \in \mathbb{R}^{n \times n}$ $(i = 1, 2,\ j = 1, 2, 3)$, such that the following linear matrix inequalities hold for $\dot{d}(t) \in \{\mu_m, \mu_M\}$:

$R_1(\dot{d}(t)) := R - \dot{d}(t) M_1 > 0, \quad R_2(\dot{d}(t)) := R + \dot{d}(t) M_2 > 0$ (7)

$\begin{bmatrix} \Phi(0, \dot{d}(t)) & hN_1 & hN_2 \\ * & -hR_2(\dot{d}(t)) & 0 \\ * & * & -3hR_2(\dot{d}(t)) \end{bmatrix} < 0$ (8)

$\begin{bmatrix} \Phi(h, \dot{d}(t)) & hN_1 & hN_2 \\ * & -hR_1(\dot{d}(t)) & 0 \\ * & * & -3hR_1(\dot{d}(t)) \end{bmatrix} < 0$ (9)

where:

$\Phi(d(t), \dot{d}(t)) = \Phi_1 + \Phi_2 + \Phi_3 + \Phi_4 + \Phi_5 + \Phi_6 + \Phi_7$

$\Phi_1 = \mathrm{Sym}\{\Pi_1^T P \Pi_2\}$, $\quad \Phi_2 = \mathrm{Sym}\{e_4^T (H_1 - H_2) W_2 e_s + e_1^T W_2^T (K_2 H_2 - K_1 H_1) W_2 e_s\}$

$\Phi_3 = \Pi_3^T Q_1 \Pi_3 - \Pi_5^T Q_2 \Pi_5 - (1 - \dot{d}(t)) \Pi_4^T (Q_1 - Q_2) \Pi_4$, $\quad \Phi_4 = h e_s^T R e_s$

$\Phi_5 = d(t) \big( e_s^T M_1 e_s - (1 - \dot{d}(t)) e_9^T M_1 e_9 \big) + h_d(t) \big( (1 - \dot{d}(t)) e_9^T M_2 e_9 - e_{10}^T M_2 e_{10} \big) - \dfrac{\dot{d}(t)}{h} \big( \Pi_6^T U_1 \Pi_6 - \Pi_7^T U_2 \Pi_7 \big) - \mathrm{Sym}\Big\{ \dfrac{1}{h} \Pi_6^T U_1 \Pi_8 + \dfrac{1}{h} \Pi_7^T U_2 \Pi_9 \Big\}$

$\Phi_6 = \mathrm{Sym}(N_1 E_1 + N_2 E_2) + \mathrm{Sym}(N_1 E_3 + N_2 E_4)$

$\Phi_7 = \displaystyle\sum_{i=1}^{3} \mathrm{Sym}\{(e_{i+3} - K_1 W_2 e_i)^T \Lambda_i (K_2 W_2 e_i - e_{i+3})\} + \displaystyle\sum_{i=1}^{2} \mathrm{Sym}\{[(e_{i+3} - e_{i+4}) - K_1 W_2 (e_i - e_{i+1})]^T \Delta_i [K_2 W_2 (e_i - e_{i+1}) - (e_{i+3} - e_{i+4})]\} + \mathrm{Sym}\{[(e_4 - e_6) - K_1 W_2 (e_1 - e_3)]^T \Delta_3 [K_2 W_2 (e_1 - e_3) - (e_4 - e_6)]\}$

$\Pi_1 = \mathrm{col}\{e_1, e_2, e_3\}$, $\quad \Pi_2 = \mathrm{col}\{e_s, (1 - \dot{d}(t)) e_9, e_{10}\}$, $\quad \Pi_3 = \mathrm{col}\{e_1, e_4, e_s\}$

$\Pi_4 = \mathrm{col}\{e_2, e_5, e_9\}$, $\quad \Pi_5 = \mathrm{col}\{e_3, e_6, e_{10}\}$, $\quad \Pi_6 = \mathrm{col}\{e_1, e_2, e_7\}$, $\quad \Pi_7 = \mathrm{col}\{e_2, e_3, e_8\}$

$\Pi_8 = \mathrm{col}\{d(t) e_s, d(t)(1 - \dot{d}(t)) e_9, e_1 - (1 - \dot{d}(t)) e_2 - \dot{d}(t) e_7\}$

$\Pi_9 = \mathrm{col}\{h_d(t)(1 - \dot{d}(t)) e_9, h_d(t) e_{10}, (1 - \dot{d}(t)) e_2 - e_3 + \dot{d}(t) e_8\}$

$E_1 = e_1 - e_2$, $\quad E_2 = e_1 + e_2 - 2e_7$, $\quad E_3 = e_2 - e_3$, $\quad E_4 = e_2 + e_3 - 2e_8$, $\quad e_s = -A e_1 + W_0 e_4 + W_1 e_5$

$e_i = \big[\, 0_{n \times (i-1)n} \;\; I_n \;\; 0_{n \times (10-i)n} \,\big], \quad i = 1, 2, \ldots, 10$

$K_1 = \mathrm{diag}\{l_1^-, l_2^-, \ldots, l_n^-\}$, $\quad K_2 = \mathrm{diag}\{l_1^+, l_2^+, \ldots, l_n^+\}$

Proof:

Construct the following Lyapunov-Krasovskii functional:

$V(t) = \displaystyle\sum_{i=1}^{5} V_i(t)$ (10)

where:

$V_1(t) = \eta_1^T(t) P \eta_1(t)$

$V_2(t) = 2 \displaystyle\sum_{i=1}^{n} \int_0^{W_{2i} x(t)} \big[ \hbar_{1i} (g_i(s) - l_i^- s) + \hbar_{2i} (l_i^+ s - g_i(s)) \big]\, ds$, where $W_{2i}$ denotes the $i$-th row of $W_2$

$V_3(t) = \displaystyle\int_{t-d(t)}^{t} \eta_2^T(s) Q_1 \eta_2(s)\, ds + \displaystyle\int_{t-h}^{t-d(t)} \eta_2^T(s) Q_2 \eta_2(s)\, ds$

$V_4(t) = \displaystyle\int_{-h}^{0} \int_{t+\theta}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds\, d\theta$

$V_5(t) = d(t) \displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) M_1 \dot{x}(s)\, ds + h_d(t) \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) M_2 \dot{x}(s)\, ds - \dfrac{d(t)}{h} \zeta_1^T(t) U_1 \zeta_1(t) - \dfrac{h_d(t)}{h} \zeta_2^T(t) U_2 \zeta_2(t)$

where $\eta_1(t) = \mathrm{col}\{x(t), x(t-d(t)), x(t-h)\}$, $\eta_2(s) = \mathrm{col}\{x(s), g(W_2 x(s)), \dot{x}(s)\}$,

and $\zeta_1(t)$, $\zeta_2(t)$, $U_1$ and $U_2$ are as defined in (6).

Differentiating the LKF (10) along the trajectories of the delayed neural network (1) yields:

$\dot{V}_1(t) = 2 \eta_1^T(t) P \dot{\eta}_1(t) = 2 \xi^T(t) \Pi_1^T P \Pi_2 \xi(t)$ (11)

$\dot{V}_2(t) = 2 g^T(W_2 x(t)) (H_1 - H_2) W_2 \dot{x}(t) + 2 x^T(t) W_2^T (K_2 H_2 - K_1 H_1) W_2 \dot{x}(t) = \xi^T(t) \Phi_2 \xi(t)$ (12)

$\dot{V}_3(t) = \eta_2^T(t) Q_1 \eta_2(t) - \eta_2^T(t-h) Q_2 \eta_2(t-h) - (1 - \dot{d}(t)) \eta_2^T(t-d(t)) (Q_1 - Q_2) \eta_2(t-d(t)) = \xi^T(t) \big[ \Pi_3^T Q_1 \Pi_3 - \Pi_5^T Q_2 \Pi_5 - (1 - \dot{d}(t)) \Pi_4^T (Q_1 - Q_2) \Pi_4 \big] \xi(t)$ (13)

$\dot{V}_4(t) = h \dot{x}^T(t) R \dot{x}(t) - \displaystyle\int_{t-h}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds = \xi^T(t) \Phi_4 \xi(t) - \displaystyle\int_{t-h}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds$ (14)

$\dot{V}_5(t) = \dot{d}(t) \displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) M_1 \dot{x}(s)\, ds - \dot{d}(t) \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) M_2 \dot{x}(s)\, ds + d(t) \dot{x}^T(t) M_1 \dot{x}(t) - h_d(t) \dot{x}^T(t-h) M_2 \dot{x}(t-h) - (1 - \dot{d}(t)) \dot{x}^T(t-d(t)) \big( d(t) M_1 - h_d(t) M_2 \big) \dot{x}(t-d(t)) - \dfrac{\dot{d}(t)}{h} \zeta_1^T(t) U_1 \zeta_1(t) - \dfrac{2 d(t)}{h} \zeta_1^T(t) U_1 \dot{\zeta}_1(t) + \dfrac{\dot{d}(t)}{h} \zeta_2^T(t) U_2 \zeta_2(t) - \dfrac{2 h_d(t)}{h} \zeta_2^T(t) U_2 \dot{\zeta}_2(t) = \xi^T(t) \Phi_5 \xi(t) + \dot{d}(t) \displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) M_1 \dot{x}(s)\, ds - \dot{d}(t) \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) M_2 \dot{x}(s)\, ds$ (15)

where $\xi(t)$ is defined in (5), and the remaining notation is defined in Theorem 1.

Next, the quadratic integral terms in (14) and (15) are handled. Note that:

$\displaystyle\int_{t-h}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds = \displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds + \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) R \dot{x}(s)\, ds$

Combining this identity with the quadratic integral terms in (15) gives:

$-\displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) \big( R - \dot{d}(t) M_1 \big) \dot{x}(s)\, ds - \displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) \big( R + \dot{d}(t) M_2 \big) \dot{x}(s)\, ds$

Since $M_1 > 0$ and $M_2 > 0$, and since $R_1(\dot{d}(t)) = R - \dot{d}(t) M_1$ and $R_2(\dot{d}(t)) = R + \dot{d}(t) M_2$ are affine in $\dot{d}(t)$, condition (7), checked at the vertices $\dot{d}(t) \in \{\mu_m, \mu_M\}$, guarantees $R_1(\dot{d}(t)) > 0$ and $R_2(\dot{d}(t)) > 0$ for all admissible $\dot{d}(t)$.

Applying Lemma 2 to the combined quadratic integral terms yields:

$-\displaystyle\int_{t-d(t)}^{t} \dot{x}^T(s) R_1(\dot{d}(t)) \dot{x}(s)\, ds \le \xi^T(t) \Big[ d(t) \big( N_1 R_1^{-1}(\dot{d}(t)) N_1^T + \tfrac{1}{3} N_2 R_1^{-1}(\dot{d}(t)) N_2^T \big) + \mathrm{Sym}(N_1 E_1 + N_2 E_2) \Big] \xi(t)$ (16)

$-\displaystyle\int_{t-h}^{t-d(t)} \dot{x}^T(s) R_2(\dot{d}(t)) \dot{x}(s)\, ds \le \xi^T(t) \Big[ h_d(t) \big( N_1 R_2^{-1}(\dot{d}(t)) N_1^T + \tfrac{1}{3} N_2 R_2^{-1}(\dot{d}(t)) N_2^T \big) + \mathrm{Sym}(N_1 E_3 + N_2 E_4) \Big] \xi(t)$ (17)

In view of conditions (3) and (4), the following inequalities hold for $i = 1, 2, 3$:

$\lambda_i(s) := 2 \big[ g(W_2 x(s)) - K_1 W_2 x(s) \big]^T \Lambda_i \big[ K_2 W_2 x(s) - g(W_2 x(s)) \big] \ge 0$

$\delta_i(s_1, s_2) := 2 \big[ g(W_2 x(s_1)) - g(W_2 x(s_2)) - K_1 W_2 (x(s_1) - x(s_2)) \big]^T \Delta_i \big[ K_2 W_2 (x(s_1) - x(s_2)) - g(W_2 x(s_1)) + g(W_2 x(s_2)) \big] \ge 0$

where $\Lambda_i = \mathrm{diag}\{\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{in}\} \ge 0$ and $\Delta_i = \mathrm{diag}\{\delta_{i1}, \delta_{i2}, \ldots, \delta_{in}\} \ge 0$ $(i = 1, 2, 3)$. Then, the following inequalities hold:

$\lambda_1(t) + \lambda_2(t - d(t)) + \lambda_3(t - h) \ge 0$

$\delta_1(t, t - d(t)) + \delta_2(t - d(t), t - h) + \delta_3(t, t - h) \ge 0$

Consequently,

$\xi^T(t) \Phi_7 \xi(t) \ge 0$ (18)

where $\Phi_7$ is defined in Theorem 1.

Finally, combining (11)~(18) gives:

$\dot{V}(t) \le \xi^T(t) \Big[ \Phi(d(t), \dot{d}(t)) + d(t) \big( N_1 R_1^{-1}(\dot{d}(t)) N_1^T + \tfrac{1}{3} N_2 R_1^{-1}(\dot{d}(t)) N_2^T \big) + h_d(t) \big( N_1 R_2^{-1}(\dot{d}(t)) N_1^T + \tfrac{1}{3} N_2 R_2^{-1}(\dot{d}(t)) N_2^T \big) \Big] \xi(t)$

where $\Phi(d(t), \dot{d}(t))$ is defined in Theorem 1. By the Schur complement, if the linear matrix inequalities (7), (8) and (9) are satisfied, then the right-hand side above is negative definite for all $d(t) \in [0, h]$ and $\dot{d}(t) \in [\mu_m, \mu_M]$ (the dependence on $d(t)$ is affine, so checking the vertices $d(t) \in \{0, h\}$ suffices). Hence there exists a constant $\varepsilon > 0$ such that $\dot{V}(t) \le -\varepsilon x^T(t) x(t) < 0$ for $x(t) \ne 0$. Therefore, the delayed neural network (1) subject to (2), (3) and (4) is globally asymptotically stable. This completes the proof.
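The final step uses the Schur complement: a symmetric block matrix $\begin{bmatrix} \Phi & B \\ B^T & -S \end{bmatrix}$ with $S > 0$ is negative definite if and only if $\Phi + B S^{-1} B^T < 0$. A minimal scalar illustration (the numeric values are purely illustrative):

```python
# Scalar Schur complement: [[phi, b], [b, -s]] < 0  <=>  s > 0 and phi + b*b/s < 0
def is_neg_def_2x2(a, b, c):
    """Negative definiteness of the symmetric matrix [[a, b], [b, c]]
    via the leading-principal-minor test: a < 0 and det > 0."""
    return a < 0 and a * c - b * b > 0

checks = []
for phi, b, s in [(-2.0, 1.0, 1.0), (-0.5, 1.0, 1.0), (-3.0, 1.2, 0.5)]:
    block_nd = is_neg_def_2x2(phi, b, -s)
    schur_nd = (s > 0) and (phi + b * b / s < 0)
    checks.append(block_nd == schur_nd)
```

This equivalence is what turns the nonlinear terms $N_i R_j^{-1} N_i^T$ in the bound on $\dot{V}(t)$ into the linear matrix inequalities (8) and (9), which standard SDP solvers can check.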

4. Numerical Examples

Example 1: Consider the delayed neural network (1) with the following parameters:

$A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$, $\quad W_0 = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}$, $\quad W_1 = \begin{bmatrix} 0.88 & 1 \\ 1 & 1 \end{bmatrix}$, $\quad W_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

$K_1 = \mathrm{diag}\{0, 0\}$, $\quad K_2 = \mathrm{diag}\{0.4, 0.8\}$
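As an illustrative side check (not part of the paper), system (1) with the Example 1 data can be simulated by a forward-Euler scheme. Assumptions of this sketch: $W_2 = I$, the activation $g_i(s) = l_i^+ \tanh(s)$ (which satisfies (3)-(4) with $K_1 = 0$, $K_2 = \mathrm{diag}\{0.4, 0.8\}$), a constant delay $d(t) = 1$ well inside the bounds reported in Table 1, and an arbitrary constant initial history:

```python
import math

# Example 1 data; the minus signs in W0 follow the standard benchmark form
A = [[2.0, 0.0], [0.0, 2.0]]
W0 = [[1.0, 1.0], [-1.0, -1.0]]
W1 = [[0.88, 1.0], [1.0, 1.0]]
l_plus = [0.4, 0.8]

def g(v):
    """Sector-bounded activation g_i(s) = l_i_plus * tanh(s)."""
    return [l_plus[i] * math.tanh(v[i]) for i in range(2)]

def matvec(M, v):
    return [M[r][0] * v[0] + M[r][1] * v[1] for r in range(2)]

dt, delay, T = 0.01, 1.0, 60.0
lag = int(round(delay / dt))
hist = [[0.5, -0.3] for _ in range(lag + 1)]  # constant history on [-delay, 0]
for _ in range(int(T / dt)):
    x, xd = hist[-1], hist[-1 - lag]          # x(t) and x(t - d)
    ax, f0, f1 = matvec(A, x), matvec(W0, g(x)), matvec(W1, g(xd))
    # Euler step of (1): x' = -A x + W0 g(x) + W1 g(x(t - d))
    hist.append([x[i] + dt * (-ax[i] + f0[i] + f1[i]) for i in range(2)])
final = hist[-1]
```

The trajectory decaying to the origin is consistent with the delay value lying inside the admissible bounds; such a simulation is only a plausibility check, not a stability proof.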

Table 1. The maximum admissible delay upper bounds h obtained for various values of μ for Example 1


This numerical example is widely used for comparison with existing results. Table 1 lists the maximum admissible delay upper bounds $h$ obtained by Corollary 2 of [14], Theorem 1 of [15], Theorem 1 of [9], Corollary 1 of [3], and Theorem 1 of this paper for $\mu_M = -\mu_m = \mu \in \{0.8, 0.9\}$. It is evident from Table 1 that Theorem 1 of this paper yields larger admissible delay upper bounds than the other results, which shows that the proposed theorem is effective and outperforms [3] [9] [14] [15].

Example 2: Consider the delayed neural network (1) with the following parameters:

$A = \begin{bmatrix} 1.5 & 0 \\ 0 & 0.7 \end{bmatrix}$, $\quad W_0 = \begin{bmatrix} 0.0503 & 0.0454 \\ 0.0987 & 0.2075 \end{bmatrix}$, $\quad W_1 = \begin{bmatrix} 0.2381 & 0.9320 \\ 0.0388 & 0.5062 \end{bmatrix}$, $\quad W_2 = \begin{bmatrix} 0.2381 & 0.9320 \\ 0.0388 & 0.5062 \end{bmatrix}$

$K_1 = \mathrm{diag}\{0, 0\}$, $\quad K_2 = \mathrm{diag}\{0.3, 0.8\}$

For $\mu_M = -\mu_m = \mu \in \{0.4, 0.45, 0.5\}$, the maximum admissible delay upper bounds obtained by Theorem 1 of [16], Corollary 1 of [17], and Theorem 1 of this paper are listed in Table 2. The table shows that Theorem 1 of this paper again yields larger admissible delay upper bounds.

Table 2. The maximum admissible delay upper bounds h obtained for various values of μ for Example 2


5. Conclusions

This paper has studied the stability problem for a class of generalized neural networks with time-varying delays. First, an appropriate augmented LKF was chosen, which contains more delay information and system-state information. Second, an orthogonal-polynomial-based integral inequality was used to estimate the quadratic integral terms in the derivative of the LKF, and a new criterion for the global asymptotic stability of delayed neural networks was established. Finally, numerical examples verified the effectiveness of the proposed results.

Cite this paper

Liu, X.Y. (2022) Stability Analysis for Neural Networks with Time-Varying Delays Based on Lyapunov-Krasovskii Functional Approach. Computer Science and Application, 12, 2754-2762. https://doi.org/10.12677/CSA.2022.1212279

References

1. Chua, L.O. and Yang, L. (1988) Cellular Neural Networks: Applications. IEEE Transactions on Circuits and Systems, 35, 1273-1290. https://doi.org/10.1109/31.7601

2. Xia, Y.S. and Wang, J. (2004) A General Projection Neural Network for Solving Monotone Variational Inequalities and Related Optimization Problems. IEEE Transactions on Neural Networks, 15, 318-328. https://doi.org/10.1109/TNN.2004.824252

3. Zhang, X.M., Han, Q.L. and Wang, J. (2018) Admissible Delay Upper Bounds for Global Asymptotic Stability of Neural Networks with Time-Varying Delays. IEEE Transactions on Neural Networks and Learning Systems, 29, 5319-5329. https://doi.org/10.1109/TNNLS.2018.2797279

4. Zhang, H.G., Wang, Z.S. and Liu, D.R. (2014) A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 25, 1229-1262. https://doi.org/10.1109/TNNLS.2014.2317880

5. Sun, L.K., Tang, Y.Q., Wang, W.R. and Shen, S.Q. (2020) Stability Analysis of Time-Varying Delay Neural Networks Based on New Integral Inequalities. Journal of the Franklin Institute, 357, 10828-10843. https://doi.org/10.1016/j.jfranklin.2020.08.017

6. Wang, B., Yan, J., Cheng, J. and Zhong, S.M. (2017) New Criteria of Stability Analysis for Generalized Neural Networks Subject to Time-Varying Delayed Signals. Applied Mathematics and Computation, 314, 322-333. https://doi.org/10.1016/j.amc.2017.06.031

7. Zhang, X.M., Han, Q.L. and Zeng, Z.G. (2017) Hierarchical Type Stability Criteria for Delayed Neural Networks via Canonical Bessel-Legendre Inequalities. IEEE Transactions on Cybernetics, 48, 1660-1671. https://doi.org/10.1109/TCYB.2017.2776283

8. Zhang, X.M. and Han, Q.L. (2011) Global Asymptotic Stability for a Class of Generalized Neural Networks with Interval Time-Varying Delays. IEEE Transactions on Neural Networks, 22, 1180-1192. https://doi.org/10.1109/TNN.2011.2147331

9. Wang, Z., Ding, S. and Zhang, H. (2017) Stability of Recurrent Neural Networks with Time-Varying Delay via Flexible Terminal Method. IEEE Transactions on Neural Networks and Learning Systems, 28, 2456-2463. https://doi.org/10.1109/TNNLS.2016.2578309

10. Ge, C., Hua, C.C. and Guan, X.P. (2013) New Delay-Dependent Stability Criteria for Neural Networks with Time-Varying Delay Using Delay-Decomposition Approach. IEEE Transactions on Neural Networks and Learning Systems, 25, 1378-1383. https://doi.org/10.1109/TNNLS.2013.2285564

11. He, Y., Ji, M.D., Zhang, C.K. and Wu, M. (2016) Global Exponential Stability of Neural Networks with Time-Varying Delay Based on Free-Matrix-Based Integral Inequality. Neural Networks, 77, 80-86. https://doi.org/10.1016/j.neunet.2016.02.002

12. Lian, H.H., Xiao, S.P., Yan, H.C., Yang, F.W. and Zeng, H.B. (2020) Dissipativity Analysis for Neural Networks with Time-Varying Delays via a Delay-Product-Type Lyapunov Functional Approach. IEEE Transactions on Neural Networks and Learning Systems, 32, 975-984. https://doi.org/10.1109/TNNLS.2020.2979778

13. Seuret, A. and Gouaisbaut, F. (2013) Wirtinger-Based Integral Inequality: Application to Time-Delay Systems. Automatica, 49, 2860-2866. https://doi.org/10.1016/j.automatica.2013.05.030

14. Zhang, X.M. and Han, Q.L. (2014) Global Asymptotic Stability Analysis for Delayed Neural Networks Using a Matrix-Based Quadratic Convex Approach. Neural Networks, 54, 57-69. https://doi.org/10.1016/j.neunet.2014.02.012

15. Kwon, O.M., Park, M.J., Lee, S.M., Park, J.H. and Cha, E.J. (2013) Stability for Neural Networks with Time-Varying Delays via Some New Approaches. IEEE Transactions on Neural Networks and Learning Systems, 24, 181-193. https://doi.org/10.1109/TNNLS.2012.2224883

16. Zhang, C.K., He, Y., Jiang, L. and Wu, M. (2016) Stability Analysis for Delayed Neural Networks Considering Both Conservativeness and Complexity. IEEE Transactions on Neural Networks and Learning Systems, 27, 1486-1501. https://doi.org/10.1109/TNNLS.2015.2449898

17. Zhang, C.K., He, Y., Jiang, L., Lin, W.J. and Wu, M. (2017) Delay-Dependent Stability Analysis of Neural Networks with Time-Varying Delay: A Generalized Free-Weighting-Matrix Approach. Applied Mathematics and Computation, 294, 102-120. https://doi.org/10.1016/j.amc.2016.08.043
