Article

Improved GNN method with finite-time convergence for time-varying Lyapunov equation

Journal

INFORMATION SCIENCES
Volume 611, Pages 494-503

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.08.061

Keywords

Time-varying Lyapunov equation; Finite-time convergence; Gradient neural network method; Bounded additive time-varying noise; Dynamic neural network

Funding

  1. Natural Science Foundation of Guangdong Province [2022A1515010976]
  2. Young Scholar Program of Pazhou Lab [PZL2021KF0022]
  3. Science and Technology Program of Guangzhou [202201010457]


This paper proposes an improved gradient neural network (IGNN) method that introduces additional nonlinearity to address the inaccurate solutions that traditional GNN models produce for time-varying Lyapunov equations (LEs). Simulation results demonstrate that the IGNN method achieves finite-time convergence even in the presence of bounded additive time-varying noise.
Dynamic neural networks are efficient for solving algebraic equations, and among them the gradient neural network (GNN) has the lowest model complexity. Conventional GNN models converge exponentially for static Lyapunov equations (LEs), but fail to find accurate solutions when the solution changes with time. To fill this gap, this paper proposes an improved GNN method that introduces additional nonlinearity into a traditional GNN model for handling time-varying LEs. It is shown that the proposed improved GNN (IGNN) method achieves finite-time convergence when applied to time-varying LEs, even in the presence of bounded additive time-varying noises. Simulation results demonstrate the efficacy and advantages of the proposed method over existing GNN methods and other dynamic neural network methods for solving time-varying LEs. (C) 2022 Elsevier Inc. All rights reserved.
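As a rough illustration of the class of methods described in the abstract, the sketch below sets up a gradient-type flow for a time-varying Lyapunov equation A(t)^T X(t) + X(t) A(t) + Q(t) = 0 and passes the gradient through a nonlinear activation. This is a minimal sketch, not the paper's model: the record does not give the authors' exact design formula, so the sign-bi-power activation, the gain gamma, the step size, and the helper names (lyapunov_residual, ignn_step) are illustrative assumptions.

```python
import numpy as np

def lyapunov_residual(A, X, Q):
    """Residual E = A^T X + X A + Q of the (time-varying) Lyapunov equation."""
    return A.T @ X + X @ A + Q

def sign_bi_power(E, p=0.5):
    """Element-wise sign-bi-power activation (an assumed choice; odd activations that are
    super-linear near zero are a standard route to finite-time convergence)."""
    return np.sign(E) * (np.abs(E) ** p + np.abs(E) ** (1.0 / p))

def ignn_step(X, A, Q, gamma=1.0, dt=1e-3):
    """One explicit-Euler step of a GNN-type flow with an extra nonlinear activation.

    The scalar energy is eps = 0.5 * ||E||_F^2 with E = A^T X + X A + Q, whose gradient
    w.r.t. X is A E + E A^T; a conventional GNN integrates X' = -gamma * (A E + E A^T),
    and the improved variant sketched here passes that gradient through an activation.
    """
    E = lyapunov_residual(A, X, Q)
    grad = A @ E + E @ A.T
    return X - dt * gamma * sign_bi_power(grad)

# Toy usage: track the solution of A(t)^T X + X A(t) + Q = 0 for a slowly varying A(t).
X = np.zeros((2, 2))
dt = 1e-3
for k in range(5000):
    t = k * dt
    A = np.array([[-2.0 - np.sin(t), 0.5 * np.cos(t)],
                  [0.5 * np.cos(t), -3.0 + np.sin(t)]])
    Q = np.eye(2)
    X = ignn_step(X, A, Q, gamma=1.0, dt=dt)

print(np.linalg.norm(lyapunov_residual(A, X, Q)))  # residual norm stays small while tracking
```

In continuous time this corresponds to X' = -gamma * Phi(A E + E A^T); choosing Phi as the identity recovers the conventional linear GNN, and the added nonlinearity is what the abstract credits with finite-time convergence and tolerance to bounded additive noise.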
