Article

A convergence-accelerated Zhang neural network and its solution application to Lyapunov equation

Journal

NEUROCOMPUTING
Volume 193, Pages 213-218

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2016.02.021

Keywords

Nonlinear activation function; Finite-time convergence; Lyapunov equation; Gradient neural network; Zhang neural network

Funding

  1. National Natural Science Foundation of China [61503152, 61563017, 61561022, 61363073, 61363033]
  2. Research Foundation of Education Bureau of Hunan Province, China [15B192, 15C1119]
  3. Research Foundation of Jishou University, China [2015SYJG034, JDLF2015013]

The Lyapunov equation is widely encountered in scientific and engineering fields, and is especially used in the control community to analyze the stability of a control system. In this paper, a convergence-accelerated Zhang neural network (CAZNN) is proposed and investigated for the online solution of the Lyapunov equation. Different from the conventional gradient neural network (GNN) and the original Zhang neural network (ZNN), the proposed CAZNN model adopts a sign-bi-power activation function, and thus achieves superior convergence performance. Furthermore, we prove that the CAZNN model converges to the theoretical solution of the Lyapunov equation within finite time, rather than merely exponentially in time. Simulation results also verify the effectiveness and superiority of the CAZNN model for solving the Lyapunov equation online, as compared with the GNN and ZNN models. (C) 2016 Elsevier B.V. All rights reserved.
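The abstract above can be illustrated with a minimal numerical sketch. A Zhang-type network for the Lyapunov equation A^T X + X A + Q = 0 drives the residual E(t) = A^T X + X A + Q to zero via the design formula dE/dt = -gamma * Phi(E), where Phi is the activation function; using the sign-bi-power activation named in the keywords yields the accelerated (finite-time) variant. The helper names, parameter values, and forward-Euler discretization below are illustrative assumptions, not the paper's exact simulation setup.

```python
import numpy as np

def sbp(E, r=0.5):
    # Sign-bi-power activation applied elementwise (0 < r < 1):
    # phi(e) = 0.5*|e|^r*sign(e) + 0.5*|e|^(1/r)*sign(e)
    return 0.5 * np.sign(E) * (np.abs(E) ** r + np.abs(E) ** (1.0 / r))

def caznn_lyapunov(A, Q, gamma=10.0, dt=1e-3, steps=5000, r=0.5):
    """Illustrative sketch of a Zhang-type network for A^T X + X A + Q = 0.

    The implicit dynamics A^T dX + dX A = -gamma * Phi(E) are solved for
    dX at each step via the Kronecker identity
    vec(A^T X + X A) = (I kron A^T + A^T kron I) vec(X).
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)  # coefficient matrix of vec(dX)
    Minv = np.linalg.inv(M)                # invertible when A is Hurwitz
    X = np.zeros((n, n))                   # arbitrary initial state
    for _ in range(steps):
        E = A.T @ X + X @ A + Q            # Lyapunov-equation residual
        dX = (Minv @ (-gamma * sbp(E, r)).reshape(-1)).reshape(n, n)
        X = X + dt * dX                    # forward-Euler integration
    return X

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])               # Hurwitz example matrix
Q = np.eye(2)
X = caznn_lyapunov(A, Q)
print(np.linalg.norm(A.T @ X + X @ A + Q))  # residual norm, near zero
```

Replacing `sbp` with the identity map recovers the original (exponentially convergent) ZNN, which makes the convergence comparison from the abstract easy to reproduce numerically.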

