Journal
NEURAL PROCESSING LETTERS
Volume 46, Issue 1, Pages 195-217
Publisher
SPRINGER
DOI: 10.1007/s11063-017-9581-y
Keywords
Recurrent neural network; Drazin inverse; Dynamic equation; Activation function
Funding
- Headmaster Foundation of Hexi University [XZ2014-18]
- University research funding projects in Gansu province [2014A-110]
- National Natural Science Foundation of China [11171371, 11461020, 11401143]
- Overseas Returnee Foundation of Heilongjiang Province [LC201402]
- Scientific Research Foundation of the Heilongjiang Province Education Department [12541232]
- Serbian Ministry of Science [174013]
Four gradient-based recurrent neural networks for computing the Drazin inverse of a square real matrix are developed. Theoretical analysis shows that any monotonically increasing odd activation function ensures global convergence of the defined neural network models. Computer simulations further substantiate that the considered neural networks compute the Drazin inverse accurately and effectively. Moreover, the presented neural networks converge faster with power-sigmoid activation functions than with linear ones.
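The abstract does not reproduce the four models, but the general idea of a gradient neural network (GNN) for the Drazin inverse can be sketched as follows. One common formulation (an assumption here, not necessarily the paper's exact models) drives the matrix-valued state X(t) down the gradient of the residual norm of A^{l+1} X = A^l, where l is at least the index of A, using a monotonically increasing odd activation such as the power-sigmoid. Starting from X(0) = 0, this flow converges to the minimum-norm least-squares solution, which for the normal test matrix below coincides with A^D; the result is cross-checked against the algebraic identity A^D = A^l (A^{2l+1})^+ A^l. The function names and parameter values are illustrative.

```python
import numpy as np

def power_sigmoid(x, p=3, xi=4.0):
    """Power-sigmoid activation: odd and monotonically increasing.

    Uses x^p for |x| >= 1 and a scaled bipolar sigmoid otherwise.
    """
    sig = ((1 + np.exp(-xi)) / (1 - np.exp(-xi))
           * (1 - np.exp(-xi * x)) / (1 + np.exp(-xi * x)))
    return np.where(np.abs(x) >= 1, x**p, sig)

def gnn_drazin(A, l, gamma=1.0, dt=0.005, steps=4000, act=power_sigmoid):
    """Euler-discretised GNN for the matrix equation A^{l+1} X = A^l.

    Continuous dynamics (an assumed, generic GNN form):
        dX/dt = -gamma * B^T * act(B X - C),  B = A^{l+1}, C = A^l.
    From X(0) = 0 this converges to the minimum-norm least-squares
    solution, which equals A^D for the normal matrix used below.
    """
    B = np.linalg.matrix_power(A, l + 1)
    C = np.linalg.matrix_power(A, l)
    X = np.zeros_like(A, dtype=float)
    for _ in range(steps):
        X -= dt * gamma * B.T @ act(B @ X - C)
    return X

# Singular symmetric matrix of index 1; A^2 = 2A, hence A^D = A / 4.
A = np.array([[1.0, 1.0], [1.0, 1.0]])
X = gnn_drazin(A, l=1)

# Cross-check with the identity A^D = A^l (A^{2l+1})^+ A^l  (l = 1).
A_D = A @ np.linalg.pinv(np.linalg.matrix_power(A, 3)) @ A
print(np.round(X, 4))                 # ≈ [[0.25 0.25] [0.25 0.25]]
print(np.allclose(X, A_D, atol=1e-4))
```

With a linear activation the same loop converges more slowly near the solution; the power-sigmoid's amplified slope around zero is the usual explanation for the superior convergence the abstract reports.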