Article

Convergence analysis for sparse Pi-sigma neural network model with entropy error function

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13042-023-01901-x

Keywords

Pi-sigma neural network; Entropy error function; L-0 Regularization; Convergence

Abstract
As a high-order neural network, the Pi-sigma neural network has demonstrated fast learning and strong nonlinear processing capacity. In this paper, a new algorithm is proposed for Pi-sigma neural networks with an entropy error function based on L-0 regularization. A key feature of the proposed algorithm is the use of an entropy error function instead of the more common squared error function, which distinguishes it from most existing work. The algorithm also employs L-0 regularization to keep the network sparse and efficient. Based on the gradient method, the monotonicity and the strong and weak convergence of the algorithm are rigorously proved by theoretical analysis and supported by experimental verification. Experiments applying the proposed algorithm to both classification and regression problems demonstrate its improved performance.
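The abstract describes training a Pi-sigma network (summing units feeding a product unit) by gradient descent on an entropy (cross-entropy) error. The sketch below illustrates that general setup for a single Pi-sigma unit with a sigmoid output; the function names, network size, and learning rate are illustrative assumptions, and the paper's L-0 regularization term is omitted here for simplicity, so this is not the authors' exact algorithm.

```python
import numpy as np

def pi_sigma_forward(x, W):
    """Forward pass of a single Pi-sigma unit.

    Sigma layer: K linear summing units h_k = W[k] @ x.
    Pi layer: product of the sums, passed through a sigmoid.
    W has shape (K, n) for K summing units and n inputs.
    """
    h = W @ x                          # summing units
    net = np.prod(h)                   # product unit
    return 1.0 / (1.0 + np.exp(-net)), h

def entropy_error(y, d):
    """Cross-entropy error for a scalar target d in {0, 1}."""
    eps = 1e-12                        # guard against log(0)
    return -(d * np.log(y + eps) + (1 - d) * np.log(1 - y + eps))

def train_step(x, d, W, lr=0.1):
    """One gradient-descent step on the entropy error.

    With a sigmoid output, dE/dnet simplifies to (y - d), and
    dnet/dW[k] = (product of the other sums) * x.
    """
    y, h = pi_sigma_forward(x, W)
    delta = y - d
    grad = np.empty_like(W)
    for k in range(W.shape[0]):
        others = np.prod(np.delete(h, k))   # product over j != k
        grad[k] = delta * others * x
    return W - lr * grad

# Tiny demo: a few gradient steps toward target d = 1.
x = np.array([1.0, 0.5])
W = np.array([[0.3, -0.2], [0.1, 0.4]])
for _ in range(50):
    W = train_step(x, 1.0, W)
y, _ = pi_sigma_forward(x, W)
```

One reason the entropy error pairs well with a sigmoid output, as the abstract suggests, is that the output-layer gradient reduces to the plain residual `y - d`, avoiding the vanishing `y(1 - y)` factor that the squared error would introduce.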

