Journal
OPTIMIZATION
Volume 72, Issue 9, Pages 2287-2309
Publisher
TAYLOR & FRANCIS LTD
DOI: 10.1080/02331934.2022.2057852
Keywords
Elman neural networks; L-2 regularization; gradient method; convergence
In this paper, we propose a novel variant of the gradient algorithm to improve the generalization performance of Elman neural networks (ENN). A weight decay term, also known as L-2 regularization, is added to the error function; it effectively controls excessive growth of the weights and thereby prevents the over-fitting phenomenon. The main contribution of this work is a rigorous theoretical analysis of the proposed approach: both weak and strong convergence results are obtained. Comparison experiments on function approximation problems and on classification of real-world data verify the theoretical results.
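The training scheme the abstract describes can be illustrated with a small sketch: a one-hidden-layer Elman network trained by a gradient step whose update includes the L-2 (weight decay) penalty, i.e. w ← w − η(∇E + λw). This is a minimal NumPy illustration, not the paper's implementation; the network sizes, the toy sine-prediction task, the one-step-truncated backpropagation, and the learning rate and penalty coefficient are all assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical minimal Elman network trained with gradient descent plus an
# L2 (weight decay) term; sizes, data and hyperparameters are illustrative.
rng = np.random.default_rng(0)

n_in, n_hid = 1, 8
Wx = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden weights
Wh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context (recurrent) weights
Wo = rng.normal(scale=0.5, size=(1, n_hid))      # hidden -> output weights

eta, lam = 0.05, 1e-3  # learning rate and L2 penalty coefficient (assumed)

# toy task: predict the next value of a sine sequence
seq = np.sin(np.linspace(0, 4 * np.pi, 60))

def run_epoch(update=True):
    """One pass over the sequence; returns the accumulated squared error."""
    global Wx, Wh, Wo
    h = np.zeros((n_hid, 1))  # context units start at zero
    total = 0.0
    for t in range(len(seq) - 1):
        x = np.array([[seq[t]]])
        target = seq[t + 1]
        h_new = np.tanh(Wx @ x + Wh @ h)   # Elman hidden/context update
        y = (Wo @ h_new).item()
        err = y - target
        total += 0.5 * err ** 2
        if update:
            # one-step-truncated gradients (previous h treated as constant)
            dWo = err * h_new.T
            dh = Wo.T * err * (1.0 - h_new ** 2)
            dWx = dh @ x.T
            dWh = dh @ h.T
            # weight-decay update: w <- w - eta * (grad + lam * w)
            Wo -= eta * (dWo + lam * Wo)
            Wx -= eta * (dWx + lam * Wx)
            Wh -= eta * (dWh + lam * Wh)
        h = h_new
    return total

loss_before = run_epoch(update=False)
for _ in range(50):
    run_epoch()
loss_after = run_epoch(update=False)
print(loss_before, loss_after)
```

The λw term in each update shrinks the weights toward zero at every step, which is the mechanism by which the decay penalty bounds weight growth; the paper's convergence analysis concerns this regularized update rule.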