Article

Convolutional neural networks based on fractional-order momentum for parameter training

Journal

NEUROCOMPUTING
Volume 449, Issue -, Pages 85-99

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.03.075

Keywords

Convolutional neural networks; Fractional-order difference; Momentum; MNIST; CIFAR-10

Funding

  1. Scientific Research Fund of Liaoning Provincial Education Department, China [LJC202010]
  2. Liaoning Revitalization Talents Program [XLYC1807229]
  3. Natural Science Foundation of Liaoning Province, China [20180520009]
  4. Liaoning University Science Research Fund [LDGY2019020]

Abstract

This paper proposes a parameter training method based on fractional-order momentum for convolutional neural networks (CNNs). To update the parameters of CNNs more smoothly, the fractional-order momentum training method is derived from the Grünwald-Letnikov (G-L) difference operation. The stochastic classical momentum (SCM) algorithm and the adaptive moment estimation (Adam) algorithm are improved by replacing the integer-order difference with the fractional-order difference. In addition, linear and nonlinear schemes for adjusting the fractional order are discussed. The proposed methods therefore improve the flexibility and adaptive ability of CNN parameters. We evaluate the methods on the MNIST and CIFAR-10 datasets, and the experimental results show that they improve the recognition accuracy and learning convergence speed of CNNs compared with the traditional SCM and Adam methods. (c) 2021 Elsevier B.V. All rights reserved.
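The core idea described in the abstract — replacing the integer-order difference in a momentum update with a truncated G-L fractional difference over recent gradients — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the truncation length `K`, the hyperparameter values, and the way the momentum term is combined with the fractional difference are all assumptions made for the example.

```python
def gl_coefficients(alpha, K):
    # Grünwald-Letnikov weights c_j = (-1)^j * binom(alpha, j), via the
    # recurrence c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j).
    # alpha = 1 recovers the first difference [1, -1, 0, ...];
    # alpha = 0 recovers the identity [1, 0, 0, ...].
    coeffs = [1.0]
    for j in range(1, K + 1):
        coeffs.append(coeffs[-1] * (1.0 - (alpha + 1.0) / j))
    return coeffs

def fractional_momentum_update(w, v, grad_history, lr, mu, alpha, K=5):
    # Hypothetical SCM-style step: the current gradient is replaced by a
    # truncated G-L fractional difference of the last few gradients, then
    # fed into an ordinary momentum accumulator.
    c = gl_coefficients(alpha, K)
    frac_grad = sum(cj * g for cj, g in zip(c, reversed(grad_history)))
    v = mu * v + frac_grad
    return w - lr * v, v

# Toy usage: minimize f(w) = (w - 3)^2 with a fixed fractional order.
w, v, history = 0.0, 0.0, []
for _ in range(100):
    g = 2.0 * (w - 3.0)              # gradient of (w - 3)^2
    history.append(g)
    w, v = fractional_momentum_update(w, v, history[-6:],
                                      lr=0.05, mu=0.5, alpha=0.9)
```

The paper also discusses linear and nonlinear schedules for the order; in this sketch that would amount to recomputing `alpha` (and hence the coefficients) at each iteration rather than keeping it fixed.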

