Article

A Gradient-Guided Evolutionary Approach to Training Deep Neural Networks

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNNLS.2021.3061630

Keywords

Artificial neural networks; Training; Optimization; Search problems; Scalability; Complexity theory; Genetic algorithms; Deep neural networks (DNNs); evolutionary algorithm (EA); genetic operator; gradient descent; sparsity

Funding

  1. National Key Research and Development Program of China [2018AAA0100100]
  2. National Natural Science Foundation of China [61672033, 61822301, 61876123, 61906001, 61903178, U1804262, U20A20306]
  3. Hong Kong Scholars Program [XJ2019035]
  4. Anhui Provincial Natural Science Foundation [1808085J06, 1908085QF271]
  5. State Key Laboratory of Synthetical Automation for Process Industries [PAL-N201805]
  6. Research Grants Council of the Hong Kong Special Administrative Region, China [PolyU11202418, PolyU11209219]
  7. Royal Society International Exchanges Program [IEC\NSFC\170279]
  8. EPSRC [EP/M017869/1] (funding source: UKRI)

Abstract

This article proposes a gradient-guided evolutionary approach to training deep neural networks that combines the advantages of gradient-based methods and evolutionary algorithms and accounts for network sparsity, demonstrating its effectiveness on large-scale optimization problems.
It has been widely recognized that the efficient training of neural networks (NNs) is crucial to classification performance. While a series of gradient-based approaches have been extensively developed, they are criticized for easily becoming trapped in local optima and for their sensitivity to hyperparameters. Owing to their high robustness and wide applicability, evolutionary algorithms (EAs) have been regarded as a promising alternative for training NNs in recent years. However, EAs suffer from the curse of dimensionality and are inefficient in training deep NNs (DNNs). Inheriting the advantages of both gradient-based approaches and EAs, this article proposes a gradient-guided evolutionary approach to train DNNs. The proposed approach introduces a novel genetic operator that optimizes the weights in the search space, where the search direction is determined by the gradient of the weights. Moreover, the proposed approach takes network sparsity into account, which greatly reduces network complexity and alleviates overfitting. Experimental results on single-layer NNs, deep-layer NNs, recurrent NNs, and convolutional NNs (CNNs) demonstrate the effectiveness of the proposed approach. In short, this work not only introduces a novel approach for training DNNs but also enhances the performance of EAs in solving large-scale optimization problems.
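To make the idea in the abstract concrete, the minimal Python sketch below shows what a gradient-guided genetic operator might look like: offspring are generated by perturbing the parent weights along the negative gradient with randomized step sizes, followed by a magnitude-based sparsification step. The function names, the step-size scheme, and the pruning rule are illustrative assumptions for this sketch, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def gradient_guided_mutation(weights, grad, n_offspring=4, step_scale=0.1):
    """Generate offspring by stepping along -grad with randomized step sizes."""
    offspring = []
    for _ in range(n_offspring):
        # Random per-weight step sizes keep the descent direction stochastic,
        # so the population retains diversity and can escape shallow optima.
        steps = step_scale * rng.random(weights.shape)
        offspring.append(weights - steps * grad)
    return offspring

def sparsify(weights, keep_ratio=0.5):
    """Magnitude pruning (an assumed sparsity scheme): zero the smallest weights."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_ratio)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# Toy usage on the quadratic loss f(w) = 0.5 * ||w||^2, whose gradient is w.
parent = rng.normal(size=10)
children = [sparsify(c) for c in gradient_guided_mutation(parent, grad=parent)]
best = min(children, key=lambda w: 0.5 * np.dot(w, w))
print("parent loss:", 0.5 * np.dot(parent, parent))
print("best child loss:", 0.5 * np.dot(best, best))

On this toy quadratic loss every offspring moves downhill, while the random step sizes preserve population diversity; this captures the intuition behind combining gradient information with evolutionary search.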
