Article

Batch Gradient Training Method with Smoothing Group L0 Regularization for Feedforward Neural Networks

Journal

NEURAL PROCESSING LETTERS
Volume 55, Issue 2, Pages 1663-1679

Publisher

SPRINGER
DOI: 10.1007/s11063-022-10956-w

Keywords

Feedforward neural networks; Batch gradient method; Smoothing Group L-0 regularization; Convergence

Abstract

L0 regularization is an ideal pruning method for neural networks, as it produces the sparsest results of all Lp regularization methods. However, solving the L0-regularized problem is NP-hard, and existing training algorithms with L0 regularization can only prune network weights, not neurons. To this end, in this paper we propose a batch gradient training method with smoothing Group L0 regularization (BGSGL0). BGSGL0 not only overcomes the NP-hard nature of the L0 regularizer, but also prunes the network at the neuron level. The mechanism by which BGSGL0 prunes hidden neurons is analysed, and convergence is theoretically established under mild conditions. Simulation results are provided to validate the theoretical findings and the superiority of the proposed algorithm.
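The abstract does not spell out the smoothing function or the update rule, so the following Python sketch only illustrates the general idea under stated assumptions: a one-hidden-layer feedforward network trained by full-batch gradient descent, with the group L0 count over hidden-neuron weight groups replaced by a smooth surrogate h(t) = t^2 / (t^2 + sigma). The function name train_bgsgl0 and all hyperparameters (lam, sigma, lr, n_hidden, epochs) are hypothetical and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bgsgl0(X, y, n_hidden=10, lam=1e-3, sigma=1e-2, lr=0.1, epochs=2000):
    # X: (n_samples, n_inputs), y: (n_samples,)
    n_samples, n_inputs = X.shape
    W = rng.normal(scale=0.5, size=(n_hidden, n_inputs))  # row g = weight group of hidden neuron g
    v = rng.normal(scale=0.5, size=n_hidden)               # hidden-to-output weights

    for _ in range(epochs):
        # Forward pass on the whole training set (batch gradient method).
        H = sigmoid(X @ W.T)                 # hidden activations, (n_samples, n_hidden)
        out = H @ v                          # network output, (n_samples,)
        err = out - y

        # Full-batch gradients of the mean squared error.
        grad_v = H.T @ err / n_samples
        delta = np.outer(err, v) * H * (1.0 - H)
        grad_W = delta.T @ X / n_samples

        # Smoothed group-L0 penalty: lam * sum_g h(||W_g||), with h(t) = t^2 / (t^2 + sigma).
        # Its gradient with respect to the group W_g is lam * 2*sigma / (||W_g||^2 + sigma)^2 * W_g.
        norms_sq = np.sum(W ** 2, axis=1, keepdims=True)
        grad_W += lam * (2.0 * sigma / (norms_sq + sigma) ** 2) * W

        W -= lr * grad_W
        v -= lr * grad_v

    return W, v

After training, hidden neurons whose weight groups ||W_g|| have been driven to (near) zero can be removed, which is the neuron-level pruning effect the abstract describes; the smoothing parameter sigma controls how closely the surrogate approximates the exact 0/1 group count.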
