Journal
NEURAL PROCESSING LETTERS
Volume 55, Issue 2, Pages 1663-1679
Publisher
SPRINGER
DOI: 10.1007/s11063-022-10956-w
Keywords
Feedforward neural networks; Batch gradient method; Smoothing Group L-0 regularization; Convergence
L-0 regularization is an ideal pruning method for neural networks, as it yields the sparsest results of all L-p regularization methods. However, solving the L-0 regularization problem is NP-hard, and existing training algorithms with L-0 regularization can only prune the network's weights, not its neurons. To this end, in this paper we propose a batch gradient training method with smoothing Group L-0 regularization (BGSGL(0)). BGSGL(0) not only overcomes the NP-hard nature of the L-0 regularizer, but also prunes the network at the neuron level. The working mechanism by which BGSGL(0) prunes hidden neurons is analysed, and convergence is theoretically established under mild conditions. Simulation results are provided to validate the theoretical findings and the superiority of the proposed algorithm.
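The idea described in the abstract can be sketched as follows: the weights feeding each hidden neuron form a group, the (non-differentiable, NP-hard) group L-0 penalty counts the nonzero groups, and a smooth surrogate of that count is added to the batch-gradient objective so that whole groups, and hence whole neurons, are driven toward zero. The sketch below is a minimal illustration, not the paper's algorithm: the specific smoothing function `t^2 / (t^2 + eps)`, the network architecture, and all hyperparameters are assumptions chosen only to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task: the target depends only on the first input feature,
# so a group-sparse regularizer should be able to shrink redundant neurons.
n, d, H = 200, 3, 4
X = rng.normal(size=(n, d))
Y = X[:, 0]

W = rng.normal(scale=0.5, size=(H, d))   # input -> hidden weights; row j is neuron j's group
v = rng.normal(scale=0.5, size=H)        # hidden -> output weights

eps, lam, lr = 0.1, 0.01, 0.1            # assumed hyperparameters, not from the paper

def smooth_l0(norms):
    # Smooth surrogate for the 0/1 indicator ||w_j|| != 0.
    # NOTE: the paper's exact smoothing function is not reproduced here;
    # t^2 / (t^2 + eps) is a stand-in with the same limiting behaviour
    # (0 at t = 0, approaching 1 for large |t|).
    return norms**2 / (norms**2 + eps)

def objective(W, v):
    Hact = np.tanh(X @ W.T)
    err = Hact @ v - Y
    norms = np.linalg.norm(W, axis=1)
    return 0.5 * np.mean(err**2) + lam * smooth_l0(norms).sum()

loss0 = objective(W, v)
for _ in range(500):                     # batch (full-gradient) updates
    Hact = np.tanh(X @ W.T)              # (n, H) hidden activations
    err = Hact @ v - Y                   # (n,) residuals
    grad_v = Hact.T @ err / n
    dpre = (err[:, None] * v[None, :]) * (1.0 - Hact**2)
    grad_W = dpre.T @ X / n
    # Gradient of the smoothed group-L0 penalty: with s = ||W[j,:]||^2,
    # d/dW [ s / (s + eps) ] = 2*eps / (s + eps)^2 * W[j,:].
    norms2 = np.sum(W**2, axis=1, keepdims=True)
    grad_W += lam * 2.0 * eps / (norms2 + eps)**2 * W
    v -= lr * grad_v
    W -= lr * grad_W

print(objective(W, v) < loss0)
```

Because the penalty acts on each neuron's whole weight vector rather than on individual weights, a neuron whose group norm is driven to (near) zero can be removed entirely, which is the neuron-level pruning the abstract contrasts with weight-level L-0 schemes.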