Article

Batch gradient training method with smoothing l0 regularization for feedforward neural networks

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 26, Issue 2, Pages 383-390

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-014-1730-x

Keywords

Feedforward neural networks; Gradient method; l(0) Regularization; Sparsity; Convergence

Funding

  1. National Natural Science Foundation of China [61101228]
  2. China Postdoctoral Science Foundation [2012M520623]

Abstract

This paper considers the batch gradient method with smoothing l0 regularization (BGSL0) for training and pruning feedforward neural networks. We show why BGSL0 can produce sparse weights, which are crucial for pruning networks. We prove both the weak and strong convergence of BGSL0 under mild conditions, and we also establish the monotonic decrease of the error function during training. Two examples are given to substantiate the theoretical analysis and to show that BGSL0 yields better sparsity than three typical regularization methods.
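The record above does not specify the smoothing function used to approximate the l0 penalty, so the sketch below assumes one common smooth surrogate, f(w) = 1 - exp(-w^2 / sigma^2), whose sum over the weights approaches the number of nonzero weights as sigma shrinks. It applies a full-batch penalized gradient step to a plain linear model rather than a multilayer network; the function names (smoothed_l0_penalty, batch_gradient_step) and the values of lam, sigma, and eta are illustrative assumptions, not details from the paper.

import numpy as np

def smoothed_l0_penalty(w, sigma):
    # Smooth surrogate for ||w||_0: each term rises from 0 toward 1 as |w_i|
    # grows, so the sum approximates the count of nonzero weights.
    return np.sum(1.0 - np.exp(-(w ** 2) / sigma ** 2))

def smoothed_l0_grad(w, sigma):
    # Elementwise gradient of the surrogate; it vanishes at w = 0 and for
    # large |w|, so the penalty mainly shrinks small-but-nonzero weights.
    return (2.0 * w / sigma ** 2) * np.exp(-(w ** 2) / sigma ** 2)

def batch_gradient_step(w, X, y, lam, sigma, eta):
    # One full-batch step on E(w) = MSE(w) + lam * smoothed_l0(w).
    # A linear model y_hat = X @ w stands in for the network to keep the
    # sketch short.
    residual = X @ w - y
    mse_grad = X.T @ residual / len(y)
    return w - eta * (mse_grad + lam * smoothed_l0_grad(w, sigma))

# Toy run: 3 of 20 true weights are nonzero; the penalized updates drive the
# remaining weights toward zero, giving the sparse solution pruning relies on.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true
w = 0.1 * rng.standard_normal(20)
for _ in range(2000):
    w = batch_gradient_step(w, X, y, lam=0.05, sigma=0.5, eta=0.05)
print("weights with |w_i| < 1e-2:", int(np.sum(np.abs(w) < 1e-2)))

The same update form carries over to feedforward network weights by replacing the linear-model gradient with backpropagated gradients of the training error.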
