Article

L1/2 regularization learning for smoothing interval neural networks: Algorithms and convergence analysis

Journal

NEUROCOMPUTING
Volume 272, Pages 122-129

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.neucom.2017.06.061

Keywords

Interval neural network; Interval computation; Batch gradient algorithm; Smoothing L-1/2 regularization; Convergence

Funding

  1. NSFC [61403056, 61573387]
  2. GuangDong Program [2015B010105005]
  3. Natural Science Foundation Guidance Project of Liaoning Province [201602050]
  4. GuangZhou Program [201508010032]


Interval neural networks can readily address uncertain information, since they inherently handle various kinds of uncertainty represented by intervals. L-q (0 < q < 1) regularization was proposed after L-1 regularization as a better approach to sparsity problems; among these, L-1/2 regularization is of particular importance and can be taken as representative. However, weight oscillation may occur during the learning process because the derivative of the L-1/2 regularization term is discontinuous. In this paper, a novel batch gradient algorithm with smoothing L-1/2 regularization is proposed to prevent weight oscillation for a smoothing interval neural network (SINN), a modified interval neural network. Here, by smoothing we mean that, in a neighborhood of the origin, the absolute values of the weights are replaced by a smooth function so that the derivative is continuous. Compared with the conventional gradient learning algorithm with L-1/2 regularization, this approach obtains sparser weights and a simpler network structure, and improves learning efficiency. We then present a sufficient condition for the convergence of SINN. Finally, simulation results illustrate the convergence of the main results. (C) 2017 Elsevier B.V. All rights reserved.
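
The smoothing idea described in the abstract lends itself to a short illustration. The sketch below is a minimal Python example, not the paper's implementation: the piecewise-polynomial smoothing function, the parameter names (a, lam, eta), and the update rule are assumptions chosen to match the description, namely replacing |w| near the origin with a smooth function so that the L-1/2 penalty has a continuous gradient everywhere.

```python
import numpy as np

# Hypothetical smoothing of |w| (an assumption, not the paper's exact
# function): equals |w| for |w| >= a, and on (-a, a) is a polynomial
# that matches |w| and its derivative at the boundary, with minimum
# 3a/8 > 0 at the origin, so the penalty gradient never blows up.
def smooth_abs(w, a=0.1):
    return np.where(np.abs(w) >= a,
                    np.abs(w),
                    -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8)

def smooth_abs_grad(w, a=0.1):
    return np.where(np.abs(w) >= a,
                    np.sign(w),
                    -w**3 / (2 * a**3) + 3 * w / (2 * a))

# Smoothed L_{1/2}-style penalty lam * sum_i h(w_i)^{1/2} and its gradient.
def l_half_penalty(w, lam=1e-3, a=0.1):
    return lam * np.sum(smooth_abs(w, a) ** 0.5)

def l_half_penalty_grad(w, lam=1e-3, a=0.1):
    h = smooth_abs(w, a)
    return lam * 0.5 * h ** (-0.5) * smooth_abs_grad(w, a)

# One batch-gradient step: data-error gradient plus smoothed penalty gradient.
def batch_step(w, error_grad, eta=0.05, lam=1e-3, a=0.1):
    return w - eta * (error_grad + l_half_penalty_grad(w, lam, a))

if __name__ == "__main__":
    w = np.array([0.5, 0.05, -0.2])
    error_grad = np.array([0.1, -0.3, 0.02])  # placeholder data-error gradient
    w = batch_step(w, error_grad)
    print(w, l_half_penalty(w))
```

The point of the smoothing is visible in l_half_penalty_grad: with the raw penalty |w|^{1/2}, the gradient contains |w|^{-1/2} sign(w), which is unbounded and discontinuous at w = 0 and can make small weights oscillate around zero; because smooth_abs is bounded below by 3a/8, the smoothed gradient stays finite and continuous.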

