4.5 Article

Batch Gradient Training Method with Smoothing Group L0 Regularization for Feedforward Neural Networks

Related references

Note: Only a portion of the references is listed.
Article Computer Science, Artificial Intelligence

Convergence and objective functions of noise-injected multilayer perceptrons with hidden multipliers

Xiangyu Wang et al.

Summary: This paper presents a network structure called MLPHM, in which each hidden node is gated by a tunable multiplier, so that nodes whose multipliers shrink toward zero can be pruned (a sketch of the gating idea follows this entry). A noise-injected training scheme is proposed to improve fault tolerance and generalization ability, and the corresponding objective functions and convergence theorems are derived and verified through simulations. Applications to UCI datasets show efficient pruning and superior generalization ability of the proposed algorithms.

NEUROCOMPUTING (2021)
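
As a rough, hedged illustration of the gate-multiplier idea (the symbols below are generic and are not taken from the cited paper): if the output of hidden node i is scaled by a trainable gate g_i,

\[ h_i = g_i \, f\!\left(\mathbf{w}_i^{\top}\mathbf{x}\right), \qquad y = \sum_{i=1}^{n} v_i \, h_i , \]

then a penalty that drives some g_i toward zero effectively removes the corresponding hidden nodes, yielding the pruned network mentioned in the summary.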

Article Computer Science, Artificial Intelligence

Feature Selection Using a Neural Network With Group Lasso Regularization and Controlled Redundancy

Jian Wang et al.

Summary: This study presents a neural network-based feature selection scheme that controls the level of redundancy among the selected features by integrating two penalties into the training objective (a generic form of the group-sparsity term is sketched after this entry). Experimental results demonstrate the effectiveness of the proposed scheme in redundancy control.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021)
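
For orientation, a group Lasso penalty for feature selection typically groups all outgoing weights of each input node; with \mathbf{w}_g the weight vector of feature g, a generic penalized objective reads (the redundancy-control penalty of the cited paper is an additional term not reproduced here):

\[ E(\mathbf{w}) = \tilde{E}(\mathbf{w}) + \lambda \sum_{g=1}^{G} \sqrt{d_g}\, \|\mathbf{w}_g\|_2 , \]

where \tilde{E} is the training error, d_g is the size of group g, and features whose weight groups are driven to zero are discarded.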

Article Computer Science, Information Systems

Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network

Qian Kang et al.

Summary: The paper proposes a sparse and accelerated training method for the Sigma-Pi-Sigma neural network, using smoothing group Lasso regularization to sparsify the network structure and adaptive momentum to speed up learning convergence (a generic momentum update is sketched after this entry). Theoretical analysis and numerical experiments demonstrate the effectiveness of the new algorithm.

INFORMATION SCIENCES (2021)
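
As a generic sketch (the notation is illustrative and not the paper's exact scheme), a batch gradient step with a momentum term has the form

\[ \Delta\mathbf{w}^{k} = -\eta\, \nabla E(\mathbf{w}^{k}) + \alpha_k\, \Delta\mathbf{w}^{k-1}, \qquad \mathbf{w}^{k+1} = \mathbf{w}^{k} + \Delta\mathbf{w}^{k}, \]

where an adaptive scheme adjusts the momentum coefficient \alpha_k at each iteration, for example as a function of the current gradient, rather than keeping it fixed.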

Article Computer Science, Information Systems

Convergence of a Gradient-Based Learning Algorithm With Penalty for Ridge Polynomial Neural Networks

Qinwei Fan et al.

Summary: This study introduces a regularization model with a Group Lasso penalty to improve the convergence speed and generalization ability of the network. A smoothing function is used to approximate the Group Lasso penalty, which avoids numerical oscillation and removes the obstacle that the penalty's non-differentiability poses to convergence analysis (a common smoothing surrogate is sketched after this entry). The efficiency of the proposed algorithm is demonstrated through numerical experiments, compared against several other regularizers, and supported by theoretical analysis.

IEEE ACCESS (2021)
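
One common smoothing surrogate in this line of work (the exact choice varies from paper to paper) replaces the group norm, which is non-differentiable at the origin, by

\[ \|\mathbf{w}_g\|_2 \approx \sqrt{\|\mathbf{w}_g\|_2^{2} + \mu^{2}}, \qquad \mu > 0 \ \text{small}, \]

which removes the singularity at \mathbf{w}_g = \mathbf{0}, and with it the oscillation in gradient training, while approximately preserving the group-sparsifying effect.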

Article Automation & Control Systems

Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization

Xuetao Xie et al.

IEEE TRANSACTIONS ON CYBERNETICS (2020)

Article Computer Science, Artificial Intelligence

Feature Selection for Neural Networks Using Group Lasso Regularization

Huaqing Zhang et al.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2020)

Article Computer Science, Artificial Intelligence

Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks

Huisheng Zhang et al.

NEUROCOMPUTING (2020)

Article Automation & Control Systems

Weight Noise Injection-Based MLPs With Group Lasso Penalty: Asymptotic Convergence and Application to Node Pruning

Jian Wang et al.

IEEE TRANSACTIONS ON CYBERNETICS (2019)

Article Computer Science, Information Systems

Group L1/2 Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks

Habtamu Zegeye Alemu et al.

IEEE ACCESS (2019)

Proceedings Paper Computer Science, Artificial Intelligence

L0 Regularization based Fine-grained Neural Network Pruning Method

Qixin Xie et al.

PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE (ECAI-2019) (2019)

Article Computer Science, Artificial Intelligence

L1/2 regularization learning for smoothing interval neural networks: Algorithms and convergence analysis

Dakun Yang et al.

NEUROCOMPUTING (2018)

Article Computer Science, Artificial Intelligence

Smooth group L1/2 regularization for input layer of feedforward neural networks

Feng Li et al.

NEUROCOMPUTING (2018)

Article Computer Science, Artificial Intelligence

A Novel Pruning Algorithm for Smoothing Feedforward Neural Networks Based on Group Lasso Method

Jian Wang et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2018)

Article Computer Science, Information Systems

Convergence analyses on sparse feedforward neural networks via group lasso regularization

Jian Wang et al.

INFORMATION SCIENCES (2017)

Article Computer Science, Artificial Intelligence

Group sparse regularization for deep neural networks

Simone Scardapane et al.

NEUROCOMPUTING (2017)

Article Computer Science, Artificial Intelligence

Online gradient method with smoothing l0 regularization for feedforward neural networks

Huisheng Zhang et al.

NEUROCOMPUTING (2017)

Article Computer Science, Information Systems

Input Layer Regularization of Multilayer Feedforward Neural Networks

Feng Li et al.

IEEE ACCESS (2017)

Article Computer Science, Information Systems

Alternating Iteration for Lp (0 < p <= 1) Regularized CT Reconstruction

Chuang Miao et al.

IEEE ACCESS (2016)

Article Computer Science, Artificial Intelligence

Batch gradient training method with smoothing l0 regularization for feedforward neural networks

Huisheng Zhang et al.

NEURAL COMPUTING & APPLICATIONS (2015)

Article Computer Science, Artificial Intelligence

Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks

Wei Wu et al.

NEURAL NETWORKS (2014)

Article Computer Science, Artificial Intelligence

Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks

Huisheng Zhang et al.

NEUROCOMPUTING (2012)

Article Computer Science, Artificial Intelligence

L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver

Zongben Xu et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2012)

Article Computer Science, Information Systems

On a Variational Norm Tailored to Variable-Basis Approximation Schemes

Giorgio Gnecco et al.

IEEE TRANSACTIONS ON INFORMATION THEORY (2011)

Article Computer Science, Artificial Intelligence

Convergence analysis of online gradient method for BP neural networks

Wei Wu et al.

NEURAL NETWORKS (2011)

Article Computer Science, Artificial Intelligence

A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems

M. Gethsiyal Augasta et al.

NEURAL PROCESSING LETTERS (2011)

Article Computer Science, Artificial Intelligence

Deterministic convergence of conjugate gradient method for feedforward neural networks

Jian Wang et al.

NEUROCOMPUTING (2011)

Article Computer Science, Information Systems

L1/2 regularization

Xu ZongBen et al.

SCIENCE CHINA-INFORMATION SCIENCES (2010)

Article Computer Science, Artificial Intelligence

Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure

XQ Zeng et al.

NEUROCOMPUTING (2006)

Article Computer Science, Information Systems

Bounds on rates of variable-basis and neural-network approximation

V Kurková et al.

IEEE TRANSACTIONS ON INFORMATION THEORY (2001)