Article

A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions

Journal

JOURNAL OF MACHINE LEARNING RESEARCH
Volume 23, Issue -, Pages -

Publisher

MICROTOME PUBL

Keywords

Gradient descent; artificial neural networks; non-convex optimization

Funding

  1. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [EXC 2044-390685587]
  2. startup fund project of Shenzhen Research Institute of Big Data [T00120220001]


In this article, the authors prove that the risk of the considered GD process converges exponentially fast to zero with positive probability, provided that the input data distribution is equivalent to the continuous uniform distribution on a compact interval, the ANN parameters are randomly initialized with the standard normal distribution, and the target function is continuous and piecewise affine linear. To do so, they show that suitable sets of global minima of the risk functions are twice continuously differentiable submanifolds of the ANN parameter space on which the Hessians of the risk functions satisfy a maximal rank condition, and then apply existing local convergence results for GD-type optimization methods. These results contribute to the theoretical foundations of optimization algorithms in deep learning.
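
The summary above and the abstract below concern the risk of a one-hidden-layer ReLU ANN trained by plain vanilla GD from a standard normal random initialization. A minimal formalization of that setting is sketched below; the notation (width H, interval [a,b], learning rate gamma) is an illustrative choice of ours and not necessarily the paper's.

\[
  \mathcal{N}_{\theta}(x) \;=\; c + \sum_{j=1}^{H} v_j \max\{ w_j x + b_j,\, 0 \},
  \qquad x \in [a,b],
\]
\[
  \mathcal{L}(\theta) \;=\; \int_a^b \bigl( \mathcal{N}_{\theta}(x) - f(x) \bigr)^2 \, \mu(\mathrm{d}x),
  \qquad
  \Theta_0 \sim \mathcal{N}(0, I), \quad
  \Theta_{n+1} = \Theta_n - \gamma\, \nabla_{\theta} \mathcal{L}(\Theta_n).
\]

Here \(\theta = (w_1,\dots,w_H, b_1,\dots,b_H, v_1,\dots,v_H, c)\) collects the ANN parameters, \(f\) is the continuous, piecewise affine linear target function, and \(\mu\) is the input distribution, assumed equivalent to the uniform distribution on \([a,b]\).
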
Gradient descent (GD) type optimization methods are the standard instrument to train artificial neural networks (ANNs) with rectified linear unit (ReLU) activation. Despite the great success of GD type optimization methods in numerical simulations for the training of ANNs with ReLU activation, it remains - even in the simplest situation of the plain vanilla GD optimization method and ANNs with one hidden layer - an open problem to prove (or disprove) the conjecture that the risk of the GD optimization method converges in the training of such ANNs to zero. In this article we establish in the situation where the probability distribution of the input data is equivalent to the continuous uniform distribution on a compact interval, where the probability distribution for the random initialization of the ANN parameters is the standard normal distribution, and where the target function under consideration is continuous and piecewise affine linear that the risk of the considered GD process converges exponentially fast to zero with a positive probability. Roughly speaking, the key ingredients in our mathematical convergence analysis are (i) to prove that suitable sets of global minima of the risk functions are twice continuously differentiable submanifolds of the ANN parameter spaces, (ii) to prove that the Hessians of the risk functions on these sets of global minima satisfy an appropriate maximal rank condition, and, thereafter, (iii) to apply the machinery in [Fehrman, B., Gess, B., Jentzen, A., Convergence rates for the stochastic gradient descent method for non-convex objective functions. J. Mach. Learn. Res. 21(136): 1-48, 2020] to establish local convergence of the GD optimization method. As a consequence, we obtain convergence of the risk to zero as the width of the ANNs, the number of independent random initializations, and the number of GD steps increase to infinity.
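
As an illustration of the training scheme analyzed in the abstract - plain vanilla GD, several independent standard normal random initializations, inputs drawn uniformly from a compact interval, and a continuous piecewise affine linear target - the following sketch approximates the risk by Monte Carlo sampling; the width, learning rate, step count, and target function are illustrative choices and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Inputs uniform on a compact interval [A, B]; continuous, piecewise
# affine linear target (illustrative choices, not taken from the paper).
A, B = -1.0, 1.0
def target(x):
    return np.abs(x)  # continuous and piecewise affine linear

# Monte Carlo approximation of the L^2(uniform([A, B])) risk.
xs = rng.uniform(A, B, size=2048)
ys = target(xs)

H = 32          # width of the single hidden layer (illustrative)
LR = 1e-2       # GD learning rate (illustrative)
STEPS = 5000    # number of plain vanilla GD steps per initialization
RESTARTS = 5    # number of independent standard normal initializations

def risk_and_grad(theta):
    """Empirical risk of the one-hidden-layer ReLU ANN and its gradient."""
    w, b, v, c = theta
    pre = np.outer(xs, w) + b           # (N, H) pre-activations
    act = np.maximum(pre, 0.0)          # ReLU activation
    err = act @ v + c - ys              # (N,) residuals
    risk = np.mean(err ** 2)
    g_out = 2.0 * err / xs.size         # d risk / d prediction
    g_act = np.outer(g_out, v) * (pre > 0.0)
    grads = (xs @ g_act,                # d risk / d w
             g_act.sum(axis=0),         # d risk / d b
             act.T @ g_out,             # d risk / d v
             g_out.sum())               # d risk / d c
    return risk, grads

best = np.inf
for r in range(RESTARTS):
    # Standard normal random initialization of all ANN parameters.
    theta = [rng.standard_normal(H), rng.standard_normal(H),
             rng.standard_normal(H), rng.standard_normal()]
    for _ in range(STEPS):
        _, grads = risk_and_grad(theta)
        theta = [p - LR * g for p, g in zip(theta, grads)]
    final_risk, _ = risk_and_grad(theta)
    best = min(best, final_risk)
    print(f"initialization {r}: final risk {final_risk:.3e}")
print(f"best risk over {RESTARTS} initializations: {best:.3e}")

Keeping the best run over several independent initializations mirrors the structure of the result: the risk converges to zero with positive probability per initialization, so repeated independent restarts drive the overall risk toward zero.
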

Authors

Arnulf Jentzen; Adrian Riekert
