Article

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 67, Issue 9, Pages 2357-2370

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2019.2904921

Keywords

Deep learning; stochastic gradient descent; global optimality; escaping local minima; generalization

Funding

  1. National Science Foundation [1500713, 1514056, 1505970, 1711471]
  2. National Natural Science Foundation of China [U1509215, 61621063]
  3. Program for Changjiang Scholars and Innovative Research Team in University [IRT1208]
  4. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1514056, 1505970]


Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either pose assumptions on the underlying data distribution being, e.g., Gaussian, or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm, which can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD by leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network and a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
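To make the setting concrete, the following is a minimal sketch of the idea the abstract describes: SGD with random noise perturbation on a one-hidden-layer ReLU network under the hinge loss, applied to linearly separable binary data. It is an illustration of the general technique, not the paper's exact algorithm; the toy data, hyperparameters (step size, noise scale, hidden width), and the choice to fix the second-layer signs are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: the label is the sign of the first coordinate.
n, d, k = 200, 5, 8                       # samples, input dim, hidden units
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 1e-3)               # +/-1 labels

W = rng.normal(scale=0.5, size=(k, d))    # hidden-layer weights (trained)
v = rng.choice([-1.0, 1.0], size=k)       # second-layer signs (fixed here)

def margin(x, W):
    # Network output before multiplying by the label.
    return v @ np.maximum(W @ x, 0.0)

eta, noise = 0.1, 0.01                    # step size and noise scale (illustrative)
for epoch in range(50):
    for i in rng.permutation(n):
        x, yi = X[i], y[i]
        if yi * margin(x, W) < 1.0:       # hinge loss is active: take a step
            h = (W @ x > 0).astype(float)            # ReLU activation pattern
            grad = -yi * (v * h)[:, None] * x[None, :]
            # Random perturbation of the update is the abstract's device for
            # escaping spurious stationary points of the nonconvex loss.
            W -= eta * (grad + noise * rng.normal(size=W.shape))

acc = np.mean(np.sign([margin(x, W) for x in X]) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the hinge loss is piecewise linear and the data are separable, a nonzero update occurs only when a sample violates the unit margin; the paper's analysis bounds the number of such updates, which is what the inner `if` tracks.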

