Article

Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation

Journal

Physica A: Statistical Mechanics and its Applications

Publisher

Elsevier
DOI: 10.1016/j.physa.2020.125517

Keywords

Neural networks; Machine learning; Statistical physics

Funding

  1. Northern Netherlands Region of Smart Factories (RoSF) consortium


Summary

Using concepts from the statistical physics of learning, the study examines layered neural networks with rectified linear unit (ReLU) activations and compares their training behavior with that of networks using sigmoidal activations. The results reveal qualitative differences between the two: sigmoidal networks exhibit discontinuous transitions, with specialized configurations coexisting and competing with poorly performing states, whereas ReLU networks show continuous transitions for all numbers of hidden units. The findings are confirmed by Monte Carlo simulations of the training processes.

Abstract

By applying concepts from the statistical physics of learning, we study layered neural networks of rectified linear units (ReLU). The comparison with conventional, sigmoidal activation functions is at the center of interest. We compute typical learning curves for large shallow networks with K hidden units in matching student-teacher scenarios. The systems undergo phase transitions, i.e. sudden changes of the generalization performance via the process of hidden unit specialization at critical sizes of the training set. Surprisingly, our results show that the training behavior of ReLU networks is qualitatively different from that of networks with sigmoidal activations. In networks with K >= 3 sigmoidal hidden units, the transition is discontinuous: specialized network configurations co-exist and compete with states of poor performance even for very large training sets. In contrast, the use of ReLU activations results in continuous transitions for all K. For large enough training sets, two competing, differently specialized states display similar generalization abilities, which coincide exactly for large hidden layers in the limit K -> infinity. Our findings are also confirmed in Monte Carlo simulations of the training processes. (C) 2020 The Author(s). Published by Elsevier B.V.
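
Code sketch

The abstract refers to matching student-teacher scenarios for shallow networks with K hidden units and to Monte Carlo simulations of the training process. The following Python sketch illustrates one minimal version of such a setup under stated assumptions: a soft-committee architecture with fixed unit hidden-to-output weights, a quadratic training loss minimized by plain gradient descent, an erf-based sigmoidal activation, and arbitrary illustrative parameters (N, K, alpha, learning rate). It is not the authors' code; it only sketches how the generalization error of ReLU and sigmoidal students trained on teacher-generated data could be estimated by Monte Carlo sampling.

# Hedged sketch, not the authors' implementation: matching student-teacher
# setup for a shallow network with K hidden units, trained on P = alpha*N
# random examples; the generalization error is estimated by Monte Carlo
# sampling of random test inputs. All parameter values are illustrative.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

N, K = 100, 3            # input dimension and hidden-layer size (assumed values)
alpha = 5.0              # training-set size in units of N: P = alpha * N
P = int(alpha * N)

def g_sigmoid(h):
    return erf(h / np.sqrt(2.0))                 # erf-based sigmoidal activation

def dg_sigmoid(h):
    return np.sqrt(2.0 / np.pi) * np.exp(-h ** 2 / 2.0)

def g_relu(h):
    return np.maximum(h, 0.0)

def dg_relu(h):
    return (h > 0.0).astype(float)

def output(W, X, g):
    # soft-committee output: sum of activations of local fields w_k . x / sqrt(N)
    return g(X @ W.T / np.sqrt(N)).sum(axis=1)

def generalization_error(W_student, W_teacher, g, n_test=20000):
    # Monte Carlo estimate of eps_g = E[(student(x) - teacher(x))^2] / 2
    X = rng.standard_normal((n_test, N))
    d = output(W_student, X, g) - output(W_teacher, X, g)
    return 0.5 * np.mean(d ** 2)

def train(g, dg, epochs=2000, lr=0.5):
    # teacher with i.i.d. Gaussian weights; student starts from small random weights
    W_teacher = rng.standard_normal((K, N))
    W_student = 0.1 * rng.standard_normal((K, N))
    X = rng.standard_normal((P, N))
    y = output(W_teacher, X, g)
    for _ in range(epochs):
        H = X @ W_student.T / np.sqrt(N)          # local fields, shape (P, K)
        err = g(H).sum(axis=1) - y                # residuals of the quadratic loss
        grad = (err[:, None] * dg(H)).T @ X / (np.sqrt(N) * P)
        W_student -= lr * grad                    # plain gradient descent
    return generalization_error(W_student, W_teacher, g)

print("eps_g, sigmoidal student-teacher:", train(g_sigmoid, dg_sigmoid))
print("eps_g, ReLU student-teacher:     ", train(g_relu, dg_relu))

In this sketch, sweeping alpha and recording eps_g would trace out a learning curve of the kind discussed in the abstract; the paper itself obtains such curves analytically and checks them against Monte Carlo simulations.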

Authors

Elisa Oostwal, Michiel Straat, Michael Biehl
