Article

Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization

Journal

Physical Review X
Volume 11, Issue 3, Article 031059

Publisher

American Physical Society
DOI: 10.1103/PhysRevX.11.031059

Keywords

Statistical Physics


This study examines the statistical mechanics of learning in deep linear neural networks, shedding light on their nonlinear learning behavior and on how the generalization error depends on network width, depth, and training-set size. By introducing the backpropagating kernel renormalization, which integrates out the network weights incrementally layer by layer, the authors evaluate important network properties exactly and gain insight into the emergent properties of the neural representations across the hidden layers.
The groundbreaking success of deep learning in many real-world tasks has triggered an intense effort to theoretically understand the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work, we study the statistical mechanics of learning in deep linear neural networks (DLNNs) in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is highly nonlinear; hence, studying its properties reveals some of the essential features of nonlinear deep neural networks (DNNs). Importantly, we exactly solve the network properties following supervised learning using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the backpropagating kernel renormalization (BPKR), which allows for the incremental integration of the network weights layer by layer starting from the network output layer and progressing backward until the first layer's weights are integrated out. This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. BPKR does not assume specific statistics of the input or the task's output. Furthermore, by performing partial integration of the layers, the BPKR allows us to compute the emergent properties of the neural representations across the different hidden layers. We propose a heuristic extension of the BPKR to nonlinear DNNs with rectified linear units (ReLU). Surprisingly, our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks of modest depth, in a wide regime of parameters. Our work is the first exact statistical mechanical study of learning in a family of deep neural networks, and the first successful theory of learning through the successive integration of degrees of freedom in the learned weight space.
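To make the setting concrete, below is a minimal sketch in Python/NumPy (not the authors' BPKR calculation). It builds a deep linear network whose input-output map is a product of weight matrices, trains it by gradient descent with weight decay (a zero-temperature stand-in for the regularized Gibbs weight distribution described in the abstract), and finally computes the P x P kernel (Gram matrix) of each hidden layer's representation, the kind of object whose backward, layer-by-layer renormalization the BPKR theory tracks. All sizes, the toy teacher task, and the hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper):
P, N0, N, L = 50, 20, 100, 3            # training examples, input dim, hidden width, hidden layers
X = rng.standard_normal((P, N0))        # training inputs
w_teacher = rng.standard_normal(N0) / np.sqrt(N0)
y = X @ w_teacher                       # scalar targets from a toy linear teacher

# Hidden-layer weights W[0..L-1] and readout vector a; every unit is linear.
W = [rng.standard_normal((N0, N)) / np.sqrt(N0)]
W += [rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(L - 1)]
a = rng.standard_normal(N) / np.sqrt(N)

def forward(X):
    """Linear units: each layer is just a matrix product; returns all layer activations."""
    hs = [X]
    for Wl in W:
        hs.append(hs[-1] @ Wl)
    return hs, hs[-1] @ a

lr, wd = 5e-2, 1e-3                      # learning rate and L2 (weight-decay) strength
for step in range(3000):
    hs, out = forward(X)
    err = out - y                        # training residual
    # Backpropagated gradients couple all layers, so learning is nonlinear in the weights
    # even though the network's input-output map is linear.
    delta = err[:, None] * a[None, :] / P        # dLoss / d(last hidden activations)
    a = a - lr * (hs[-1].T @ err / P + wd * a)
    for l in reversed(range(L)):
        grad_W = hs[l].T @ delta + wd * W[l]
        delta = delta @ W[l].T           # propagate backward before overwriting W[l]
        W[l] = W[l] - lr * grad_W

# Emergent representations: the P x P kernel (Gram matrix) of each hidden layer.
hs, out = forward(X)
kernels = [h @ h.T / h.shape[1] for h in hs[1:]]
print("training MSE:", float(np.mean((out - y) ** 2)))
print("hidden-kernel norms:", [float(np.linalg.norm(K)) for K in kernels])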

