Article

Unified field theoretical approach to deep and recurrent neuronal networks

Journal

Journal of Statistical Mechanics: Theory and Experiment

Publisher

IOP Publishing Ltd
DOI: 10.1088/1742-5468/ac8e57

Keywords

information processing; machine learning; network dynamics; statistical inference

Funding

  1. European Union's Horizon 2020 research and innovation program [945539]
  2. Helmholtz Association Initiative and Networking Fund [SO-092]
  3. German Federal Ministry of Education and Research (BMBF) [01IS19077A]
  4. Excellence Initiative of the German federal and state governments [ERS PF-JARA-SDS005]

Summary

Understanding the capabilities and limitations of different network architectures is crucial for machine learning. Using Bayesian inference and mean-field theory, the study uncovers the connections and differences between recurrent and deep networks. It finds that recurrent networks typically converge more slowly towards the mean-field limit than deep networks, and that the convergence speed depends on the parameters of the weight prior as well as on the depth or the number of time steps, respectively.

Abstract

Understanding the capabilities and limitations of different network architectures is of fundamental importance to machine learning. Bayesian inference on Gaussian processes has proven to be a viable approach for studying recurrent and deep networks in the limit of infinite layer width, n -> infinity. Here we present a unified and systematic derivation of the mean-field theory for both architectures that starts from first principles by employing established methods from the statistical physics of disordered systems. The theory elucidates that, while the mean-field equations differ with regard to their temporal structure, they nevertheless yield identical Gaussian kernels when readouts are taken at a single time point or layer, respectively. Bayesian inference applied to classification then predicts identical performance and capabilities for the two architectures. Numerically, we find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks, and that the convergence speed depends non-trivially on the parameters of the weight prior as well as on the depth or number of time steps, respectively. Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in 1/n, and we compute next-to-leading-order corrections, which turn out to be architecture-specific. The formalism thus paves the way to investigate the fundamental differences between recurrent and deep architectures at finite width n.
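
For readers who want to see the kernel statement above in concrete form, the following is a minimal numerical sketch (not the authors' code) of the mean-field kernel recursion referred to in the abstract. It assumes an erf nonlinearity, i.i.d. zero-mean Gaussian weight and bias priors with variances sigma_w^2/n and sigma_b^2, an input that enters only at the first layer or time step, and illustrative parameter values; the helper names erf_gauss_expectation, mean_field_kernel and empirical_kernel are made up for this sketch. Under these assumptions, the same recursion gives the single-readout Gaussian-process kernel of both a deep network of depth L and a recurrent network run for L time steps, and Monte Carlo estimates from finite-width random networks can be compared against it to observe the (typically slower) convergence of the recurrent architecture.

    import numpy as np
    from scipy.special import erf


    def erf_gauss_expectation(kaa, kab, kbb):
        # Closed form of E[erf(u) erf(v)] for (u, v) ~ N(0, [[kaa, kab], [kab, kbb]]).
        return (2.0 / np.pi) * np.arcsin(
            2.0 * kab / np.sqrt((1.0 + 2.0 * kaa) * (1.0 + 2.0 * kbb))
        )


    def mean_field_kernel(x, steps, sigma_w2=1.5, sigma_b2=0.05):
        # Iterate K_{l+1} = sigma_b^2 + sigma_w^2 * E[phi(u) phi(v)], (u, v) ~ N(0, K_l).
        # Read out at a single layer (deep net) or a single time step (RNN):
        # both architectures share this recursion in the mean-field limit.
        K = sigma_b2 + sigma_w2 * (x @ x.T) / x.shape[1]  # kernel after the input layer
        for _ in range(steps):
            d = np.diag(K)
            K = sigma_b2 + sigma_w2 * erf_gauss_expectation(d[:, None], K, d[None, :])
        return K


    def empirical_kernel(x, steps, n=4000, sigma_w2=1.5, sigma_b2=0.05,
                         recurrent=False, seed=0):
        # Readout kernel of one random network of width n, estimated by averaging
        # over hidden units.  recurrent=True reuses a single weight matrix across
        # all steps (RNN); recurrent=False draws fresh weights per layer (deep net).
        rng = np.random.default_rng(seed)
        n_in = x.shape[1]
        W_in = rng.standard_normal((n, n_in)) * np.sqrt(sigma_w2 / n_in)
        b = rng.standard_normal(n) * np.sqrt(sigma_b2)
        h = x @ W_in.T + b  # pre-activations after the input layer, shape (samples, n)
        W = rng.standard_normal((n, n)) * np.sqrt(sigma_w2 / n)
        for _ in range(steps):
            if not recurrent:
                W = rng.standard_normal((n, n)) * np.sqrt(sigma_w2 / n)
                b = rng.standard_normal(n) * np.sqrt(sigma_b2)
            h = erf(h) @ W.T + b
        return h @ h.T / n


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x = rng.standard_normal((4, 20))  # four inputs of dimension 20
        K_mf = mean_field_kernel(x, steps=8)
        K_deep = empirical_kernel(x, steps=8, recurrent=False)
        K_rnn = empirical_kernel(x, steps=8, recurrent=True)
        print("deep network deviation from mean field:", np.max(np.abs(K_deep - K_mf)))
        print("recurrent network deviation from mean field:", np.max(np.abs(K_rnn - K_mf)))

Note that this sketch does not compute the 1/n corrections mentioned in the abstract; the finite-width estimates merely illustrate that deviations from the Gaussian-process limit depend on the architecture and on the parameters of the weight prior.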

Authors
