Article

A mean field view of the landscape of two-layer neural networks

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.1806579115

Keywords

neural networks; stochastic gradient descent; gradient flow; Wasserstein space; partial differential equations

Funding

  1. NSF [DMS-1613091, CCF-1714305, IIS-1741162]
  2. Office of Technology Licensing Stanford Graduate Fellowship
  3. William R. Hewlett Stanford Graduate Fellowship

Abstract

Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for averaging out some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.
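
The setting described in the abstract is a two-layer network in the mean-field scaling, where the output is an average over hidden units and the parameters of each unit evolve under (noisy) SGD. The sketch below is an illustration of that setup, not the authors' code: the synthetic teacher data, tanh activation, noise level tau, learning rate, and all other hyperparameters are assumptions chosen only to make the example runnable.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's code): a two-layer network in the
# mean-field scaling f(x) = (1/N) * sum_i a_i * tanh(<w_i, x>), trained by noisy SGD
# on one fresh sample per step. In the large-N, small-step-size limit, the empirical
# distribution of the per-unit parameters theta_i = (a_i, w_i) is the object whose
# evolution the distributional dynamics (DD) PDE describes.

rng = np.random.default_rng(0)

d, N = 10, 500                          # input dimension, number of hidden units
sgd_steps, lr, tau = 20_000, 0.1, 1e-4  # illustrative hyperparameters; tau = noise level

# Synthetic teacher (assumption): isotropic Gaussian inputs, noiseless labels
w_star = rng.standard_normal(d) / np.sqrt(d)
teacher = lambda x: np.tanh(x @ w_star)

# Student parameters theta_i = (a_i, w_i)
a = rng.standard_normal(N)
W = rng.standard_normal((N, d)) / np.sqrt(d)

def forward(x):
    # mean-field scaling: average (1/N), not sum, over hidden units
    return (a * np.tanh(W @ x)).mean()

for _ in range(sgd_steps):
    x = rng.standard_normal(d)
    err = forward(x) - teacher(x)       # residual on one fresh sample
    h = np.tanh(W @ x)
    # per-unit gradients of the squared loss; the 1/N from the output scaling is
    # absorbed into the step-size convention, so each unit sees an O(1) update
    grad_a = err * h
    grad_W = err * (a * (1.0 - h**2))[:, None] * x[None, :]
    # plain SGD plus small Gaussian (Langevin-type) noise, as a stand-in for "noisy SGD"
    a -= lr * grad_a + np.sqrt(2 * lr * tau) * rng.standard_normal(N)
    W -= lr * grad_W + np.sqrt(2 * lr * tau) * rng.standard_normal((N, d))
```

In this toy run, the quantity of interest is not any individual (a_i, w_i) but their empirical distribution, which is what the PDE in the paper tracks as N grows and the step size shrinks.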

