Article

Why Does Deep and Cheap Learning Work So Well?

Journal

JOURNAL OF STATISTICAL PHYSICS
Volume 168, Issue 6, Pages 1223-1247

Publisher

SPRINGER
DOI: 10.1007/s10955-017-1836-5

Keywords

Artificial neural networks; Deep learning; Statistical physics

Funding

  1. Foundational Questions Institute
  2. Rothberg Family Fund for Cognitive Science
  3. NSF [1122374]

Abstract

We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics, such as symmetry, locality, compositionality, and polynomial log-probability, translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
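To make the multiplication result concrete, the sketch below approximates the product of two variables with a single hidden layer of four neurons, exploiting a nonlinearity with non-zero second derivative at the origin (softplus is used here for illustration). This is a minimal numerical sketch, not code from the paper; the function names, the choice of softplus, and the scale parameter lam are assumptions made for the example.

```python
import numpy as np

def softplus(x):
    # Smooth nonlinearity with non-zero second derivative at the origin:
    # sigma(x) = ln(1 + e^x), so sigma''(0) = 1/4.
    return np.log1p(np.exp(x))

def shallow_multiply(u, v, lam=0.01):
    """Approximate u*v with a single hidden layer of four neurons.

    Taylor-expanding sigma around 0 gives
        sigma(a) + sigma(-a) - sigma(b) - sigma(-b)
            = sigma''(0) * (a**2 - b**2) + O(lam**4),
    with a = lam*(u + v) and b = lam*(u - v), where a**2 - b**2 = 4*lam**2*u*v.
    Dividing by 4*sigma''(0)*lam**2 therefore recovers u*v as lam -> 0.
    """
    sigma_pp0 = 0.25                                             # sigma''(0) for softplus
    pre = lam * np.array([u + v, -(u + v), u - v, -(u - v)])     # four hidden pre-activations
    out_weights = np.array([1.0, 1.0, -1.0, -1.0])               # output-layer weights
    return out_weights @ softplus(pre) / (4 * sigma_pp0 * lam**2)

if __name__ == "__main__":
    u, v = 1.7, -2.3
    print(f"exact  : {u * v:.6f}")                     # -3.910000
    print(f"approx : {shallow_multiply(u, v):.6f}")    # within ~1e-3 for lam = 0.01
```

The abstract's no-flattening theorem is the converse statement: reproducing the product of n variables in one hidden layer needs exponentially many (2^n) neurons, whereas a deep network can compose pairwise product gates like the one above in a binary tree, using only on the order of n neurons.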

