Article

Understanding Deep Learning (Still) Requires Rethinking Generalization

Journal

COMMUNICATIONS OF THE ACM
Volume 64, Issue 3, Pages 107-115

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3446776


Although conventional explanations fall short of justifying the excellent generalization of large neural networks, experiments show that state-of-the-art convolutional networks can easily fit random labelings of their training data, indicating that a different mechanism underlies their strong performance in practice.
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small gap between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization and occurs even if we replace the true images with completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth-two neural networks already have perfect finite-sample expressivity as soon as the number of parameters exceeds the number of data points, as it usually does in practice. We interpret our experimental findings by comparison with traditional models. We supplement this republication with a new section at the end summarizing recent progress in the field since the original version of this paper.

