4.7 Article

Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent

Journal

Publisher

MICROTOME PUBL

Keywords

Neural networks; consistency; gradient descent; initialization; neural tangent kernel

Funding

  1. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [EXC 2075-390740016]

Abstract

We prove that two-layer (Leaky)ReLU networks initialized, e.g., by the widely used method proposed by He et al. (2015) and trained using gradient descent on a least-squares loss are not universally consistent. Specifically, we describe a large class of one-dimensional data-generating distributions for which, with high probability, gradient descent only finds a bad local minimum of the optimization landscape, since it is unable to move the biases far away from their initialization at zero. It turns out that in these cases, the found network essentially performs linear regression even if the target function is non-linear. We further provide numerical evidence that this happens in practical situations and for some multi-dimensional distributions, and that stochastic gradient descent exhibits similar behavior. We also provide empirical results on how the choice of initialization and optimizer can influence this behavior.
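As a concrete illustration of the setting in the abstract, the sketch below builds the architecture described there: a two-layer ReLU network with He et al. (2015) weight initialization, zero biases, and full-batch gradient descent on a least-squares loss for a one-dimensional non-linear target. It is a minimal sketch, not the paper's experimental code: it assumes PyTorch, and the width, learning rate, step count, and sine target are illustrative choices. After training, it prints how far the first-layer biases have moved from their zero initialization, which is the quantity the abstract's argument hinges on.

    # Minimal sketch of the setting from the abstract (not the paper's code).
    # Assumptions: PyTorch is available; width, learning rate, number of steps,
    # and the sine target are illustrative choices only.
    import torch

    torch.manual_seed(0)

    n, width = 256, 512
    x = torch.rand(n, 1) * 2 - 1          # one-dimensional inputs in [-1, 1]
    y = torch.sin(3 * x)                  # non-linear target function

    # Two-layer ReLU network f(x) = W2 relu(W1 x + b1) + b2 with
    # He et al. (2015) weight initialization and biases initialized at zero.
    W1 = torch.randn(width, 1) * (2.0 / 1) ** 0.5
    b1 = torch.zeros(width)
    W2 = torch.randn(1, width) * (2.0 / width) ** 0.5
    b2 = torch.zeros(1)
    params = [W1, b1, W2, b2]
    for p in params:
        p.requires_grad_(True)

    def model(x):
        return torch.relu(x @ W1.T + b1) @ W2.T + b2

    lr = 1e-3
    for step in range(5000):
        loss = ((model(x) - y) ** 2).mean()       # least-squares loss
        for p in params:
            p.grad = None
        loss.backward()
        with torch.no_grad():
            for p in params:
                p -= lr * p.grad                  # plain full-batch gradient descent

    # In the failure mode described in the abstract, the first-layer biases stay
    # close to their zero initialization and the fit is essentially linear.
    with torch.no_grad():
        final_loss = ((model(x) - y) ** 2).mean().item()
        max_bias = b1.abs().max().item()
    print("final training loss:", final_loss)
    print("max |b1| after training:", max_bias)

Whether the biases actually remain stuck near zero depends on the data-generating distribution and the random seed; the paper characterizes a class of one-dimensional distributions for which, with high probability, they do, so that the trained network behaves essentially like linear regression.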


Reviews

Primary rating: 4.7 (insufficient ratings)

Secondary ratings (novelty, significance, scientific rigor): not yet rated