Article

Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers

Journal

NEURAL NETWORKS
Volume 164, Pages 382-394

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.04.028

Keywords

Generalization; Deep learning; Stochastic gradient descent; Stability


We prove new generalization bounds for stochastic gradient descent when training classifiers with invariances. Our analysis is based on the stability framework and covers both the convex case of linear classifiers and the non-convex case of homogeneous neural networks. We analyze stability with respect to the normalized version of the loss function used for training. This leads to investigating a form of angle-wise stability instead of Euclidean stability in weights. For neural networks, the measure of distance we consider is invariant to rescaling the weights of each layer. Furthermore, we exploit the notion of on-average stability in order to obtain a data-dependent quantity in the bound. This data-dependent quantity is seen to be more favorable when training with larger learning rates in our numerical experiments. This might help to shed some light on why larger learning rates can lead to better generalization in some practical scenarios. (c) 2023 Elsevier Ltd. All rights reserved.
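
The abstract's central objects, a loss evaluated on normalized weights, an angle-wise notion of distance between iterates, and a distance between homogeneous networks that ignores per-layer rescaling, can be made concrete with a small numerical sketch. The snippet below is an illustration under our own assumptions: the function names, the logistic loss on the normalized margin, and the unit-Frobenius-norm layer normalization are illustrative choices, not the paper's exact definitions.

```python
import numpy as np

def angular_distance(w1, w2):
    """Angle between two weight vectors; unchanged if either vector is rescaled
    by a positive constant, unlike the Euclidean distance ||w1 - w2||."""
    cos = np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def normalized_margin_loss(w, x, y):
    """Logistic loss on the normalized margin y * <w, x> / ||w|| for a linear
    classifier; stability is measured with respect to a normalized loss of this
    kind rather than the raw training loss."""
    margin = y * np.dot(w, x) / np.linalg.norm(w)
    return np.log1p(np.exp(-margin))

def layerwise_normalized_distance(layers_a, layers_b):
    """Illustrative distance between two homogeneous networks that is invariant
    to rescaling the weights of each layer: every layer is normalized to unit
    Frobenius norm before the Euclidean comparison. (Hypothetical construction
    for illustration; the paper's exact distance measure may differ.)"""
    total = 0.0
    for Wa, Wb in zip(layers_a, layers_b):
        total += np.linalg.norm(Wa / np.linalg.norm(Wa) - Wb / np.linalg.norm(Wb)) ** 2
    return np.sqrt(total)

# Example: rescaling the weight vector leaves the normalized loss unchanged,
# so two SGD runs that differ only by a rescaling are "stable" in this sense.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x, y = rng.normal(size=5), 1
print(np.isclose(normalized_margin_loss(w, x, y), normalized_margin_loss(3.0 * w, x, y)))
```

Because every quantity is computed after normalization, rescaling a weight vector (or, in the layer-wise variant, any single layer) leaves the outputs unchanged, which is the kind of invariance the angle-wise stability analysis is built around.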
