Article

Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm

Journal

MATHEMATICAL PROGRAMMING
Volume 155, Issue 1-2, Pages 549-573

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s10107-015-0864-7

Keywords

Distribution reweighting; Importance sampling; Kaczmarz method; Stochastic gradient descent

Funding

  1. Simons Foundation Collaboration grant
  2. NSF CAREER [1348721]
  3. Alfred P. Sloan Fellowship
  4. Google Research Award
  5. ONR [N00014-12-1-0743]
  6. AFOSR Young Investigator Program Award
  7. NSF CAREER award

Abstract

We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning (L/μ)² (where L is a bound on the smoothness and μ on the strong convexity) to a linear dependence on L/μ. Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence on the average smoothness, dominating previous results. We also discuss importance sampling for SGD more broadly and show how it can improve convergence in other scenarios as well. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods. In particular, we recast the randomized Kaczmarz algorithm as an instance of SGD, and apply our results to prove its exponential convergence, but to the solution of a weighted least squares problem rather than the original least squares problem. We then present a modified Kaczmarz algorithm with partially biased sampling which does converge to the original least squares solution with the same exponential convergence rate.
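To make the two sampling schemes in the abstract concrete, here is a minimal NumPy sketch, not the authors' reference code: plain randomized Kaczmarz with rows drawn proportionally to their squared norms (the importance-sampling distribution of Strohmer and Vershynin), next to an SGD variant using a partially biased distribution that mixes uniform and row-norm sampling. The function names, the 50/50 mixing weight, the step size choice, and the test problem are illustrative assumptions; the 1/(n·p_i) factor keeps the stochastic gradient unbiased, which is why this variant targets the original rather than a weighted least squares solution.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=5000, rng=None):
    """Kaczmarz with rows sampled proportionally to squared row norms;
    each step projects the iterate onto the chosen equation's hyperplane."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = A.shape
    row_sq = np.sum(A**2, axis=1)
    p = row_sq / row_sq.sum()                # importance-sampling distribution
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        x += (b[i] - A[i] @ x) / row_sq[i] * A[i]   # project onto row i
    return x

def partially_biased_sgd(A, b, n_iters=5000, rng=None):
    """SGD on (1/2n)||Ax - b||^2 with the partially biased distribution
    p_i = (1/2)(1/n) + (1/2)||a_i||^2 / ||A||_F^2 (50/50 mix is an
    assumption); reweighting by 1/(n p_i) keeps the gradient unbiased."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = A.shape
    row_sq = np.sum(A**2, axis=1)
    fro_sq = row_sq.sum()
    p = 0.5 / n + 0.5 * row_sq / fro_sq
    # Conservative step size (assumption): with this gamma, the effective
    # step gamma/(n p_i) times ||a_i||^2 is at most 1, so each update is
    # an under-relaxed projection and the iteration cannot overshoot.
    gamma = n / (2.0 * fro_sq)
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        grad_i = (A[i] @ x - b[i]) * A[i]    # gradient of f_i
        x -= gamma / (n * p[i]) * grad_i     # importance-weight correction
    return x

# Example on a consistent system with badly scaled rows, where both
# variants converge to the true solution.
rng = np.random.default_rng(1)
scales = rng.uniform(0.5, 5.0, size=(200, 1))
A = scales * rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))
print(np.linalg.norm(partially_biased_sgd(A, b) - x_true))
```

On a consistent system both methods reach the solution; the contrast the paper draws appears for inconsistent (noisy) systems, where pure squared-row-norm sampling converges to a weighted least squares solution while the partially biased scheme keeps the original least squares solution as its target.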
