Article

ADAPTIVE SAMPLING STRATEGIES FOR STOCHASTIC OPTIMIZATION

Journal

SIAM JOURNAL ON OPTIMIZATION
Volume 28, Issue 4, Pages 3312-3343

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/17M1154679

Keywords

sample selection; stochastic optimization; machine learning

Funding

  1. Office of Naval Research [N00014-14-1-0313 P00003]
  2. National Science Foundation [DMS-0810213, DMS-1620070]
  3. U.S. Department of Energy (DOE) [DE-FG02-87ER25047]

Abstract

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations. Unlike other variance reduction techniques that either require additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase the sample size is governed by an inner product test that ensures that search directions are descent directions with high probability. We show that the inner product test improves upon the well-known norm test, and can be used as a basis for an algorithm that is globally convergent on nonconvex functions and enjoys a global linear rate of convergence on strongly convex functions. Numerical experiments on logistic regression and nonlinear least squares problems illustrate the performance of the algorithm.
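The abstract's core idea, growing the sample size only when a variance test on the sampled gradients fails, can be illustrated with a minimal sketch. This is not the authors' implementation: the test threshold `theta`, the doubling growth rule, the step size, and the synthetic least-squares problem are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic strongly convex problem: f(x) = (1/2n) ||Ax - b||^2
n, d = 1000, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_samples(x, idx):
    """Per-sample gradients a_i (a_i^T x - b_i) for the indices in idx."""
    r = A[idx] @ x - b[idx]
    return A[idx] * r[:, None]          # shape (len(idx), d)

def inner_product_test(g_batch, g_bar, theta=0.9):
    """Illustrative inner product test: the sampled mean of <g_i, g_bar>
    should concentrate around ||g_bar||^2, so that -g_bar is a descent
    direction with high probability. theta is an assumed tuning constant."""
    s = g_batch.shape[0]
    ips = g_batch @ g_bar               # <g_i, g_bar> for each sample
    var_of_mean = ips.var(ddof=1) / s   # variance of the averaged inner product
    return var_of_mean <= theta**2 * np.dot(g_bar, g_bar) ** 2

x = np.zeros(d)
batch, alpha = 8, 0.05
for k in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    g = grad_samples(x, idx)
    g_bar = g.mean(axis=0)
    if not inner_product_test(g, g_bar):
        batch = min(2 * batch, n)       # increase the sample size as needed
    x -= alpha * g_bar                  # plain stochastic gradient step
```

Unlike SVRG-style variance reduction, this sketch stores nothing beyond the current batch and never computes a full gradient unless the batch size grows all the way to `n`.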
