Article

ADAPTIVE SEQUENTIAL SAMPLE AVERAGE APPROXIMATION FOR SOLVING TWO-STAGE STOCHASTIC LINEAR PROGRAMS

Journal

SIAM JOURNAL ON OPTIMIZATION
Volume 31, Issue 1, Pages 1017-1048

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/19M1244469

Keywords

two-stage stochastic programming; sample average approximation; retrospective approximation; sequential sampling

Funding

  1. Office of Naval Research (ONR) [N000141712295]
  2. National Science Foundation [CMMI 1538050]
  3. National Science Foundation (NSF) [CMMI 1854960]
  4. U.S. Department of Defense (DOD) [N000141712295]

Abstract

This study introduces an adaptive sequential SAA algorithm for solving large-scale two-stage stochastic linear programs. The algorithm achieves favorable performance through a sequential framework with an optimal sample size schedule and the use of warm starts; extensive numerical tests demonstrate its success, and it returns a solution with a probabilistic guarantee on quality.
We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage stochastic linear programs. The iterative algorithm framework we propose is organized into outer and inner iterations as follows: during each outer iteration, a sample-path problem is implicitly generated using a sample of observations or scenarios, and solved only imprecisely, to within a tolerance that is chosen adaptively by balancing the estimated statistical error against solution error. The solutions from prior iterations serve as warm starts to aid efficient solution of the (piecewise linear convex) sample-path optimization problems generated on subsequent iterations. The generated scenarios can be independent and identically distributed, or dependent, as in Monte Carlo generation using Latin-hypercube sampling, antithetic variates, or randomized quasi-Monte Carlo. We first characterize the almost-sure convergence (and convergence in mean) of the optimality gap and the distance of the generated stochastic iterates to the true solution set. We then characterize the corresponding iteration complexity and work complexity rates as a function of the sample size schedule, demonstrating that the best achievable work complexity rate is Monte Carlo canonical and analogous to the generic O(ε⁻²) optimal complexity for nonsmooth convex optimization. We report extensive numerical tests that indicate favorable performance, due primarily to the use of a sequential framework with an optimal sample size schedule, and the use of warm starts. The proposed algorithm can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
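The outer/inner structure described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it applies the sequential SAA pattern (geometrically growing scenario samples, an inner tolerance balanced against an estimated statistical error, and warm starts) to a hypothetical one-dimensional newsvendor-style two-stage problem. The cost parameters, demand distribution, sample-size schedule, error-balancing heuristic, and bisection inner solver are all assumptions made for the example.

```python
import numpy as np

# A hypothetical newsvendor-style instance (NOT one of the paper's test
# problems): order x >= 0 at unit cost c in stage one; pay p per unit of
# unmet random demand d in stage two.  The SAA objective
#   f_n(x) = c*x + (p/n) * sum_i max(d_i - x, 0)
# is piecewise linear and convex, so a subgradient is available in closed form.
c, p = 1.0, 3.0

def subgrad(x, demands):
    """Subgradient of the sample-average objective at x (nondecreasing in x)."""
    return c - p * (demands > x).mean()

def solve_saa(x0, demands, tol):
    """Inexact inner solve: bracket the subgradient's sign change starting
    from the warm start x0, then bisect until the bracket width is < tol."""
    lo, hi, step = x0, x0, 1.0
    while lo > 0 and subgrad(lo, demands) > 0:   # grow the bracket downward
        lo, step = max(lo - step, 0.0), 2.0 * step
    step = 1.0
    while subgrad(hi, demands) < 0:              # grow the bracket upward
        hi, step = hi + step, 2.0 * step
    while hi - lo > tol:                         # bisect to the tolerance
        mid = 0.5 * (lo + hi)
        if subgrad(mid, demands) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def adaptive_sequential_saa(n0=50, growth=1.5, target_gap=1.0, seed=0):
    """Outer loop: geometrically growing scenario samples, an inner
    tolerance balanced against the estimated statistical error, and a
    warm start from the previous outer iterate."""
    rng = np.random.default_rng(seed)
    x, n = 0.0, float(n0)
    for _ in range(25):                                # outer iterations
        demands = rng.exponential(10.0, size=int(n))   # i.i.d. scenarios
        # Estimate the sampling error of the second-stage cost at x.
        sigma = p * np.maximum(demands - x, 0.0).std(ddof=1)
        stat_err = 2.0 * sigma / np.sqrt(n)
        # Inexact, warm-started solve; tol heuristically scaled to x-units.
        x = solve_saa(x, demands, stat_err / p)
        if 2.0 * stat_err < target_gap:                # finite-time stopping test
            break
        n *= growth                                    # geometric sample-size schedule
    return x
```

For this toy instance the true solution is the critical fractile x* = 10 ln 3 ≈ 11.0, which gives a sanity check on the returned iterate; the stopping constants and the heuristic tolerance scaling stand in for the paper's statistically grounded machinery.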
