Article

From error bounds to the complexity of first-order descent methods for convex functions

Journal

Mathematical Programming
Volume 165, Issue 2, Pages 471-507

Publisher

Springer Heidelberg
DOI: 10.1007/s10107-016-1091-6

Keywords

Error bounds; Convex minimization; Forward-backward method; KL inequality; Complexity of first-order methods; LASSO; Compressed sensing

Funding

  1. Air Force Office of Scientific Research, Air Force Materiel Command, USAF [FA9550-14-1-0056, FA9550-14-1-0500]
  2. FMJH Program Gaspard Monge in optimization and operations research
  3. ANR GAGA
  4. FONDECYT [1140829]
  5. Basal Project CMM Universidad de Chile
  6. Millennium Nucleus ICM/FIC [RC130003]
  7. Anillo Project [ACT-1106]
  8. ECOS-Conicyt Project [C13E03]
  9. Conicyt [Redes 140183]
  10. MathAmsud Project [15MAT-02]

Abstract

This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form O(q^k), where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with ℓ1 regularization.
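Since the abstract names ISTA applied to ℓ1-regularized least squares (the LASSO of the keywords), a short sketch may help fix ideas. The following minimal Python implementation of the forward-backward (shrinkage-thresholding) iteration is illustrative only: the synthetic data, the regularization weight lam, and the crude empirical estimate of the linear rate q are assumptions made for demonstration, not the authors' code or experiments.

import numpy as np

def ista(A, b, lam, n_iters=300):
    """Textbook ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth least squares term followed by the
    prox of (lam/L)*||.||_1 (soft thresholding), with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient (squared spectral norm)
    x = np.zeros(A.shape[1])
    objectives = []
    for _ in range(n_iters):
        z = x - A.T @ (A @ x - b) / L                           # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # backward (prox) step
        objectives.append(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
    return x, np.array(objectives)

# Synthetic placeholder data: a sparse signal observed through a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.05)) + 0.01 * rng.standard_normal(40)
x, obj = ista(A, b, lam=0.1)
gaps = obj - obj.min()                          # objective gap along the iterates
q_est = (gaps[200] / gaps[100]) ** (1 / 100)    # crude empirical estimate of the rate q
print(f"estimated linear rate q ≈ {q_est:.4f}")

On such a problem the objective gap typically decays geometrically, which is the qualitative behavior a bound of the form O(q^k) captures; in the paper, q and the constant in the O are derived from the error bound constants rather than estimated empirically as above.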

Authors

Jérôme Bolte, Trong Phong Nguyen, Juan Peypouquet, Bruce W. Suter
