Article

Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems

Journal

MATHEMATICAL PROGRAMMING
Volume 185, Issue 1-2, Pages 1-35

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s10107-019-01420-0

Keywords

Convex optimization; Saddle point problems; First-order methods; Information-based complexity; Lower complexity bound


Abstract

Many works have studied the complexity of first-order methods for solving a convex-concave bilinear saddle-point problem (SPP). These results are all upper complexity bounds, which determine how many iterations suffice to guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to methods whose iterates lie in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first study affinely constrained smooth convex optimization, a special case of SPP. Unlike the gradient method on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t²), and in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t²) is the best possible convergence rate, whereas gradient methods achieve linear convergence on unconstrained problems. We then extend these results to general SPPs. Our lower complexity bounds match several established upper complexity bounds in the literature; they are therefore tight and indicate the optimality of several existing first-order methods.
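For context, the bilinear SPP studied in this line of work is commonly written in the following generic form (the notation below is illustrative; the paper's precise assumptions may differ):

\min_{x \in X} \max_{y \in Y} \; f(x) + \langle Ax, y \rangle - g(y),

where f and g are convex functions and A is a linear operator. The affinely constrained problem mentioned in the abstract,

\min_{x} \; f(x) \quad \text{subject to} \quad Ax = b,

arises as the special case g(y) = \langle b, y \rangle with Y the whole space, since the inner maximization over y then enforces Ax = b. Read this way, the abstract's results state that after t first-order oracle calls the worst-case error is \Omega(1/t) when f is merely convex and \Omega(1/t^2) when f is strongly convex. The abstract does not name the matching methods; primal-dual first-order methods of Chambolle-Pock type are a standard example of schemes attaining O(1/t) and O(1/t²) upper bounds in this setting.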

Authors

Yuyuan Ouyang; Yangyang Xu
