Article

Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations

Journal

MATHEMATICAL METHODS IN THE APPLIED SCIENCES
Volume 46, Issue 6, Pages 7108-7124

Publisher

WILEY
DOI: 10.1002/mma.8958

Keywords

diminishing step size; dynamic step size; incremental subgradient algorithm; separable convex optimization


This paper investigates the use of dynamic step sizes in the incremental subgradient algorithm for minimizing the sum of a large number of convex functions. Two modified dynamic step size rules are proposed and their convergence and complexity properties are analyzed. Experimental results show that these two algorithms converge faster and more stably than the previous ones, especially for solving large separable convex optimization problems.
We consider the incremental subgradient algorithm employing dynamic step sizes for minimizing the sum of a large number of component convex functions. The dynamic step size rule was first introduced by Goffin and Kiwiel [Math. Program., 1999, 85(1): 207-211] for the subgradient algorithm and, soon afterwards, adapted to the incremental subgradient algorithm by Nedic and Bertsekas in [SIAM J. Optim., 2001, 12(1): 109-138]. It has been observed experimentally that the incremental approach is very successful in solving large separable optimization problems and that dynamic step sizes generally have better computational performance than the other step size rules in the literature. In the present paper, we propose two modified dynamic step size rules for the incremental subgradient algorithm and analyse their convergence and complexity properties. Finally, we consider the assignment problem and apply incremental subgradient algorithms employing different kinds of dynamic step sizes to solve it. The computational experiments show that the two modified rules converge dramatically faster and more stably than the corresponding rule in [SIAM J. Optim., 2001, 12(1): 109-138]. In particular, for solving large separable convex optimization problems we strongly recommend the second rule (Algorithm 3.3 in the paper), since it performs well computationally and is the simplest of the rules.
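The incremental scheme the abstract describes can be illustrated with a short sketch. The version below is a simplified, hypothetical illustration, not the paper's Algorithm 3.3: it assumes the optimal value is known and passed in as `f_target`, whereas the dynamic rules of Goffin-Kiwiel and Nedic-Bertsekas (and the two modifications proposed in the paper) replace it with an adaptively updated level estimate. The function name, the subgradient-norm bound `C`, and the step scaling are our assumptions for illustration.

```python
import numpy as np

def incremental_subgradient(funcs, subgrads, x0, f_target, C,
                            gamma=1.0, num_cycles=200):
    """Incremental subgradient method with a dynamic (Polyak-type) step.

    Minimizes f(x) = sum_i funcs[i](x) by cycling through the components
    and stepping along one component subgradient at a time. The step size
    is recomputed each cycle from the gap between f(x_k) and the level
    `f_target`; `C` is an assumed upper bound on the component
    subgradient norms.
    """
    x = np.asarray(x0, dtype=float)
    m = len(funcs)
    for _ in range(num_cycles):
        f_val = sum(f(x) for f in funcs)  # f(x_k) = sum_i f_i(x_k)
        # Dynamic step: proportional to the gap to the (known) level,
        # scaled by the bound m*C on the full subgradient norm.
        alpha = max(gamma * (f_val - f_target) / (m * C) ** 2, 0.0)
        for g in subgrads:                # one incremental pass
            x = x - alpha * g(x)          # update after each component
    return x
```

For example, minimizing the separable sum of absolute values sum_i |x - a_i| with this sketch drives the iterate toward the median of the a_i, since the step shrinks as the gap f(x_k) - f_target closes.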

