Journal
MATHEMATICAL PROGRAMMING
Volume 176, Issue 1-2, Pages 497-544
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s10107-018-01357-w
Funding
- USA National Science Foundation [CIF 1564044, CIF 1719205]
- Office of Naval Research [N00014-16-1-2244]
- Army Research Office [W911NF1810238]
Abstract
This paper considers nonconvex distributed constrained optimization over networks, modeled as directed (possibly time-varying) graphs. We introduce the first algorithmic framework for the minimization of the sum of a smooth nonconvex (nonseparable) function (the agents' sum-utility) plus a difference-of-convex function (with a nonsmooth convex part). This general formulation arises in many applications, from statistical machine learning to engineering. The proposed distributed method combines successive convex approximation techniques with a judiciously designed perturbed push-sum consensus mechanism that aims to track locally the gradient of the (smooth part of the) sum-utility. A sublinear convergence rate is proved when a fixed step-size (possibly different among the agents) is employed, whereas asymptotic convergence to stationary solutions is proved under a diminishing step-size. Numerical results show that our algorithms compare favorably with current schemes on both convex and nonconvex problems.
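To illustrate the flavor of the method, the following is a minimal sketch of push-sum gradient tracking over a directed graph with a fixed step-size. It is not the paper's algorithm (the paper handles nonconvex objectives via successive convex approximation and a perturbed push-sum scheme); this is the simpler Push-DIGing-style update on smooth local quadratics, with the graph, step-size, and local costs chosen purely for illustration.

```python
import numpy as np

def push_sum_gradient_tracking(grads, C, x0, alpha=0.05, iters=2000):
    """Push-DIGing-style update: C is column-stochastic (directed graph),
    grads is a list of local gradient callables, alpha a fixed step-size."""
    n = len(grads)
    u = np.array(x0, dtype=float)   # push-sum numerators
    v = np.ones(n)                  # push-sum weights (correct directed bias)
    z = u / v                       # de-biased local iterates
    y = np.array([g(zi) for g, zi in zip(grads, z)])  # gradient trackers
    for _ in range(iters):
        u = C @ (u - alpha * y)
        v = C @ v
        z_new = u / v
        # tracker update: y locally tracks the average gradient
        y = C @ y + np.array([g(zn) for g, zn in zip(grads, z_new)]) \
                  - np.array([g(zo) for g, zo in zip(grads, z)])
        z = z_new
    return z

# Toy directed network: 4-agent ring plus one chord, self-loops included.
n = 4
A = np.zeros((n, n))
for src, dst in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    A[dst, src] = 1.0
np.fill_diagonal(A, 1.0)
C = A / A.sum(axis=0, keepdims=True)   # make columns sum to 1

# Local costs f_i(x) = 0.5*(x - b_i)^2; the global minimizer is mean(b).
b = np.array([1.0, 3.0, -2.0, 6.0])
grads = [lambda x, bi=bi: x - bi for bi in b]
z = push_sum_gradient_tracking(grads, C, x0=np.zeros(n))
# all agents approach mean(b) despite only directed, local communication
```

The push-sum weights `v` are what de-bias the column-stochastic (rather than doubly stochastic) mixing, which is why such schemes work on directed graphs where doubly stochastic weights may be unavailable.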
Authors