Journal: MATHEMATICAL PROGRAMMING
Volume 176, Issue 1-2, Pages 497-544
Publisher: SPRINGER HEIDELBERG
DOI: 10.1007/s10107-018-01357-w
Funding
- USA National Science Foundation [CIF 1564044, CIF 1719205]
- Office of Naval Research [N00014-16-1-2244]
- Army Research Office [W911NF1810238]
Abstract
This paper considers nonconvex distributed constrained optimization over networks, modeled as directed (possibly time-varying) graphs. We introduce the first algorithmic framework for the minimization of the sum of a smooth nonconvex (nonseparable) function (the agents' sum-utility) plus a difference-of-convex function (with a nonsmooth convex part). This general formulation arises in many applications, from statistical machine learning to engineering. The proposed distributed method combines successive convex approximation techniques with a judiciously designed perturbed push-sum consensus mechanism that aims to track locally the gradient of the (smooth part of the) sum-utility. A sublinear convergence rate is proved when a fixed step size (possibly different among the agents) is employed, whereas asymptotic convergence to stationary solutions is proved under a diminishing step size. Numerical results show that our algorithms compare favorably with current schemes on both convex and nonconvex problems.
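To illustrate the push-sum gradient-tracking idea the abstract refers to, below is a minimal sketch on a toy smooth convex problem (local quadratics f_i(x) = (x - b_i)^2 / 2, so the minimizer of the sum is the mean of the b_i). This is only in the spirit of the paper's perturbed push-sum mechanism: it omits the successive convex approximation surrogates and the nonsmooth difference-of-convex term, and all names (A, u, phi, y) are illustrative, not the paper's notation. The mixing matrix is column-stochastic, as push-sum requires for directed graphs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
b = rng.standard_normal(n)            # local data; the global minimizer is b.mean()
grad = lambda x: x - b                # stacked local gradients: f_i'(x_i) = x_i - b_i

# Column-stochastic weights for a directed ring with self-loops
# (each agent keeps half its mass and pushes half to its out-neighbor).
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5           # agent j pushes to agent j+1

u = rng.standard_normal(n)            # push-sum numerators
phi = np.ones(n)                      # push-sum weights (correct directed-graph bias)
x = u / phi                           # de-biased local estimates
y = grad(x)                           # gradient trackers, initialized at local gradients
alpha = 0.05                          # fixed step size, common to all agents here

for _ in range(5000):
    u = A @ (u - alpha * y)           # local descent step, then push along out-edges
    phi = A @ phi                     # push-sum weight recursion
    x_new = u / phi                   # ratio removes the push-sum bias
    y = A @ y + grad(x_new) - grad(x) # dynamic average consensus on the gradients
    x = x_new

print(np.allclose(x, b.mean(), atol=1e-3))
```

Because A is column-stochastic, the sum of the trackers y is preserved across the consensus step, so y tracks the average of the local gradients; at a consensus fixed point that average is zero and all agents agree on the minimizer.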