Article

Fenchel Dual Gradient Methods for Distributed Convex Optimization Over Time-Varying Networks

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 64, Issue 11, Pages 4629-4636

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TAC.2019.2901829

Keywords

Gradient methods; Convergence; Convex functions; Standards; Linear programming; Distributed optimization; Fenchel duality; multi-agent optimization

Funding

  1. National Natural Science Foundation of China [61603254]
  2. Natural Science Foundation of Shanghai [16ZR1422500]

Abstract

We develop a family of Fenchel dual gradient methods for solving constrained, strongly convex, but not necessarily smooth multi-agent optimization problems over time-varying networks. The proposed algorithms are constructed on the basis of weighted Fenchel dual gradients and can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all the agents to both primal and dual optimality at sublinear rates under a standard connectivity condition. Compared with existing distributed optimization methods that also have convergence rate guarantees over time-varying networks, our algorithms are able to address constrained problems and scale better with respect to network size and the time needed to reach connectivity. The competitive performance of the Fenchel dual gradient methods is demonstrated via simulations.
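To make the dual-gradient idea concrete, the following is a minimal sketch of classical dual decomposition with gradient ascent for a consensus-constrained problem — an illustration of the general technique the paper builds on, not the authors' weighted Fenchel dual gradient algorithm. The quadratic local objectives, the path graph, the step size, and all numerical values below are made-up illustration parameters.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): solve
#   minimize  sum_i f_i(x_i)   subject to  x_i = x_j on each edge (i, j),
# with strongly convex local costs f_i(x) = 0.5 * (x - b_i)^2, by running
# gradient ascent on the dual variables attached to the edge constraints.

b = np.array([1.0, 4.0, 7.0, 2.0])   # local data; consensus optimum is b.mean()
edges = [(0, 1), (1, 2), (2, 3)]     # path graph (assumed connected)
lam = np.zeros(len(edges))           # one dual variable per edge constraint
alpha = 0.3                          # dual step size (chosen small enough here)

for _ in range(2000):
    # Primal step: each agent minimizes its local Lagrangian in closed form,
    #   x_i = argmin_x 0.5*(x - b_i)^2 + x * (signed sum of incident duals),
    # which gives x_i = b_i - g_i.
    g = np.zeros_like(b)
    for e, (i, j) in enumerate(edges):
        g[i] += lam[e]
        g[j] -= lam[e]
    x = b - g
    # Dual step: gradient ascent on the dual function, i.e. each edge's
    # multiplier grows in proportion to the disagreement across that edge.
    for e, (i, j) in enumerate(edges):
        lam[e] += alpha * (x[i] - x[j])

print(x)  # all agents approach the consensus optimum b.mean() = 3.5
```

Each update uses only quantities available to an agent and its neighbors, which is what makes this family of methods decentralized; the paper's contribution lies in weighting the Fenchel dual gradients so that sublinear primal and dual rates hold even when the network is time-varying.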

