Article

Decentralized Dual Proximal Gradient Algorithms for Non-Smooth Constrained Composite Optimization Problems

Journal

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Volume 32, Issue 10, Pages 2594-2605

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2021.3072373

Keywords

Optimization; Signal processing algorithms; Power systems; Machine learning; Machine learning algorithms; Linear programming; Multi-agent systems; Convex optimization; synchronous and asynchronous decentralized algorithms; non-smooth constrained composite optimization problems; decentralized machine learning; DSLR problems; DERC problems

Funding

  1. Fundamental Research Funds for the Central Universities [XDJK2019AC001]
  2. Innovation Support Program for Chongqing Overseas Returnees [cx2019005]
  3. National Natural Science Foundation of China [61773321]


This article focuses on a class of totally non-smooth constrained composite optimization problems over multi-agent systems and presents synchronous and asynchronous decentralized dual proximal gradient algorithms. The theoretical and experimental results demonstrate that the algorithms can achieve the globally optimal solution to the problem.
Decentralized dual methods play a significant role in large-scale optimization, effectively resolving many constrained optimization problems in machine learning and power systems. In this article, we study a class of totally non-smooth constrained composite optimization problems over multi-agent systems, where the common goal of the agents is to minimize a sum of two separable non-smooth functions: a strongly convex function and another convex (not necessarily strongly convex) function. Agents perform parallel local computation and communication throughout the process without leaking their private information. To solve the totally non-smooth constrained composite optimization problem in a fully decentralized manner, we devise a synchronous decentralized dual proximal gradient (SynDe-DuPro) algorithm and its asynchronous version (AsynDe-DuPro), which is based on the randomized block-coordinate method. Both the SynDe-DuPro and AsynDe-DuPro algorithms are theoretically proved to achieve the globally optimal solution to the totally non-smooth constrained composite optimization problem, based on the quasi-Fejér monotonicity theorem. As a main result, the AsynDe-DuPro algorithm attains the globally optimal solution without requiring all agents to be activated at each iteration, and it is therefore more robust than most existing synchronous algorithms. The practicability of the proposed algorithms and the correctness of the theoretical findings are demonstrated by experiments on a constrained Decentralized Sparse Logistic Regression (DSLR) problem in machine learning and a Decentralized Energy Resources Coordination (DERC) problem in power systems.
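The core primitive behind such proximal gradient methods is the proximal operator of the non-smooth term. As a minimal centralized illustration only (not the authors' decentralized SynDe-DuPro/AsynDe-DuPro scheme), the sketch below runs proximal gradient iterations with the l1 proximal operator (soft-thresholding), the kind of step that arises in sparse problems like DSLR; the least-squares smooth term and all parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Proximal gradient (ISTA) sketch for: min_x 0.5*||A x - b||^2 + lam*||x||_1.
    # Illustrative stand-in for a composite objective; not the paper's dual algorithm.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the non-smooth term
    return x

# With A = I the iteration reduces to soft-thresholding of b directly.
x = proximal_gradient(np.eye(3), np.array([2.0, 0.1, -1.5]), lam=0.5, step=1.0)
print(x)  # -> [ 1.5  0.  -1. ]
```

In the decentralized setting described in the abstract, each agent would apply such proximal updates to its local dual variable and exchange information only with its neighbors; the asynchronous variant activates a random subset of agents (block coordinates) per iteration.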

