Article

A Penalty Alternating Direction Method of Multipliers for Convex Composite Optimization Over Decentralized Networks

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 69, Pages 4282-4295

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2021.3092347

Keywords

Optimization; Convergence; Convex functions; Signal processing algorithms; Eigenvalues and eigenfunctions; Scientific computing; Optimization methods; Decentralized optimization; alternating direction method of multipliers (ADMM); composite optimization

Funding

  1. National Natural Science Foundation of China [61973324]
  2. Guangdong Basic and Applied Basic Research Foundation [2021B1515020094]
  3. Fundamental Research Funds for the Central Universities
  4. Guangdong Province Key Laboratory of Computational Science [2020B1212060032]
  5. CUHK Research Sustainability of Major RGC Funding Schemes [3133236]

Abstract

Consider the problem of minimizing a sum of convex composite functions over a decentralized network, where each agent in the network holds a private function consisting of a smooth part and a nonsmooth part, and it can only exchange information with its neighbors during the optimization process. One approach to tackling this problem is to study its penalized approximation. Although such an approximation becomes more accurate as the penalty parameter becomes smaller, it is well known that the penalized objective will also become more ill-conditioned, thereby causing the popular proximal gradient descent method to slow down substantially. To break this accuracy-speed tradeoff, we propose to solve the penalized approximation with the alternating direction method of multipliers (ADMM). We also exploit the composite structure of the private functions by linearizing the smooth parts and handling the nonsmooth parts with proximal operators, which allows us to further reduce the computational costs. The proposed penalty ADMM (abbreviated as PAD) is proven to be sublinearly convergent when the private functions are convex, and linearly convergent when in addition the smooth parts are strongly convex. We present numerical results to corroborate the theoretical analyses and to further demonstrate the advantages of PAD over existing state-of-the-art algorithms such as DL-ADMM, PG-EXTRA, and NIDS.
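To make the penalized-approximation idea concrete, the following is a minimal sketch, assuming a toy decentralized lasso in which each agent holds a least-squares smooth part and an l1 nonsmooth part, connected by a ring network. It implements the proximal gradient baseline that the abstract contrasts PAD against, not the paper's PAD recursion; the names A, b, lam, rho, and alpha, the quadratic consensus penalty built from the graph Laplacian, and the step size are all illustrative assumptions.

    import numpy as np

    # Toy decentralized lasso: agent i holds f_i(x) = 0.5*||A_i x - b_i||^2 (smooth)
    # plus g_i(x) = lam*||x||_1 (nonsmooth). All problem data below is synthetic
    # and illustrative, not from the paper.
    rng = np.random.default_rng(0)
    n_agents, d, m = 5, 10, 20
    A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
    x_true = np.where(rng.random(d) < 0.3, rng.standard_normal(d), 0.0)
    b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
    lam = 0.1

    # Ring network: the graph Laplacian L encodes the consensus penalty
    # (1/(2*rho)) * sum over edges (i,j) of ||x_i - x_j||^2 (an assumed penalty form).
    L = 2 * np.eye(n_agents)
    for i in range(n_agents):
        L[i, (i + 1) % n_agents] = -1
        L[i, (i - 1) % n_agents] = -1

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1; handles the nonsmooth part."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def penalized_prox_grad(rho, alpha, iters=2000):
        """Proximal gradient descent on the penalized consensus objective.

        Smaller rho enforces consensus more accurately but makes the
        objective more ill-conditioned, forcing a smaller step size alpha.
        """
        X = np.zeros((n_agents, d))  # row i is agent i's local copy of x
        for _ in range(iters):
            grad_f = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_agents)])
            grad_pen = (1.0 / rho) * (L @ X)  # gradient of the consensus penalty
            X = soft_threshold(X - alpha * (grad_f + grad_pen), alpha * lam)
        return X

    X = penalized_prox_grad(rho=0.1, alpha=5e-3)
    print("max disagreement across agents:", np.abs(X - X.mean(axis=0)).max())

Shrinking rho in this sketch tightens consensus but inflates the penalty's Lipschitz constant, which in turn forces a smaller alpha and more iterations; this is precisely the accuracy-speed tradeoff that PAD is designed to break by applying (linearized) ADMM to the same penalized problem instead of proximal gradient descent.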

