Article

A Penalty Alternating Direction Method of Multipliers for Convex Composite Optimization Over Decentralized Networks

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 69, Pages 4282-4295

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2021.3092347

Keywords

Optimization; Convergence; Convex functions; Signal processing algorithms; Eigenvalues and eigenfunctions; Scientific computing; Optimization methods; Decentralized optimization; alternating direction method of multipliers (ADMM); composite optimization

Funding

  1. National Natural Science Foundation of China [61973324]
  2. Guangdong Basic and Applied Basic Research Foundation [2021B1515020094]
  3. Fundamental Research Funds for the Central Universities
  4. Guangdong Province Key Laboratory of Computational Science [2020B1212060032]
  5. CUHK Research Sustainability of Major RGC Funding Schemes [3133236]

Abstract

This paper investigates the problem of minimizing a sum of convex composite functions over a decentralized network. The proposed penalty ADMM method shows sublinear convergence for convex private functions and linear convergence when the smooth parts are strongly convex. Numerical results confirm the theoretical analyses and demonstrate the advantages of PAD over existing state-of-the-art algorithms.

Consider the problem of minimizing a sum of convex composite functions over a decentralized network, where each agent in the network holds a private function consisting of a smooth part and a nonsmooth part, and it can only exchange information with its neighbors during the optimization process. One approach to tackling this problem is to study its penalized approximation. Although such an approximation becomes more accurate as the penalty parameter becomes smaller, it is well known that the penalized objective will also become more ill-conditioned, thereby causing the popular proximal gradient descent method to slow down substantially. To break this accuracy-speed tradeoff, we propose to solve the penalized approximation with the alternating direction method of multipliers (ADMM). We also exploit the composite structure of the private functions by linearizing the smooth parts and handling the nonsmooth parts with proximal operators, which allows us to further reduce the computational costs. The proposed penalty ADMM (abbreviated as PAD) is proven to be sublinearly convergent when the private functions are convex, and linearly convergent when in addition the smooth parts are strongly convex. We present numerical results to corroborate the theoretical analyses and to further demonstrate the advantages of PAD over existing state-of-the-art algorithms such as DL-ADMM, PG-EXTRA, and NIDS.
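For a concrete sense of the penalized approximation the abstract refers to: replacing the exact consensus constraint with a quadratic penalty on neighbor disagreement yields an unconstrained problem over local copies x_i, roughly min_X sum_i [f_i(x_i) + g_i(x_i)] + (1/(2*alpha)) X^T (I - W) X for a mixing matrix W. The sketch below runs the penalized proximal-gradient baseline (the method the abstract says slows down as the penalty parameter shrinks) on a toy decentralized LASSO problem. The ring network, mixing matrix W, step size alpha, and all problem data are illustrative assumptions; the paper's actual PAD updates (ADMM applied to the penalized objective, with linearized smooth parts and proximal steps) are not reproduced here.

```python
import numpy as np

# Illustrative sketch (NOT the paper's PAD): decentralized proximal gradient on a
# penalized approximation of  min_x  sum_i f_i(x) + g_i(x),  with
#   f_i(x) = 0.5 * ||A_i x - b_i||^2   (smooth part)
#   g_i(x) = lam * ||x||_1             (nonsmooth part, handled by its prox)
# Row i of X is agent i's local copy; W is a symmetric doubly stochastic mixing
# matrix, so the averaging step X <- W X implicitly penalizes disagreement.

rng = np.random.default_rng(0)
n_agents, dim, m = 4, 10, 20           # hypothetical problem sizes
A = [rng.standard_normal((m, dim)) for _ in range(n_agents)]
x_true = np.zeros(dim)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
lam = 0.1                              # l1 regularization weight (assumed)

# Ring topology with Metropolis-style weights (assumed network).
W = 0.5 * np.eye(n_agents)
for i in range(n_agents):
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

def grad_f(i, x):
    """Gradient of the smooth local loss f_i."""
    return A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

alpha = 1e-3                           # step size; doubles as the penalty parameter
X = np.zeros((n_agents, dim))
for _ in range(2000):
    mixed = W @ X                      # neighbor averaging (penalty-gradient step)
    for i in range(n_agents):
        X[i] = prox_l1(mixed[i] - alpha * grad_f(i, X[i]), alpha * lam)

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("agent 0 estimate:", np.round(X[0], 2))
```

Shrinking alpha enforces consensus more tightly but forces smaller, slower gradient steps; this is the accuracy-speed tradeoff that PAD, by applying ADMM to the same penalized objective, is designed to break.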
