Article

Distributed Proximal Gradient Algorithm for Nonconvex Optimization Over Time-Varying Networks

Journal

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCNS.2022.3213706

Keywords

Distributed proximal gradient algorithm; multiagent systems; nonconvex optimization; time-varying topology

Abstract

This article studies the distributed nonconvex optimization problem with nonsmooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of local objective functions, which consist of differentiable (possibly nonconvex) cost functions and nonsmooth convex functions. This article presents a distributed proximal gradient algorithm for the nonsmooth nonconvex optimization problem. Over time-varying multiagent networks, the proposed algorithm updates local variable estimates with a constant step-size at the cost of multiple consensus steps, where the number of communication rounds increases over time. We prove that the generated local variables achieve consensus and converge to the set of critical points. Finally, we verify the efficiency of the proposed algorithm by numerical simulations.
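The algorithmic recipe the abstract describes — consensus mixing over the network followed by a local proximal gradient step with a constant step-size, with the number of consensus rounds growing over time — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact method: the doubly stochastic mixing matrix `W`, the choice of an ℓ1 regularizer (whose prox is soft-thresholding), and the linearly growing round schedule `k + 1` are all assumptions made for the demo.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox operator of tau * ||.||_1, a standard example of the
    # nonsmooth convex part g_i (assumption: the paper allows general
    # convex g_i; we pick the l1 norm for concreteness).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad(grads, W, x0, step, lam, iters=100):
    """Sketch of one possible distributed proximal gradient loop.

    grads: list of per-agent gradient callables for the smooth
           (possibly nonconvex) local costs f_i.
    W:     doubly stochastic mixing matrix of the communication graph
           (fixed here for simplicity; the paper allows time-varying
           topologies).
    x0:    (n_agents, dim) array of initial local estimates.
    step:  constant step-size, as in the abstract.
    lam:   l1 regularization weight (demo assumption).
    """
    x = x0.copy()
    for k in range(iters):
        # Multiple consensus rounds per iteration; the round count
        # grows over time, mirroring "the number of communication
        # rounds increases over time" (schedule k + 1 is an assumption).
        y = x
        for _ in range(k + 1):
            y = W @ y
        # Local proximal gradient step with constant step-size.
        g = np.stack([grads[i](y[i]) for i in range(len(grads))])
        x = soft_threshold(y - step * g, step * lam)
    return x
```

For a toy run, one can take a 4-agent ring with smooth nonconvex local costs f_i(x) = log(1 + (x - a_i)^2); the local estimates cluster together as the growing number of consensus rounds drives disagreement toward zero.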

Authors

