Article

Distributed Big-Data Optimization via Blockwise Gradient Tracking

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 66, Issue 5, Pages 2045-2060

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2020.3008713

Keywords

Convergence; Approximation algorithms; Cost function; Sun; Minimization; Gradient methods; Big-data optimization; distributed optimization; nonconvex optimization

Funding

  1. European Research Council under the European Union's Horizon 2020 Research and Innovation Programme [638992-OPT4SMART]
  2. USA National Science Foundation [CIF 1564044, CIF 1719205, CMMI 1832688]
  3. Army Research Office [W911NF1810238]
  4. U.S. Department of Defense (DOD) [W911NF1810238]

Abstract

This study focuses on distributed big-data nonconvex optimization in multiagent networks. A novel distributed solution method is proposed, where agents update one block of the entire decision vector in an uncoordinated fashion to address nonconvexity and reduce communication overhead in large-scale problems. Numerical results demonstrate the effectiveness of the algorithm and highlight the impact of block dimension on communication overhead and convergence speed.
We study distributed big-data nonconvex optimization in multiagent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems in which there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method where, at each iteration, agents update in an uncoordinated fashion only one block of the entire decision vector. To deal with the nonconvexity of the cost function, the novel scheme hinges on successive convex approximation techniques combined with a novel blockwise perturbed push-sum consensus protocol, which is instrumental to perform local block-averaging operations and tracking of gradient averages. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
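The core ideas in the abstract, agents that combine consensus averaging, tracking of the network-wide gradient average, and descent on only one block of the decision vector per iteration, can be illustrated with a toy sketch. This is a simplified, hypothetical example on a convex least-squares problem with doubly stochastic averaging over an undirected ring; it is not the paper's method, which uses successive convex approximation and a perturbed push-sum protocol over directed graphs. All names, dimensions, and step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize sum_i 0.5*||A_i x - b_i||^2 over x in R^d.
# (Hypothetical setup; the paper addresses nonconvex sum-utilities
# plus a nonsmooth regularizer.)
n, d, block = 4, 6, 2                 # agents, variables, block size
A = [rng.standard_normal((8, d)) for _ in range(n)]
b = [rng.standard_normal(8) for _ in range(n)]

def grad(i, x):
    """Local gradient of agent i's cost at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic weights on a 4-agent ring (a simplification of
# the paper's push-sum weights for directed graphs).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = np.zeros((n, d))                                   # local copies
y = np.array([grad(i, x[i]) for i in range(n)])        # gradient trackers

alpha = 0.05
for k in range(4000):
    blk = (k % (d // block)) * block                   # round-robin block
    sel = slice(blk, blk + block)
    x_new = W @ x                                      # consensus averaging
    x_new[:, sel] -= alpha * y[:, sel]                 # descend on one block only
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g_old                          # gradient-tracking update
    x = x_new

# Compare against the centralized least-squares solution.
A_all = np.vstack(A)
b_all = np.concatenate(b)
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print("max deviation from centralized solution:",
      np.max(np.abs(x - x_star)))
```

The tracking update preserves the invariant that the trackers' sum equals the sum of current local gradients, so each agent's `y` asymptotically follows the network-average gradient even though only one block is descended per iteration, which is the mechanism the abstract credits for reducing per-iteration communication.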
