Article

Asynchronous Distributed ADMM for Large-Scale Optimization-Part I: Algorithm and Convergence Analysis

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 64, Issue 12, Pages 3118-3130

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2016.2537271

Keywords

Distributed optimization; ADMM; asynchronous; consensus optimization

Funding

  1. NSFC, China [61571385]
  2. NSF [CCF-1526078]
  3. AFOSR [15RT0767]
  4. Shanghai YangFan [15YF1403400]
  5. NSFC [11501210]
  6. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1526078]

Abstract

Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the optimization problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronous computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest worker. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM under the popular partially asynchronous model, which is defined based on a maximum tolerable delay in the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, since slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm's convergence, even in the standard convex setting.
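For intuition, the consensus reformulation referenced above puts the problem in the form: minimize the sum of f_i(x_i) over local copies x_1, ..., x_N and a global variable z, subject to x_i = z for every worker i, where z is held by the master node of the star network. The Python sketch below simulates one plausible master-worker realization of this scheme on a toy least-squares problem. It is a minimal illustration, not the paper's exact AD-ADMM: the round-based delay schedule, the update order, and all names (A, b, rho, tau, last) are assumptions introduced here, with tau playing the role of the maximum tolerable delay from the partially asynchronous model.

import numpy as np

# Toy consensus problem (illustrative): minimize sum_i 0.5*||A_i x - b_i||^2
# by splitting into local copies x_i constrained to agree with a global z.
rng = np.random.default_rng(0)
N, n = 4, 5                            # number of workers, variable dimension
A = [rng.standard_normal((20, n)) for _ in range(N)]
b = [rng.standard_normal(20) for _ in range(N)]
rho, tau = 1.0, 3                      # ADMM penalty, max tolerable delay

z = np.zeros(n)                        # master (consensus) variable
x = [np.zeros(n) for _ in range(N)]    # last primal copy received per worker
y = [np.zeros(n) for _ in range(N)]    # last dual variable received per worker
last = [0] * N                         # iteration of each worker's last report

for k in range(1, 201):
    # Partially asynchronous model: a random subset of workers reports this
    # round, but every worker is forced to report within tau iterations.
    arrived = [i for i in range(N) if rng.random() < 0.5 or k - last[i] >= tau]
    for i in arrived:
        # Worker i solves its local subproblem using the (possibly stale) z:
        #   x_i = argmin 0.5*||A_i x - b_i||^2 + y_i^T x + (rho/2)*||x - z||^2
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] - y[i] + rho * z)
        y[i] = y[i] + rho * (x[i] - z)  # local dual ascent step
        last[i] = k
    # Master refreshes z from the most recent copies, stale ones included.
    z = np.mean([x[i] + y[i] / rho for i in range(N)], axis=0)

# Rough sanity check against the centralized least-squares solution.
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))
print("distance to centralized solution:", np.linalg.norm(z - x_star))

In this sketch the master averages whatever copies it currently holds, so workers that did not report in a round contribute stale information; bounding the impact of exactly this staleness, via the delay parameter, is what the paper's convergence analysis addresses.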
