Article

Asynchronous Gradient Push

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 66, Issue 1, Pages 168-183

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2020.2981035

Keywords

Convergence; Delays; Optimization; Computational modeling; Protocols; Stochastic processes; Directed graphs; Asynchronous iterative methods; convex optimization; directed graph; distributed optimization


Summary

This study introduces a multiagent framework for distributed optimization, in which each agent accesses a local function and converges to the global minimum using an asynchronous algorithm. Numerical experiments demonstrate that asynchronous gradient push achieves faster global minimization and better scalability with network size compared to synchronous methods.
Abstract

We consider a multiagent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents' local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the neighborhood size depends on the degree of asynchrony in the multiagent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that asynchronous gradient push can minimize the global objective faster than the state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
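The abstract describes agents that mix parameters over a directed graph and take local gradient steps toward the minimizer of the summed local costs. The sketch below is a minimal *synchronous* push-sum gradient iteration, the classical building block underlying gradient push; it is not the paper's asynchronous protocol, and the ring topology, quadratic local costs, and diminishing step size are illustrative assumptions:

```python
import numpy as np

def gradient_push(a, num_iters=5000):
    """Synchronous push-sum gradient sketch on a directed ring.

    Agent i holds the local cost f_i(z) = 0.5 * (z - a[i])**2, so the
    sum of local costs is minimized at z* = mean(a). Topology, step
    sizes, and costs are illustrative, not the paper's setup.
    """
    a = np.asarray(a, dtype=float)
    n = len(a)
    x = a.copy()       # push-sum numerators (parameter mass)
    y = np.ones(n)     # push-sum weights (correct the mixing bias)
    for t in range(num_iters):
        # Column-stochastic mixing: each agent keeps half of its mass
        # and pushes half to its out-neighbor (i + 1) mod n.
        x = 0.5 * x + 0.5 * np.roll(x, 1)
        y = 0.5 * y + 0.5 * np.roll(y, 1)
        z = x / y                      # de-biased local estimates
        step = 1.0 / (t + 1)           # diminishing step size
        x = x - step * (z - a)         # local gradient step on f_i
    return x / y
```

For strongly convex local costs like these, the de-biased estimates `x / y` reach consensus and approach the global minimizer `mean(a)`; the paper's contribution is showing how far this degrades to a neighborhood when agents update asynchronously at different rates.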

