4.6 Article

Distributed stochastic gradient tracking methods

Journal

MATHEMATICAL PROGRAMMING
Volume 187, Issue 1-2, Pages 409-457

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s10107-020-01487-0

Keywords

Distributed optimization; Stochastic optimization; Convex programming; Communication networks

Funding

  1. NSF [CCF-1717391]
  2. ONR [N00014-16-1-2245]
  3. Shenzhen Research Institute of Big Data (SRIBD) Startup Fund [JCYJ-SP2019090001]


This paper studies distributed multi-agent optimization and analyzes two stochastic gradient tracking methods, DSGT and GSGT. The results show that, in expectation, DSGT converges exponentially fast to a neighborhood of the optimal solution under a constant stepsize, and that when the network is well-connected, GSGT incurs a lower communication cost than DSGT while maintaining a comparable computational cost.
In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size n, which is comparable to the performance of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs a lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
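
As a concrete illustration of the gradient tracking scheme described in the abstract, the sketch below runs a DSGT-style update on a toy least-squares problem: each agent mixes its neighbors' iterates after taking a step along its tracking variable, and the tracking variable is updated with the difference of consecutive stochastic gradients so that it follows the network-average gradient. The local cost functions, ring mixing matrix, stepsize, noise level, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal DSGT-style sketch on a toy least-squares problem (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5                       # number of agents, problem dimension

# Local cost f_i(x) = 0.5 * ||A_i x - b_i||^2 (smooth and strongly convex here).
A = [rng.normal(size=(20, d)) for _ in range(n)]
b = [rng.normal(size=20) for _ in range(n)]

def stoch_grad(i, x):
    """Unbiased estimate of grad f_i(x): exact gradient plus Gaussian noise (assumed noise model)."""
    return A[i].T @ (A[i] @ x - b[i]) + 0.01 * rng.normal(size=d)

# Doubly stochastic mixing matrix for a ring graph (illustrative choice of weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

alpha = 1e-3                      # constant stepsize (illustrative)
x = np.zeros((n, d))              # agents' iterates, one row per agent
g = np.array([stoch_grad(i, x[i]) for i in range(n)])
y = g.copy()                      # gradient trackers, initialized at local stochastic gradients

for k in range(5000):
    x_new = W @ (x - alpha * y)                                # consensus step on a gradient move
    g_new = np.array([stoch_grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g                                      # track the average stochastic gradient
    x, g = x_new, g_new

# Compare the network-average iterate against the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Under these assumptions, the agents' iterates settle in a small neighborhood of the common minimizer whose size is governed by the stepsize and the gradient noise, consistent with the constant-stepsize behavior described in the abstract.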

