Article

Asymptotic Network Independence and Step-Size for a Distributed Subgradient Method

Publisher

MICROTOME PUBL

Keywords

Distributed Systems; Convex Optimization; Subgradient Method

This study investigates whether distributed subgradient methods can achieve a linear speedup over centralized subgradient methods. The authors show that, for a specific class of step-sizes, the distributed subgradient method has this linear speedup property: after a transient period, its performance guarantee does not depend on the network or the number of nodes.
We consider whether distributed subgradient methods can achieve a linear speedup over a centralized subgradient method. While it might be hoped that a distributed network of n nodes, which can compute n times more subgradients in parallel than a single node, would as a result be n times faster, existing bounds for distributed optimization methods are often consistent with a slowdown rather than a speedup compared to a single node. We show that a distributed subgradient method has this linear speedup property when using a class of square-summable-but-not-summable step-sizes, which includes 1/t^β for β ∈ (1/2, 1); for such step-sizes, we show that after a transient period whose size depends on the spectral gap of the network, the method achieves a performance guarantee that does not depend on the network or the number of nodes. We also show that the same method can fail to have this asymptotic network independence property under the optimally decaying step-size 1/√t and, as a consequence, can fail to provide a linear speedup compared to a single node with the 1/√t step-size.
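To make the iteration the abstract refers to concrete, here is a minimal sketch of a distributed subgradient method with the step-size 1/t^β, β ∈ (1/2, 1). This is not the authors' code: the quadratic local objectives, the ring network, and the mixing weights are illustrative assumptions chosen so the example is self-contained and checkable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, beta = 8, 3, 20_000, 0.75  # nodes, dimension, iterations, step exponent

# Hypothetical local objectives f_i(x) = 0.5 * ||x - c_i||^2;
# the minimizer of their average is the mean of the c_i.
centers = rng.normal(size=(n, d))

# Doubly stochastic mixing matrix for a ring network (lazy weights):
# each node keeps half its value and averages the rest with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))  # one local iterate per node
for t in range(1, T + 1):
    alpha = 1.0 / t**beta      # square-summable-but-not-summable step-size
    grads = x - centers        # local (sub)gradients of the quadratic losses
    x = W @ x - alpha * grads  # consensus averaging plus local subgradient step

# All local iterates should end up close to the global minimizer.
print(np.abs(x - centers.mean(axis=0)).max())
```

With β = 3/4 the step-sizes 1/t^β are square-summable but not summable, matching the class the abstract highlights; the spectral gap of the ring's mixing matrix governs how long the transient period lasts before the local iterates agree.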
