Article

A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Journal

IEEE Transactions on Systems, Man, and Cybernetics: Systems

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSMC.2019.2960770

Keywords

Convergence; Cost function; Convex functions; Acceleration; Delays; Information processing; Directed network; distributed convex optimization; gradient tracking; linear convergence; Nesterov-like algorithm

Funding

  1. National Key Research and Development Program of China [2018AAA0100101]
  2. Graduate Scientific Research and Innovation Foundation of Chongqing China [CYB19074]
  3. National Natural Science Foundation of China [61932006, 61772434, 61773321]
  4. Qatar National Research Fund (Qatar Foundation) [NPRP 9-166-1-031]


This article addresses distributed optimization over a directed network via D-DNGT, a method that integrates gradient tracking into the Nesterov method with nonuniform step-sizes. D-DNGT is proved to converge linearly to the optimal solution under smoothness and strong-convexity assumptions, and numerical experiments confirm this behavior.
In this article, we concentrate on the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the principal target is to minimize a global cost function (formulated as the average of all local cost functions) while obeying the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by utilizing column-stochastic weights, which may be infeasible if the distributed implementation requires each unit to gain access to (at least) its out-degree information. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of outstanding consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward if each unit locally chooses a suitable step-size and privately regulates the weights on the information it acquires from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments to confirm the findings in this article and to contrast D-DNGT with recently proposed distributed optimization approaches.
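The abstract does not spell out the update equations, but the two ingredients it names (a gradient-tracking variable that follows the average gradient, and a Nesterov-style momentum term with nonuniform step-sizes) can be illustrated with a minimal sketch. The code below is not the paper's D-DNGT: it is a generic gradient-tracking-with-momentum iteration over a directed ring whose weights happen to be doubly stochastic, so the eigenvector-estimation correction that the paper's row-stochastic setting requires is omitted. All cost functions, weights, step-sizes (`alpha`), and momentum coefficients (`beta`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3  # number of units (agents) and problem dimension

# Local quadratic costs f_i(x) = 0.5 * ||Q_i x - b_i||^2 (invented for the demo).
Q = [rng.standard_normal((d, d)) + 2 * np.eye(d) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):
    """Gradient of the i-th local cost at x."""
    return Q[i].T @ (Q[i] @ x - b[i])

# Row-stochastic weights of a strongly connected directed ring. This
# particular choice is also column-stochastic, which is why the demo
# converges without the eigenvector-estimation correction from the paper.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.5

alpha = 0.02 + 0.005 * rng.random(n)   # nonuniform step-sizes
beta = 0.10 + 0.05 * rng.random(n)     # nonuniform momentum coefficients

x = np.zeros((n, d))
x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])  # tracks the average gradient

for _ in range(500):
    # Consensus step + descent along the tracked gradient + momentum term.
    x_new = A @ x - alpha[:, None] * y + beta[:, None] * (x - x_prev)
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = A @ y + g_new - g_old  # gradient-tracking update
    x_prev, x = x, x_new

# Compare every unit's iterate with the centralized minimizer.
x_star = np.linalg.solve(sum(Qi.T @ Qi for Qi in Q),
                         sum(Qi.T @ bi for Qi, bi in zip(Q, b)))
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())
```

With weights that are only row-stochastic, the same skeleton would additionally maintain an estimate of the network's Perron eigenvector and rescale the tracked gradient accordingly; that correction is the part this toy version leaves out.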
