Article

RNN-K: A Reinforced Newton Method for Consensus-Based Distributed Optimization and Control Over Multiagent Systems

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 5, Pages 4012-4026

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.3011819

Keywords

Consensus; distributed optimization; gradient descent; machine learning; Newton method

Funding

  1. Local Science and Technology Development Fund Guided by Central Government [2019ZYYD009]
  2. Natural Science Foundation of Hubei Province [2019CFC881]


The authors propose a reinforced network Newton method with K-order control flexibility (RNN-K) to solve distributed optimization problems. The method integrates the consensus strategy and the latest knowledge across the network into the local descent direction, and accelerates descent along the Newton direction by making use of intermediate results. The difficulty of designing an approximated Newton descent in distributed settings is addressed with a special Taylor expansion. Simulation results demonstrate the method's effectiveness on three types of distributed optimization problems.
With the rise of the processing power of networked agents in the last decade, second-order methods for machine learning have received increasing attention. For solving distributed optimization problems over multiagent systems, Newton's method offers fast convergence and high estimation accuracy. In this article, we propose a reinforced network Newton method with K-order control flexibility (RNN-K) in a distributed manner by integrating the consensus strategy and the latest knowledge across the network into the local descent direction. The key component of our method is to make full use of intermediate results from the local neighborhood to learn global knowledge, not merely for the consensus effect as in most existing works, including gradient descent and Newton methods as well as their refinements. This reinforcement revitalizes the traditional iterative consensus strategy to accelerate descent along the Newton direction. The main difficulty in designing the approximated Newton descent in distributed settings is addressed by using a special Taylor expansion that follows the matrix splitting technique. Based on the truncation of the Taylor series, our method also exhibits a tradeoff between estimation accuracy and computation/communication cost, which provides control flexibility as a practical consideration. We theoretically derive sufficient conditions under which the proposed RNN-K method converges at least at a linear rate. Simulation results on three types of distributed optimization problems that arise frequently in machine-learning scenarios illustrate the method's effectiveness.
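As a rough illustration of the approximation idea the abstract describes (a sketch of the general network Newton-type construction, not the authors' exact RNN-K algorithm), the Hessian can be split as H = D - B, where D is the diagonal part each agent can form locally and B collects the off-diagonal coupling. The Newton system H d = g is then approximated by a K-order truncation of the Taylor series H^{-1} ≈ Σ_{k=0}^{K} (D^{-1} B)^k D^{-1}, which a simple recursion computes; larger K trades extra computation/communication for a more accurate direction. The matrix sizes and the recursion below are illustrative assumptions:

```python
import numpy as np

def truncated_newton_direction(H, g, K):
    """K-order truncated-series solution of H d = g under the splitting
    H = D - B (D: diagonal part, B: off-diagonal coupling).

    The recursion d_{k+1} = D^{-1} (B d_k + g), started from d_0 = D^{-1} g,
    sums the first K+1 terms of sum_k (D^{-1} B)^k D^{-1} g.
    """
    D_diag = np.diag(H)                      # locally available diagonal part
    B = np.diag(D_diag) - H                  # off-diagonal (neighbor) coupling
    d = g / D_diag                           # k = 0 term
    for _ in range(K):
        d = (B @ d + g) / D_diag             # add the next term of the series
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Diagonally dominant symmetric H so that rho(D^{-1} B) < 1 and the
    # truncated series converges to the exact solution H^{-1} g.
    U = rng.uniform(-1.0, 1.0, (5, 5))
    W = 0.2 * (U + U.T)
    np.fill_diagonal(W, 0.0)
    H = 4.0 * np.eye(5) + W
    g = rng.standard_normal(5)
    exact = np.linalg.solve(H, g)
    for K in (0, 2, 8, 32):
        err = np.linalg.norm(truncated_newton_direction(H, g, K) - exact)
        print(f"K = {K:2d}  error = {err:.3e}")
```

The error shrinks geometrically with K (at rate governed by the spectral radius of D^{-1} B), which is the accuracy-versus-cost tradeoff the abstract refers to: each extra order of the truncation corresponds to one more round of local computation and neighbor communication.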

