Article

Model-Free Distributed Consensus Control Based on Actor-Critic Framework for Discrete-Time Nonlinear Multiagent Systems

Journal

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSMC.2018.2883801

Keywords

Adaptive dynamic programming (ADP); model-free; multiagent systems (MASs); optimal consensus control; Q-function

Funding

  1. National Natural Science Foundation of China [61473316]
  2. Hubei Provincial Natural Science Foundation of China [2017CFA030, 2015CFA010]
  3. 111 Project [B17040]

Abstract

Conventionally, when the system dynamics are known, the optimal consensus control problem relies on solving the coupled Hamilton-Jacobi-Bellman (HJB) equations. In this paper, considering unknown system dynamics, a local Q-function-based adaptive dynamic programming method is proposed to solve the optimal consensus control problem for unknown discrete-time nonlinear multiagent systems by approximating the solutions of the coupled HJB equations. First, a local Q-function is defined that accounts for the local consensus error and the actions of the agent and its neighbors. With this Q-function, the derivatives with respect to the weights of the consensus control policies can be obtained conveniently, even without a model of the system dynamics. Then, based on the defined local Q-function, a distributed policy iteration algorithm is developed and theoretically proved to converge to the solutions of the coupled HJB equations. An actor-critic neural network framework is constructed to implement the developed model-free optimal consensus control method by approximating the local Q-functions and the control policies. Finally, the feasibility and effectiveness of the developed method are verified by a series of simulations.
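To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of a local Q-function actor-critic loop for consensus of scalar agents on a small graph. The agent dynamics, the quadratic feature parameterization of the local Q-function, the star communication graph, the stage cost, and all step sizes are assumptions made purely for illustration; the key point it mirrors is that both the critic update and the policy improvement use only the learned Q-weights, never a dynamics model.

```python
import numpy as np

# Illustrative sketch only: local Q-function actor-critic for consensus
# of scalar agents. Dynamics, features, graph, and gains are assumed.

np.random.seed(0)

N = 3
neighbors = {0: [1, 2], 1: [0], 2: [0]}   # assumed star graph
gamma = 0.9                               # discount factor

def features(e, u, u_nbr):
    """Quadratic features of the local consensus error and actions."""
    return np.array([e * e, u * u, u_nbr * u_nbr, e * u, 1.0])

def local_error(x, i):
    """Local consensus error: sum of state gaps to neighbors."""
    return sum(x[i] - x[j] for j in neighbors[i])

def step(x, u):
    """Assumed stable nonlinear dynamics (unknown to the learner)."""
    return 0.8 * x + 0.1 * np.tanh(x) + u

W = [np.zeros(5) for _ in range(N)]   # critic weights per agent
K = np.zeros(N)                       # actor gains: u_i = -K_i * e_i
alpha_c = 0.1                         # critic step size

x = np.random.randn(N)
for t in range(2000):
    e = np.array([local_error(x, i) for i in range(N)])
    u = -K * e + 0.02 * np.random.randn(N)        # exploration noise
    x_next = step(x, u)
    e_next = np.array([local_error(x_next, i) for i in range(N)])
    u_next = -K * e_next
    for i in range(N):
        u_nbr = sum(u[j] for j in neighbors[i])
        u_nbr_next = sum(u_next[j] for j in neighbors[i])
        phi = features(e[i], u[i], u_nbr)
        phi_next = features(e_next[i], u_next[i], u_nbr_next)
        cost = e[i] ** 2 + 0.1 * u[i] ** 2        # local stage cost
        # Critic: temporal-difference update of the local Q-weights.
        td = cost + gamma * W[i] @ phi_next - W[i] @ phi
        W[i] += alpha_c * td * phi
        # Actor (policy improvement): minimize the learned quadratic Q
        # over u_i model-free; the minimizer of w_uu*u^2 + w_eu*e*u
        # is u = -(w_eu / (2*w_uu)) * e. Clipping keeps the demo stable.
        if W[i][1] > 1e-3:
            K[i] = np.clip(W[i][3] / (2.0 * W[i][1]), 0.0, 0.3)
    x = x_next

print("final consensus errors:", np.round(e, 3))
```

Note the division of labor this sketch shares with the abstract's description: the critic needs only locally observable quantities (the agent's consensus error and the actions of itself and its neighbors), and the actor extracts an improved policy directly from the learned Q-weights, which is what removes the need for a system model.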

Authors

