Journal
IEEE TRANSACTIONS ON CYBERNETICS
Volume 53, Issue 5, Pages 2818-2828
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2021.3121078
Keywords
Predictive control; Real-time systems; Predictive models; Prediction algorithms; Reinforcement learning; Games; Cost function; Cooperative games; multistep reinforcement learning (RL); policy gradient methods; predictive control
Abstract
In this article, a model-free predictive control algorithm for real-time systems is presented. The algorithm is data-driven and improves system performance through multistep policy-gradient reinforcement learning. By learning from an offline dataset and real-time data, knowledge of the system dynamics is avoided in both the design and the application of the algorithm. Cooperative games among multiple players over the time horizon are formulated to cast predictive control as a multiagent optimization problem and to guarantee the optimality of the predictive control policy. To implement the algorithm, neural networks are used to approximate the action-state value function and the predictive control policy, respectively, with the weights determined by the method of weighted residuals. Numerical results show the effectiveness of the proposed algorithm.
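The abstract outlines an actor-critic architecture: a critic approximating the action-state value function, an actor approximating the predictive control policy, and weights fitted by weighted-residual methods over a multistep horizon. The sketch below is a loose illustration of that idea, not the paper's algorithm: it uses a toy linear-quadratic plant (invented here, and read only as a data source, so the learner itself stays model-free), quadratic features standing in for the neural networks, N-step bootstrapped least-squares evaluation in place of the weighted-residual method, and a greedy policy update in place of the policy-gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear plant -- used only to generate transitions; the
# learner never reads (A, B), mimicking the model-free setting.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])

def plant(x, u):
    return A @ x + B @ u

def stage_cost(x, u):
    return float(x @ x + u @ u)          # quadratic stage cost

# Critic: quadratic action-value Q(x, u) = z^T H z with z = [x; u].
IU = np.triu_indices(3)

def feat(x, u):                          # 6 quadratic features
    z = np.concatenate([x, u])
    M = np.outer(z, z)
    return (M + M.T - np.diag(np.diag(M)))[IU]   # double cross terms

def unpack(w):                           # rebuild symmetric H from weights
    H = np.zeros((3, 3))
    H[IU] = w
    return H + H.T - np.diag(np.diag(H))

gamma, N = 0.95, 3                       # discount factor, multistep horizon
K = np.zeros((1, 2))                     # actor: linear policy u = -K x
w = np.zeros(6)

for _ in range(10):                      # policy iteration (outer loop)
    for _ in range(50):                  # N-step evaluation by least squares
        Phi, y = [], []
        for _ in range(100):
            x = rng.normal(size=2)
            u = rng.normal(size=1)       # exploratory first action
            G, xk, uk = 0.0, x, u
            for k in range(N):           # accumulate N-step discounted cost
                G += gamma**k * stage_cost(xk, uk)
                xk = plant(xk, uk)
                uk = -K @ xk
            Phi.append(feat(x, u))
            y.append(G + gamma**N * feat(xk, uk) @ w)   # bootstrapped tail
        w = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    H = unpack(w)
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])  # greedy policy improvement

def rollout_cost(gain, steps=300):       # truncated discounted cost
    x, c = np.array([1.0, 1.0]), 0.0
    for k in range(steps):
        u = -gain @ x
        c += gamma**k * stage_cost(x, u)
        x = plant(x, u)
    return c
```

Comparing `rollout_cost(K)` with `rollout_cost(np.zeros((1, 2)))` gives a quick sanity check: each evaluate-then-improve cycle should lower the discounted cost, so the learned gain outperforms the initial zero policy.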