Journal
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
Volume 44, Issue 8, Pages 1015-1027
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSMC.2013.2295351
Keywords
Adaptive dynamic programming (ADP); approximate dynamic programming; multiplayer nonzero-sum games; neural networks; neuro-dynamic programming; policy iteration
Funding
- National Natural Science Foundation of China [61034002, 61233001, 61273140, 61304086, 61374105]
- Beijing Natural Science Foundation [4132078]
- Early Career Development Award of SKLMCCS
In this paper, we develop an online synchronous approximate optimal learning algorithm based on policy iteration to solve multiplayer nonzero-sum games without requiring exact knowledge of the system dynamics. First, we prove that the online policy iteration algorithm for the nonzero-sum game is mathematically equivalent to a quasi-Newton iteration in a Banach space. Then, a model neural network is established to identify the unknown continuous-time nonlinear system from input-output data. For each player, a critic neural network and an action neural network approximate its value function and control policy, respectively. The algorithm only tunes the weights of the critic neural networks, which reduces the computational burden of the learning process. All neural network weights are updated online, continuously and synchronously, in real time. Furthermore, uniform ultimate boundedness of the closed-loop system is proved via a Lyapunov-based approach. Finally, two simulation examples demonstrate the effectiveness of the developed scheme.
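The policy-iteration structure described in the abstract alternates between evaluating each player's value function under the current policies and improving each policy from its own value function. A minimal, purely illustrative sketch of that loop is the linear-quadratic special case of a two-player nonzero-sum game, where each "critic" reduces to a quadratic value function x'P_i x, policy evaluation becomes a Lyapunov equation, and policy improvement is a gain update. The dynamics, weights, and tolerances below are invented for illustration and are not the paper's neural-network implementation:

```python
# Hypothetical sketch: model-based policy iteration for a two-player
# nonzero-sum linear-quadratic differential game (the special case in
# which each critic is exactly quadratic). NOT the paper's NN algorithm.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed example dynamics: dx/dt = A x + B1 u1 + B2 u2
A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable, so K = 0 is admissible
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[1.0], [0.0]])

# Cost weights: J_i = ∫ x'Q_i x + u1'R_i1 u1 + u2'R_i2 u2 dt
Q1, Q2 = np.eye(2), 2.0 * np.eye(2)
R11, R12 = np.eye(1), np.eye(1)
R21, R22 = np.eye(1), np.eye(1)

K1 = np.zeros((1, 2))  # initial admissible (stabilizing) policies
K2 = np.zeros((1, 2))

for _ in range(100):
    Ac = A - B1 @ K1 - B2 @ K2
    # Policy evaluation: Ac'P_i + P_i Ac + Q_i + K1'R_i1 K1 + K2'R_i2 K2 = 0
    P1 = solve_continuous_lyapunov(Ac.T, -(Q1 + K1.T @ R11 @ K1 + K2.T @ R12 @ K2))
    P2 = solve_continuous_lyapunov(Ac.T, -(Q2 + K1.T @ R21 @ K1 + K2.T @ R22 @ K2))
    # Policy improvement: u_i = -R_ii^{-1} B_i' P_i x
    K1_new = np.linalg.solve(R11, B1.T @ P1)
    K2_new = np.linalg.solve(R22, B2.T @ P2)
    done = max(np.abs(K1_new - K1).max(), np.abs(K2_new - K2).max()) < 1e-10
    K1, K2 = K1_new, K2_new
    if done:
        break

Ac = A - B1 @ K1 - B2 @ K2   # closed loop under the (approximate) Nash policies
```

At convergence the two Lyapunov equations become the coupled algebraic Riccati equations of the Nash equilibrium; the paper's contribution is doing this for unknown nonlinear dynamics with identifier, critic, and action networks updated simultaneously online, rather than solving matrix equations offline as above.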