Article

Neural H2 Control Using Continuous-Time Reinforcement Learning

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 6, Pages 4485-4494

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.3028988

Keywords

Continuous-time; H-2 control; neural modeling; reinforcement learning

Funding

  1. National Council of Science and Technology (CONACYT) [CONACyT-A1-S-8216]
  2. Center for Research and Advanced Studies (CINVESTAV) [SEP-CINVESTAV-62]

Abstract

In this article, we discuss continuous-time H-2 control for unknown nonlinear systems. We use differential neural networks to model the system and then apply H-2 tracking control based on the neural model. Since neural H-2 control is very sensitive to the neural modeling error, we use reinforcement learning to improve the control performance. The stability of both the neural modeling and the H-2 tracking control is proven, and the convergence of the approach is also established. The proposed method is validated on two benchmark control problems.
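
For context, a minimal sketch of the kind of formulation the abstract refers to is given below. It combines a generic differential-neural-network model with the standard continuous-time H-2 (LQR-type) tracking cost and an integral reinforcement-learning Bellman equation; the exact model structure, notation, and update laws used in the paper may differ, and the symbols here (A, W_1, W_2, \sigma, \phi, Q, R, V) are placeholders, not the authors' notation.

\dot{\hat x} = A \hat x + W_1 \sigma(\hat x) + W_2 \phi(\hat x)\, u   (differential neural network model of the unknown plant)

J(u) = \int_0^{\infty} \big( e^{\top} Q e + u^{\top} R u \big)\, dt, \qquad e = x - x_d   (H-2 tracking cost on the tracking error)

V\big(e(t)\big) = \int_t^{t+T} \big( e^{\top} Q e + u^{\top} R u \big)\, d\tau + V\big(e(t+T)\big)   (integral reinforcement-learning Bellman equation)

u^{*} = -\tfrac{1}{2} R^{-1} g(\hat x)^{\top} \nabla V^{*}(e), \qquad g(\hat x) = W_2 \phi(\hat x)   (control derived from the learned value function)

In such schemes, the value function V is typically approximated by a critic and updated from measured trajectory data through the Bellman equation above, which is how the reinforcement-learning step can compensate for the mismatch between the neural model and the true plant.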
