Article

Neural H2 Control Using Continuous-Time Reinforcement Learning

Journal

IEEE Transactions on Cybernetics
Volume 52, Issue 6, Pages 4485-4494

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2020.3028988

Keywords

Continuous-time; H-2 control; neural modeling; reinforcement learning

Funding

  1. National Council of Science and Technology (CONACyT) [CONACyT-A1-S-8216]
  2. Center for Research and Advanced Studies (CINVESTAV) [SEP-CINVESTAV-62]

Abstract

This article discusses the application of continuous-time H-2 control in unknown nonlinear systems. We use differential neural networks to model the system and apply H-2 tracking control based on the neural model. Due to the sensitivity of neural H-2 control to neural modeling errors, we use reinforcement learning to improve control performance. The stability of neural modeling and H-2 tracking control is proven, and the convergence of the approach is also given. The proposed method is validated with two benchmark control problems.
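The modeling step summarized above, a differential neural network identifying unknown dynamics online, can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the test plant, activation, gains, and adaptation law are all assumptions chosen for the sketch.

```python
import numpy as np

# Hypothetical sketch of a differential-neural-network identifier for an
# unknown scalar system dx/dt = f(x) + u, using the model
#   dxhat/dt = A*(xhat - x) + W*sigma(x) + u
# with gradient weight adaptation dW/dt = -k * e * sigma(x)^T,
# where e = xhat - x is the neural modeling error.

def sigma(x):
    """Sigmoid-type activation of the neural model."""
    return np.tanh(x)

def identify(T=20.0, dt=1e-3, k=5.0):
    """Run the identifier; return (initial, final) modeling-error magnitudes."""
    A = -2.0 * np.eye(1)        # stable (Hurwitz) design matrix
    W = np.zeros((1, 1))        # neural weights, adapted online
    x = np.array([1.0])         # true plant state
    xhat = np.array([0.0])      # identifier state
    e0 = abs(float(xhat[0] - x[0]))
    for t in np.arange(0.0, T, dt):
        u = np.array([np.sin(t)])       # exciting input
        f = -x + np.sin(x)              # plant dynamics, hidden from the model
        e = xhat - x
        # compute all derivatives, then take one forward-Euler step
        dx = f + u
        dxhat = A @ e + W @ sigma(x) + u
        dW = -k * np.outer(e, sigma(x))
        x = x + dt * dx
        xhat = xhat + dt * dxhat
        W = W + dt * dW
    return e0, abs(float(xhat[0] - x[0]))
```

The stable matrix `A` drives the modeling error toward zero while the adaptation law tunes `W` so that `W @ sigma(x)` absorbs the unknown dynamics; the H-2 tracking controller would then be designed on this identified model.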

