Article

Robust control under worst-case uncertainty for unknown nonlinear systems using modified reinforcement learning

Journal

International Journal of Robust and Nonlinear Control
Publisher

Wiley
DOI: 10.1002/rnc.4911

Keywords

k-nearest neighbors; double estimator; overestimation; robust reward; state-action space; worst-case uncertainty

Reinforcement learning (RL) is an effective method for designing robust controllers for unknown nonlinear systems. Standard RL approaches to robust control, such as actor-critic (AC) algorithms, depend on the accuracy of their estimators. Worst-case uncertainty requires a large state-action space, which causes overestimation and computational problems. In this article, the RL method is modified with the k-nearest neighbors and double Q-learning algorithms. The modified RL does not need a neural estimator, as AC does, and can stabilize the unknown nonlinear system under worst-case uncertainty. The convergence property of the proposed RL method is analyzed. Simulation and experimental results show that the modified RL controllers are considerably more robust than classic controllers such as the proportional-integral-derivative (PID), sliding mode, and optimal linear quadratic regulator (LQR) controllers.
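The combination named in the abstract, a k-nearest-neighbor state discretization together with the double estimator of double Q-learning, can be illustrated with a short sketch. The code below is a minimal, generic illustration of that combination under stated assumptions, not the authors' implementation; the class name KNNDoubleQ, the prototype-state set, and all hyperparameter values are hypothetical.

```python
import numpy as np

class KNNDoubleQ:
    """Double Q-learning over a k-nearest-neighbor state discretization.

    Hypothetical sketch: two independent tables (QA, QB) are updated on
    alternating coin flips; one table selects the greedy next action and
    the other evaluates it, which counteracts the overestimation bias of
    a single max operator. Continuous states are mapped onto a fixed set
    of prototype states via k-NN averaging.
    """

    def __init__(self, prototypes, n_actions, k=3, alpha=0.1, gamma=0.95):
        self.prototypes = np.asarray(prototypes)  # (n_prototypes, state_dim)
        self.k = k
        self.alpha = alpha    # learning rate (illustrative value)
        self.gamma = gamma    # discount factor (illustrative value)
        self.QA = np.zeros((len(self.prototypes), n_actions))
        self.QB = np.zeros((len(self.prototypes), n_actions))

    def _neighbors(self, s):
        # indices of the k prototype states closest to s (Euclidean)
        dists = np.linalg.norm(self.prototypes - np.asarray(s), axis=1)
        return np.argsort(dists)[: self.k]

    def q_values(self, s):
        # average both tables over the k nearest prototypes
        idx = self._neighbors(s)
        return 0.5 * (self.QA[idx] + self.QB[idx]).mean(axis=0)

    def act(self, s, eps=0.1):
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            return np.random.randint(self.QA.shape[1])
        return int(np.argmax(self.q_values(s)))

    def update(self, s, a, r, s_next):
        idx, idx_next = self._neighbors(s), self._neighbors(s_next)
        if np.random.rand() < 0.5:
            # QA picks the greedy next action, QB evaluates it
            a_star = int(np.argmax(self.QA[idx_next].mean(axis=0)))
            target = r + self.gamma * self.QB[idx_next, a_star].mean()
            self.QA[idx, a] += self.alpha * (target - self.QA[idx, a])
        else:
            # roles reversed: QB picks, QA evaluates
            a_star = int(np.argmax(self.QB[idx_next].mean(axis=0)))
            target = r + self.gamma * self.QA[idx_next, a_star].mean()
            self.QB[idx, a] += self.alpha * (target - self.QB[idx, a])
```

A usage sketch under the same assumptions: construct `agent = KNNDoubleQ(np.random.uniform(-1, 1, (200, 2)), n_actions=3)`, then call `agent.act(s)` and `agent.update(s, a, r, s_next)` inside the environment loop. Maintaining two tables roughly doubles memory, but removes the upward bias that a single max over a large state-action space introduces.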

Authors

Adolfo Perrusquía, Wen Yu
