Proceedings Paper

A Model-Based Reinforcement Learning Approach for Robust PID Tuning

Publisher: IEEE
DOI: 10.1109/CDC51059.2022.9993381

Keywords: -

This paper proposes a framework that uses probabilistic inference for learning control (PILCO) to tune PID controllers, and applies it to underactuated mechanical systems. Simulation studies verify the controller's robust performance.
The Proportional-Integral-Derivative (PID) controller is widely used across industrial process control applications because of its straightforward implementation. However, it can be challenging to fine-tune the PID parameters in practice to achieve robust performance. This paper proposes a model-based reinforcement learning (RL) framework for tuning PID controllers that leverages the probabilistic inference for learning control (PILCO) method. In particular, an optimal policy given by PILCO is transformed into a set of robust PID tuning parameters for underactuated mechanical systems. The robustness of the devised controller is verified in simulation studies on a benchmark cart-pole system under severe disturbances and system parameter uncertainties.
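To make the control structure being tuned concrete, the sketch below shows a minimal discrete-time PID loop regulating a toy first-order plant. This is a hypothetical illustration only: the gains are arbitrary placeholders, not the PILCO-derived values from the paper, and the plant is a stand-in for the cart-pole system studied there.

```python
# Illustrative discrete-time PID controller (not the paper's PILCO pipeline).
# Gains kp, ki, kd below are arbitrary example values, not tuned parameters.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (integral term)
        self.prev_error = 0.0    # last error (for discrete derivative)

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Regulate a toy first-order plant x' = -x + u toward setpoint 1.0
# using forward-Euler integration with step dt = 0.01 over 20 seconds.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0 - x)
    x += (-x + u) * 0.01

print(round(x, 3))
```

In the paper's setting, the role of PILCO is to supply the gains: the learned optimal policy is mapped to PID parameters, replacing the manual gain selection shown here.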
