Proceedings Paper

A Model-Based Reinforcement Learning Approach for Robust PID Tuning

Publisher: IEEE
DOI: 10.1109/CDC51059.2022.9993381


The Proportional-Integral-Derivative (PID) controller is widely used across industrial process control applications because of its straightforward implementation. In practice, however, fine-tuning the PID parameters to achieve robust performance can be challenging. This paper proposes a model-based reinforcement learning (RL) framework for tuning PID controllers that leverages the probabilistic inference for learning control (PILCO) method. In particular, an optimal policy given by PILCO is transformed into a set of robust PID tuning parameters for underactuated mechanical systems. The robustness of the devised controller is verified in simulation studies on a benchmark cart-pole system under severe disturbances and system parameter uncertainties.
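The control law being tuned is a standard PID feedback loop. The following minimal sketch applies a discrete PID controller to a linearized pendulum model; the gains, the simplified dynamics, and all parameter values are illustrative assumptions for demonstration, not the paper's actual PILCO-derived values or cart-pole model.

```python
import numpy as np


class PID:
    """Discrete PID controller. Gains are hypothetical, not from the paper."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def __call__(self, err):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference.
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def simulate(pid, theta0=0.2, steps=2000, dt=0.01, g=9.81, l=1.0):
    """Euler-integrate a linearized inverted pendulum, theta'' = (g/l)*theta - u,
    under PID feedback on the angle error (setpoint 0). A deliberately
    simplified stand-in for the full cart-pole dynamics."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = pid(theta)                    # control input from angle error
        alpha = (g / l) * theta - u       # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return theta


pid = PID(kp=40.0, ki=5.0, kd=10.0, dt=0.01)
final_theta = simulate(pid)               # angle after 20 s of regulation
```

With these (hand-picked) gains the closed loop is stable and the angle is driven toward zero; in the paper's framework the gains would instead be obtained by projecting the PILCO policy onto this PID structure.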


