Article

Deep reinforcement learning with shallow controllers: An experimental application to PID tuning

Journal

CONTROL ENGINEERING PRACTICE
Volume 121

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.conengprac.2021.105046

Keywords

Reinforcement learning; Deep learning; PID control; Process control; Process systems engineering

Funding

  1. Natural Sciences and Engineering Research Council of Canada (NSERC)
  2. Honeywell Connected Plant

Abstract

Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies for general dynamical systems without explicit reliance on process models. Good results have been reported in simulation. Here we demonstrate the challenges in implementing a state-of-the-art deep RL algorithm on a real physical system. Aspects include the interplay between software and existing hardware; experiment design and sample efficiency; training subject to input constraints; and interpretability of the algorithm and control law. At the core of our approach is the use of a PID controller as the trainable RL policy. In addition to its simplicity, this approach has several appealing features: no additional hardware needs to be added to the control system, since a PID controller can easily be implemented through a standard programmable logic controller; the control law can easily be initialized in a "safe" region of the parameter space; and the final product, a well-tuned PID controller, has a form that practitioners can reason about and deploy with confidence.
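The central idea is concrete enough to sketch: the RL "policy" is nothing more than the PID gains, acting on the tracking error with the control signal clipped to the actuator limits. The Python sketch below illustrates that parameterization only; the toy plant model, gain values, and the naive perturbation search used to tune them are assumptions made for the example, not the paper's deep RL algorithm or experimental setup.

import numpy as np

class PIDPolicy:
    """PID control law u = Kp*e + Ki*integral(e) + Kd*de/dt, with input saturation."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.0, dt=1.0, u_min=0.0, u_max=100.0):
        self.theta = np.array([kp, ki, kd], dtype=float)  # the trainable "policy" parameters
        self.dt, self.u_min, self.u_max = dt, u_min, u_max
        self.reset()

    def reset(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        kp, ki, kd = self.theta
        u = kp * error + ki * self.integral + kd * derivative
        return float(np.clip(u, self.u_min, self.u_max))  # respect actuator limits


def plant_step(y, u, a=0.9, b=0.1):
    """Toy first-order plant standing in for the real apparatus (an assumption)."""
    return a * y + b * u


def episode_return(gains, setpoint=50.0, horizon=200, y0=20.0):
    """Roll out a PID policy with the given gains; return the negative tracking cost."""
    policy = PIDPolicy(*gains)
    y, cost = y0, 0.0
    for _ in range(horizon):
        u = policy.act(setpoint, y)
        y = plant_step(y, u)
        cost += (setpoint - y) ** 2
    return -cost


# Start from a conservative ("safe") set of gains and improve them by crude random
# perturbation, purely to show that the object being optimized is just the gains;
# the paper instead trains them with a deep RL algorithm on the physical system.
theta = np.array([1.0, 0.1, 0.0])
best = episode_return(theta)
rng = np.random.default_rng(0)
for _ in range(200):
    candidate = np.clip(theta + rng.normal(scale=0.05, size=3), 0.0, None)
    r = episode_return(candidate)
    if r > best:
        theta, best = candidate, r
print("tuned gains (Kp, Ki, Kd):", theta, "episode return:", best)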
