Journal
NEURAL PROCESSING LETTERS
Volume 53, Issue 3, Pages 1709-1722
Publisher
SPRINGER
DOI: 10.1007/s11063-019-10127-4
Keywords
Reinforcement learning; Deep learning; Neural network; Control theory
Abstract
Deep reinforcement learning has recently made impressive advances in sequential decision-making problems. Constructive reinforcement learning (RL) algorithms have been proposed that focus on the policy optimization process, while the effect of different policy network architectures has not been fully explored. MLPs, LSTMs and linear layers are complementary in their control capabilities: MLPs are appropriate for global control, LSTMs are able to exploit history information, and linear layers are good at stabilizing system dynamics. In this paper, we propose a Proportional-Integral (PI) neural network architecture that can easily be combined with popular optimization algorithms. This PI-patterned policy network exploits the advantages of integral control and linear control, which are widely applied in classic control systems; on this basis, an ensemble-learning-based model is trained to further improve sample efficiency and training performance on most RL tasks. Experimental results on public RL simulation platforms demonstrate that the proposed architecture achieves better performance than the commonly used MLP and other existing applied models.
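To make the PI-patterned idea concrete, here is a minimal sketch of a policy head that combines a proportional (linear) term on the current observation with an integral term over accumulated past observations. This is an illustrative, hypothetical implementation under our own assumptions (the class name `PIPolicy`, the gains `kp`/`ki`, and the random linear maps are all ours), not the paper's exact architecture, which additionally involves LSTM components and ensemble training.

```python
import numpy as np

class PIPolicy:
    """Hypothetical sketch of a Proportional-Integral policy head:
    action = kp * (W_p @ obs) + ki * (W_i @ integral_of_obs).
    The integral term plays the role of history/memory that the
    paper attributes to recurrent (LSTM) components."""

    def __init__(self, obs_dim, act_dim, kp=1.0, ki=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Random linear maps stand in for learned weights.
        self.W_p = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # proportional path
        self.W_i = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # integral path
        self.kp, self.ki = kp, ki
        self.integral = np.zeros(obs_dim)

    def reset(self):
        # Clear accumulated history at the start of an episode.
        self.integral[:] = 0.0

    def act(self, obs):
        # Running sum of observations = discrete-time integral.
        self.integral += obs
        return self.kp * (self.W_p @ obs) + self.ki * (self.W_i @ self.integral)
```

In use, feeding the same observation twice yields different actions because the integral term accumulates, which is exactly the stabilizing, history-dependent behavior the PI pattern is meant to contribute.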