4.7 Article

Regularly updated deterministic policy gradient algorithm

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 214

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2020.106736

Keywords

Reinforcement learning; Deterministic policy gradient; Experience replay

Funding

  1. Natural Science Research Foundation of Jilin Province of China [20180101053JC]
  2. National Key R&D Program of China [2017YFB1003103]
  3. National Natural Science Foundation of China [61300049]


This paper introduces a new reinforcement learning algorithm, RUD, to address the inefficiency and instability of DDPG. It shows that RUD makes better use of new data and that its lower Q-value variance is better suited to the Clipped Double Q-learning strategy. Experiments validate the effectiveness and superiority of RUD.
The Deep Deterministic Policy Gradient (DDPG) algorithm is one of the best-known reinforcement learning methods. However, it is inefficient and unstable in practical applications. Moreover, the bias and variance of the Q estimate in the target function are sometimes difficult to control. This paper proposes a Regularly Updated Deterministic (RUD) policy gradient algorithm to address these problems. It proves theoretically that the learning procedure with RUD makes better use of new data in the replay buffer than the traditional procedure. In addition, the low variance of the Q value in RUD is better suited to the current Clipped Double Q-learning strategy. The paper reports a comparison experiment against previous methods, an ablation experiment against the original DDPG, and other analytical experiments in MuJoCo environments. The experimental results demonstrate the effectiveness and superiority of RUD. (C) 2020 Elsevier B.V. All rights reserved.
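The abstract names two ingredients without giving implementation details: making sure updates draw on recently collected replay data, and Clipped Double Q-learning targets for the critic. The sketch below is a rough, illustrative reading of those two ideas only; it is not the authors' code. The class and function names (RegularlyRefreshedBuffer, clipped_double_q_target), the clear-buffer-every-period behavior, and all hyperparameters are assumptions made for illustration.

```python
# Hedged sketch: clipped double-Q targets plus a periodically refreshed
# replay buffer, as one plausible reading of "regularly updated".
import random
from collections import deque

import torch


def clipped_double_q_target(reward, not_done, next_q1, next_q2, gamma=0.99):
    """Target y = r + gamma * min(Q1', Q2'); taking the minimum of two
    critics bounds the overestimation bias of the Q estimate."""
    next_q = torch.min(next_q1, next_q2)
    return reward + gamma * not_done * next_q


class RegularlyRefreshedBuffer:
    """Replay buffer that is emptied every `refresh_period` insertions, so
    sampled batches are guaranteed to contain recent transitions (an
    assumed, simplified stand-in for the paper's procedure)."""

    def __init__(self, capacity, refresh_period):
        self.storage = deque(maxlen=capacity)
        self.refresh_period = refresh_period
        self.inserted = 0

    def add(self, transition):
        if self.inserted > 0 and self.inserted % self.refresh_period == 0:
            self.storage.clear()  # discard stale experience at the period boundary
        self.storage.append(transition)
        self.inserted += 1

    def sample(self, batch_size):
        return random.sample(list(self.storage), min(batch_size, len(self.storage)))
```

In a DDPG-style training loop, such a buffer would replace the usual unbounded-reuse replay memory, and `clipped_double_q_target` would supply the critic's regression target; the rest of the actor-critic updates are unchanged under this reading.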
