Article

Hierarchical Intermittent Motor Control With Deterministic Policy Gradient

Journal

IEEE ACCESS
Volume 7, Pages 41799-41810

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2019.2904910

Keywords

Hierarchical reinforcement learning; intermittent control; deterministic policy gradient; continuous action control; motor control

Funding

  1. National Natural Science Foundation of China [91748122]
  2. Shanghai Science and Technology Committee [17JC1400603]
  3. Natural Science Foundation Program of Shanghai [18ZR1442700]

Abstract

It has been shown that neural motor control exploits hierarchical and intermittent representations. In this paper, we propose a hierarchical deep reinforcement learning (DRL) method that learns a continuous control policy across multiple levels, unified with the neuroscience principle of the minimum transition hypothesis. The control policies at the two levels of the hierarchy operate at different time scales. The high-level controller produces intermittent actions that set a sequence of goals for the low-level controller, which in turn executes basic skills modulated by those goals. The goal planning and the basic motor skills are trained jointly with the proposed algorithm: hierarchical intermittent deep deterministic policy gradient (HI-DDPG). The performance of the method is validated on two continuous control problems. The results show that the method successfully learns to temporally decompose compound tasks into sequences of basic motions with sparse transitions, and it outperforms previous DRL methods that lack a hierarchical continuous representation.
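The two-level scheme summarized above can be illustrated with a minimal sketch: a high-level controller that updates its goal only intermittently (sparse transitions, in the spirit of the minimum transition hypothesis) and a low-level controller that executes a basic skill modulated by the current goal. The class, the threshold rule, and the proportional low-level policy below are hypothetical stand-ins, not the learned DDPG networks of the paper.

```python
import numpy as np

class HierarchicalController:
    """Sketch of two-level intermittent control: the high level emits a
    goal only when the state drifts past a threshold; the low level is a
    simple proportional policy toward that goal (hypothetical stand-ins
    for the learned high- and low-level DDPG policies)."""

    def __init__(self, switch_threshold=0.5):
        self.switch_threshold = switch_threshold  # governs transition sparsity
        self.goal = None

    def high_level(self, state):
        # Intermittent action: set a new goal only if no goal exists or the
        # state has drifted far from the current goal; otherwise hold it.
        if self.goal is None or np.linalg.norm(state - self.goal) > self.switch_threshold:
            self.goal = state.round()  # hypothetical goal-setting rule
            return True                # a goal transition occurred
        return False

    def low_level(self, state, goal):
        # Basic motor skill modulated by the goal: move halfway toward it.
        return 0.5 * (goal - state)

    def step(self, state):
        switched = self.high_level(state)
        action = self.low_level(state, self.goal)
        return action, switched

ctrl = HierarchicalController()
state = np.array([0.9, -0.2])
transitions = 0
for _ in range(20):
    action, switched = ctrl.step(state)
    transitions += switched
    state = state + action  # trivial integrator dynamics
print(transitions)  # prints 1: the goal is set once, then held for all 20 steps
```

The point of the sketch is the division of labor: over 20 low-level control steps the high level intervenes only once, so the compound trajectory is decomposed into a sparse goal transition followed by repeated execution of the same basic motion.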
