Article

Empowering the Diversity and Individuality of Option: Residual Soft Option Critic Framework

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3128666

Keywords

Entropy; Task analysis; Reinforcement learning; Mutual information; Diversity reception; Convergence; Games; Deep reinforcement learning (RL); diversity and individuality; hierarchical RL (HRL); option critic; residual

Abstract

Extracting temporal abstractions (options), which empower the action space, is a crucial challenge in hierarchical reinforcement learning. Under a well-structured action space, decision-making agents can search more deeply or plan more efficiently by pruning irrelevant action candidates. However, automatically capturing a well-performing temporal abstraction is nontrivial because of insufficient exploration and inadequate functionality of the learned options. We alleviate this challenge from two perspectives: diversity and individuality. For diversity, we propose a maximum-entropy model over ensembled options to encourage exploration. For individuality, we propose to distinguish each option accurately via mutual information minimization, so that each option can better express itself and function. We name our framework ensemble with soft option (ESO) critics. Furthermore, the residual algorithm (RA) with a bidirectional target network is introduced to stabilize bootstrapping, yielding a residual version of ESO. Extensive experiments and detailed analysis show that our method boosts performance on commonly used continuous-control tasks.
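The residual component mentioned in the abstract can be made concrete with a small sketch. The code below is a minimal, hypothetical PyTorch example, not the authors' implementation: it shows a residual-algorithm-style TD loss that mixes the usual semi-gradient update (stop-gradient target) with the full residual-gradient update through a weight eta. The network architecture, the value of eta, and all names are illustrative assumptions, and the paper's bidirectional target network and option machinery are omitted.

# Illustrative sketch only: a residual-algorithm (RA) style TD loss that blends
# the direct (semi-gradient) update with the full residual-gradient update.
# Network sizes, `eta`, and variable names are assumptions; the paper's
# bidirectional target network and option components are not reproduced here.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def residual_td_loss(q_net, s, a, r, s_next, a_next, gamma=0.99, eta=0.2):
    # eta = 0 recovers the usual semi-gradient target (gradient blocked through
    # the bootstrap term); eta = 1 is the pure residual-gradient update, whose
    # gradient also flows through Q(s', a').
    q = q_net(s, a)
    q_next = q_net(s_next, a_next)
    td_direct = q - (r + gamma * q_next).detach()   # stop-gradient target
    td_residual = q - (r + gamma * q_next)          # gradient flows through target
    return 0.5 * ((1.0 - eta) * td_direct.pow(2) + eta * td_residual.pow(2)).mean()

# Usage with random tensors (shapes only, for illustration):
q_net = QNet(state_dim=8, action_dim=2)
s, a = torch.randn(32, 8), torch.randn(32, 2)
s2, a2 = torch.randn(32, 8), torch.randn(32, 2)
r = torch.randn(32)
residual_td_loss(q_net, s, a, r, s2, a2).backward()

Blending the two terms keeps the familiar bootstrapped target while adding a gradient contribution through the next-state value, which is the stabilization role the abstract attributes to RA in the residual version of ESO.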

