Article

Goal-aware generative adversarial imitation learning from imperfect demonstration for robotic cloth manipulation

Journal

ROBOTICS AND AUTONOMOUS SYSTEMS
Volume 158, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.robot.2022.104264

Keywords

Generative adversarial imitation learning; Robotic cloth manipulation; Deep reinforcement learning

Funding

  1. JSPS, Japan KAKENHI [21H03522]
  2. JSPS, Japan Research Fellow Grant [20J11948]


Generative Adversarial Imitation Learning (GAIL) is a method that can learn policies from demonstrations without explicitly defining the reward function. This paper proposes Goal-Aware Generative Adversarial Imitation Learning (GA-GAIL), which addresses the issue of imperfect demonstration data by introducing a second discriminator to distinguish the goal state. GA-GAIL extends the standard GAIL framework to robustly learn desirable policies even from imperfect demonstrations.
Generative Adversarial Imitation Learning (GAIL) can learn policies from demonstrations without explicitly defining the reward function. GAIL has the potential to learn policies with high-dimensional observations as input, e.g., images. By applying GAIL to a real robot, robot policies might be obtained for daily activities such as washing, folding clothes, cooking, and cleaning. However, human demonstration data are often imperfect due to mistakes, which degrades the performance of the resulting policies. We address this issue by focusing on the following features: (1) many robotic tasks are goal-reaching tasks, and (2) labeling such goal states in demonstration data is relatively easy. With these in mind, this paper proposes Goal-Aware Generative Adversarial Imitation Learning (GA-GAIL), which trains a policy by introducing a second discriminator that distinguishes the goal state, in parallel with the first discriminator that indicates the demonstration data. This extends the standard GAIL framework to more robustly learn desirable policies even from imperfect demonstrations, through a goal-state discriminator that promotes achieving the goal state. Furthermore, GA-GAIL employs the Entropy-Maximizing Deep P-Network (EDPN) as a generator, which considers both the smoothness and causal entropy in the policy update, to achieve stable policy learning from two discriminators. Our proposed method was successfully applied to two real-robot cloth-manipulation tasks: turning a handkerchief over and folding clothes. We confirmed that it learns cloth-manipulation policies without task-specific reward function design. Videos of the real experiments are available at this URL. (c) 2022 Elsevier B.V. All rights reserved.
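The core idea in the abstract, combining an imitation signal with a goal-achievement signal from two discriminators, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear discriminators, the additive reward combination, and the `goal_weight` parameter are illustrative assumptions standing in for the learned networks and the EDPN generator described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LinearDiscriminator:
    """Toy stand-in for a learned discriminator: a logistic score in (0, 1)."""
    def __init__(self, dim):
        self.w = rng.normal(size=dim) * 0.1
        self.b = 0.0

    def __call__(self, x):
        return sigmoid(x @ self.w + self.b)

STATE_DIM, ACTION_DIM = 8, 2
# First discriminator: scores (state, action) pairs against demonstration data.
d_demo = LinearDiscriminator(STATE_DIM + ACTION_DIM)
# Second discriminator: scores states against the (easily labeled) goal states.
d_goal = LinearDiscriminator(STATE_DIM)

def combined_reward(state, action, goal_weight=0.5, eps=1e-8):
    """Generator reward: imitation term plus goal-achievement term.

    Each term uses the common GAIL-style surrogate -log(1 - D); the additive
    combination and goal_weight are assumptions for illustration only.
    """
    r_demo = -np.log(1.0 - d_demo(np.concatenate([state, action])) + eps)
    r_goal = -np.log(1.0 - d_goal(state) + eps)
    return r_demo + goal_weight * r_goal

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
print(combined_reward(s, a))  # a finite scalar reward
```

The goal-discriminator term rewards the policy for reaching labeled goal states even when individual demonstration trajectories contain mistakes, which is the intuition behind GA-GAIL's robustness to imperfect demonstrations.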

