4.7 Article

Hierarchical deep reinforcement learning to drag heavy objects by adult-sized humanoid robot

Journal

APPLIED SOFT COMPUTING
Volume 110, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2021.107601

Keywords

Humanoid robot; Deep reinforcement learning; Dragging object; Deep learning

Funding

  1. 'Chinese Language and Technology Center' of National Taiwan Normal University (NTNU) from The Featured Areas Research Center Program by the Ministry of Education (MOE) in Taiwan [MOST 108-2634-F-003-002, MOST 108-2634-F-003-003, MOST 108-2634-F-003-004, MOST 107-2811-E-003-503]
  2. Ministry of Science and Technology, Taiwan [MOST 108-2634-F-003-002, MOST 108-2634-F-003-003, MOST 108-2634-F-003-004, MOST 107-2811-E-003-503]

Abstract

The research introduces a novel hierarchical deep learning algorithm that learns how to drag heavy objects with an adult-sized humanoid robot for the first time. The algorithm utilizes a Three-layered Convolution Volumetric Network for 3D object classification, a lightweight real-time instance segmentation method for floor surface detection and classification, and a deep Q-learning algorithm for policy control of the robot's Center of Mass.
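To make the 3D-classification stage concrete, the following is a minimal, hypothetical sketch of a three-layer 3D convolutional classifier over a voxel occupancy grid rasterized from a point cloud, in the spirit of the TCVN described above. The 32^3 grid resolution, channel widths, layer strides, and the count of eight object classes are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a three-layer 3D convolutional classifier over a
# voxel occupancy grid, in the spirit of the TCVN described in the record.
# Grid size (32^3), channel widths, and class count are assumptions.
import torch
import torch.nn as nn

class TCVNSketch(nn.Module):
    def __init__(self, num_classes: int = 8, grid: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, stride=2, padding=2),   # layer 1
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # layer 2
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),  # layer 3
            nn.ReLU(inplace=True),
        )
        reduced = grid // 8  # spatial size after three stride-2 convolutions
        self.classifier = nn.Linear(64 * reduced ** 3, num_classes)

    def forward(self, occupancy: torch.Tensor) -> torch.Tensor:
        # occupancy: (batch, 1, grid, grid, grid) binary voxel grid built by
        # rasterizing the object's point cloud into an occupancy volume.
        x = self.features(occupancy)
        return self.classifier(x.flatten(1))

# Example: classify one voxelized point cloud (all-empty grid shown here).
logits = TCVNSketch()(torch.zeros(1, 1, 32, 32, 32))
```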
Most research on robot manipulation focuses on objects that are light enough for the robot to pick up. In daily life, however, some objects are too large or too heavy to be picked up or carried, so they must be dragged. Although bipedal humanoid robots nowadays have good mobility on level ground, dragging unfamiliar objects, including large and heavy ones, across various surfaces is an interesting research area with many applications; it promises insights into human manipulation and encourages the development of novel algorithms for robot motion planning and control. The problem is challenging not only because of the unknown and potentially variable friction at the feet, but also because the robot's feet may slip during unbalanced poses. In this paper, we propose a novel hierarchical deep learning algorithm that learns how to drag heavy objects with an adult-sized humanoid robot for the first time. First, we present a Three-layered Convolution Volumetric Network (TCVN) for 3D object classification with point-cloud volumetric occupancy grid integration. Second, we propose a lightweight real-time instance segmentation method named Tiny-YOLACT for the detection and classification of the floor surface. Third, we propose a deep Q-learning algorithm to learn the policy control of the Center of Mass of the robot (DQL-COM). Learning of the DQL-COM algorithm is bootstrapped in the ROS Gazebo simulator. After initial training, we complete training on the THORMANG-Wolf, a 1.4 m tall, 48 kg adult-sized humanoid robot with 27 degrees of freedom, on three distinct types of surfaces. We evaluate the performance of our approach by dragging eight different types of objects (e.g., a small suitcase, a large suitcase, a chair). The extensive experiments (480 trials on the real robot) included dragging a heavy object with a mass of 84.6 kg (nearly twice the robot's weight) and showed remarkable success rates of 92.92% when force-torque sensors were used and 83.75% without them. (C) 2021 Elsevier B.V. All rights reserved.
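The policy-control stage can likewise be pictured as a standard deep Q-learning loop over discrete Center-of-Mass adjustments. The sketch below assumes a small state vector (e.g., CoM offset, foot force-torque readings, detected surface class) and three discrete actions; these details, the network sizes, and the hyperparameters are assumptions for illustration only and do not reproduce the paper's DQL-COM implementation.

```python
# Minimal sketch of a deep Q-learning update for discrete Center-of-Mass
# (CoM) shift commands, loosely following the DQL-COM idea in the abstract.
# State layout, action set, network sizes, and hyperparameters are assumed.
import copy
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 3   # assumed: shift CoM backward / hold / forward
GAMMA, BATCH = 0.99, 64

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
target_net = copy.deepcopy(q_net)            # periodically synced target copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
# Replay buffer of (state, action, reward, next_state, done) float tensors.
replay: deque = deque(maxlen=50_000)

def act(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice among the discrete CoM shift commands."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step() -> None:
    """One DQN update from uniformly sampled replay transitions."""
    if len(replay) < BATCH:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, BATCH)))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a setup like the one the abstract describes, such an agent would first be trained in the ROS Gazebo simulator and then fine-tuned on the physical robot; the reward shaping and sensor integration used in the paper are not shown here.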
