Article

Deep learning for detecting robotic grasps

Journal

INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Volume 34, Issue 4-5, Pages 705-724

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/0278364914549607

Keywords

Robotic grasping; deep learning; RGB-D multi-modal data; Baxter; PR2; 3D feature learning

Funding

  1. ARO [W911-NF12-1-0267]
  2. Microsoft
  3. NSF CAREER
  4. Google

Abstract

We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. To make detection fast and robust, we present a two-step cascaded system with two deep networks, in which the top detections from the first network are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but runs only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured multimodal group regularization to the network weights. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
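The two ideas in the abstract can be sketched in a few lines: a cheap network prunes the candidate pool before an expensive network re-ranks the survivors, and a group penalty on the first-layer weights encourages each learned feature to draw on only a subset of input modalities. The network sizes, thresholds, and helper names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two trained networks: a small, fast
# network used for pruning and a larger, slower one for re-ranking.
W_small = rng.normal(size=(50, 1))    # few features -> cheap to evaluate
W_large = rng.normal(size=(500, 1))   # more features -> more accurate

def score(features, weights):
    """Sigmoid score for candidate grasps given their feature vectors."""
    return 1.0 / (1.0 + np.exp(-features @ weights))

def cascade_detect(feats_small, feats_large, top_k=10):
    """Two-step cascade: prune with the small net, then re-rank the
    survivors with the large net; returns the index of the best grasp."""
    # Stage 1: score every candidate with the fast network.
    s1 = score(feats_small, W_small).ravel()
    survivors = np.argsort(s1)[-top_k:]           # keep only the top-k
    # Stage 2: the expensive network runs on the few survivors.
    s2 = score(feats_large[survivors], W_large).ravel()
    return survivors[np.argmax(s2)]

def multimodal_group_penalty(W, modality_slices, lam=0.1):
    """Illustrative group regularizer: for each hidden unit (column of W),
    sum the L2 norms of its weights within each modality's block, which
    pushes whole modality blocks of a feature toward zero together."""
    return lam * sum(np.linalg.norm(W[sl, :], axis=0).sum()
                     for sl in modality_slices)

# Usage: 1000 candidate grasp rectangles with two feature representations,
# and (say) depth occupying the first half of the large feature vector.
n = 1000
feats_small = rng.normal(size=(n, 50))
feats_large = rng.normal(size=(n, 500))
best = cascade_detect(feats_small, feats_large)
penalty = multimodal_group_penalty(W_large, [slice(0, 250), slice(250, 500)])
```

With 1000 candidates and `top_k=10`, the large network is evaluated on only 1% of the pool, which is the source of the cascade's speedup.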

