Journal
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Volume 34, Issue 4-5, Pages 705-724
Publisher
SAGE PUBLICATIONS LTD
DOI: 10.1177/0278364914549607
Keywords
Robotic grasping; deep learning; RGB-D multi-modal data; Baxter; PR2; 3D feature learning
Funding
- ARO [W911-NF12-1-0267]
- Microsoft
- NSF CAREER
Abstract
We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
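The two ideas named in the abstract, a two-stage detection cascade and a multimodal group regularizer on the first-layer weights, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the network sizes, modality dimensions, `score`, `detect`, and `multimodal_group_penalty` names are all illustrative assumptions; only the overall structure (a cheap net pruning candidates for an expensive net, and a per-modality group norm on weight blocks) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature sizes for a flattened grasp-rectangle
# patch (depth, color, surface normals); the real dimensions are assumptions.
MODALITY_DIMS = {"depth": 24 * 24, "rgb": 3 * 24 * 24, "normals": 3 * 24 * 24}
D = sum(MODALITY_DIMS.values())

def make_net(hidden):
    """Random weights for a tiny one-hidden-layer scorer (untrained sketch)."""
    W1 = rng.normal(scale=0.01, size=(D, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.01, size=hidden)
    return W1, b1, W2, 0.0

def score(x, W1, b1, W2, b2):
    """Feed-forward pass: ReLU hidden layer, sigmoid graspability score."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

small_net = make_net(hidden=50)    # fast first stage: prunes unlikely grasps
large_net = make_net(hidden=400)   # slower second stage: re-ranks survivors

def detect(candidates, top_k=100):
    """Two-stage cascade: score every candidate with the small net,
    then re-score only the top_k survivors with the large net."""
    s1 = np.array([score(x, *small_net) for x in candidates])
    keep = np.argsort(s1)[-top_k:]
    s2 = np.array([score(candidates[i], *large_net) for i in keep])
    return int(keep[np.argmax(s2)])  # index of the best-scoring grasp

def multimodal_group_penalty(W1):
    """Structured regularizer sketch: for each hidden unit, take the L2 norm
    of each modality's weight block and sum the norms, encouraging hidden
    units to use only the modalities they need."""
    penalty, start = 0.0, 0
    for dim in MODALITY_DIMS.values():
        block = W1[start:start + dim, :]               # (dim, hidden)
        penalty += np.linalg.norm(block, axis=0).sum() # one norm per unit
        start += dim
    return penalty

candidates = rng.normal(size=(500, D))
best = detect(candidates)
```

The cascade's payoff is that the large net runs on `top_k` candidates instead of all of them, which is where the speedup described in the abstract comes from.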