3.8 Proceedings Paper

Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network

Publisher

IEEE
DOI: 10.1109/IROS45743.2020.9340777

Keywords

-

Funding

  1. Rochester Institute of Technology

Abstract

In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from the n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (~20 ms). We evaluate the proposed model architecture on standard datasets and a diverse set of household objects. We achieved state-of-the-art accuracy of 97.7% and 94.6% on Cornell and Jacquard grasping datasets, respectively. We also demonstrate a grasp success rate of 95.4% and 93% on household and adversarial objects, respectively, using a 7 DoF robotic arm.
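To make the abstract's description concrete, below is a minimal PyTorch sketch of a generative residual grasping network of this kind. It is an illustrative approximation, not the authors' released implementation: the 4-channel RGB-D input, layer widths, number of residual blocks, and the four pixel-wise output heads (grasp quality, angle encoded as cos 2θ / sin 2θ, and gripper width) are assumptions based on the common generative-grasping formulation the abstract points to.

```python
# Sketch of a GR-ConvNet-style generative grasping network (assumed architecture).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class GenerativeGraspNet(nn.Module):
    """Encoder -> residual blocks -> decoder, emitting four pixel-wise grasp maps."""
    def __init__(self, in_channels: int = 4, width: int = 32, n_res: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, width, 9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width * 2, width * 4, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.res_blocks = nn.Sequential(*[ResidualBlock(width * 4) for _ in range(n_res)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 4, width * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Output heads: grasp quality, angle (cos 2θ, sin 2θ), and gripper width maps.
        self.quality = nn.Conv2d(width, 1, 3, padding=1)
        self.cos2t = nn.Conv2d(width, 1, 3, padding=1)
        self.sin2t = nn.Conv2d(width, 1, 3, padding=1)
        self.grip_width = nn.Conv2d(width, 1, 3, padding=1)

    def forward(self, x):
        feat = self.decoder(self.res_blocks(self.encoder(x)))
        return self.quality(feat), self.cos2t(feat), self.sin2t(feat), self.grip_width(feat)


if __name__ == "__main__":
    # Example: a 4-channel (RGB-D) 224x224 scene image as the n-channel input.
    net = GenerativeGraspNet(in_channels=4)
    q, c, s, w = net(torch.randn(1, 4, 224, 224))
    print(q.shape, c.shape, s.shape, w.shape)  # each: (1, 1, 224, 224)
```

Under this formulation, a grasp would typically be read off the predicted maps by taking the pixel with the highest quality score, recovering the angle as 0.5·atan2(sin 2θ, cos 2θ), and scaling the width map to the gripper's opening range.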

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-
