Article

Generative Robotic Grasping Using Depthwise Separable Convolution

Journal

COMPUTERS & ELECTRICAL ENGINEERING
Volume 94, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compeleceng.2021.107318

Keywords

Deep learning; Object grasping; Real-time detection; Robot vision; Light-weight network

Abstract

The paper introduces an end-to-end deep learning approach for grasp detection that models the relations among feature channels with depthwise and pointwise convolutions; it improves accuracy on the Jacquard dataset and predicts grasp points well on objects from novel classes.
In this paper, we present an end-to-end deep learning approach for grasp detection. Our method processes discretely sampled depth images in real time, avoiding the long computation times and registration difficulties caused by the object modelling and global searching used in traditional methods. The method uses depthwise convolution and pointwise convolution to model the relations among channels and directly parameterizes a grasp quality value for every pixel. From an input image, it computes a rectangular grasping box and generates a grasping pose. In the experimental evaluation on the Jacquard dataset, we compared the proposed method with other baseline methods; the accuracy of the proposed method improved by 5% to 7%, which shows that our method can effectively predict grasp points on objects from novel classes.
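
The abstract does not give the network architecture, so the following is only a minimal sketch of the core idea: a depthwise separable convolution block (depthwise spatial filtering followed by a pointwise 1x1 convolution that mixes channels) feeding per-pixel grasp heads. The class names, layer widths, and the cos/sin encoding of the grasp angle are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch (assumed, not the paper's implementation): a depthwise
# separable convolution block and a per-pixel grasp head that parameterizes
# a grasp quality value for every pixel of a depth image.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # Depthwise convolution: one spatial filter per input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise (1x1) convolution: models relations across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class GraspNetSketch(nn.Module):
    """Maps a single-channel depth image to per-pixel grasp maps:
    quality, rotation angle (encoded as cos/sin of 2*theta), and width."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            DepthwiseSeparableConv(1, 16),
            DepthwiseSeparableConv(16, 32),
            DepthwiseSeparableConv(32, 32),
        )
        self.quality_head = nn.Conv2d(32, 1, kernel_size=1)  # grasp quality per pixel
        self.angle_head = nn.Conv2d(32, 2, kernel_size=1)    # cos(2θ), sin(2θ) per pixel
        self.width_head = nn.Conv2d(32, 1, kernel_size=1)    # gripper opening per pixel

    def forward(self, depth):
        feat = self.backbone(depth)
        return (torch.sigmoid(self.quality_head(feat)),
                self.angle_head(feat),
                self.width_head(feat))

# Usage: take the pixel with the highest predicted quality as the grasp centre,
# then read the angle and width maps at that pixel to form the grasp rectangle.
if __name__ == "__main__":
    net = GraspNetSketch()
    depth = torch.randn(1, 1, 224, 224)  # (batch, channel, H, W) depth image
    quality, angle, width = net(depth)
    centre = torch.nonzero(quality[0, 0] == quality[0, 0].max())[0]
    print("best grasp pixel:", centre.tolist())
```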
