Article

A grasps-generation-and-selection convolutional neural network for a digital twin of intelligent robotic grasping

Journal

Robotics and Computer-Integrated Manufacturing

Publisher

PERGAMON-ELSEVIER SCIENCE LTD

DOI: 10.1016/j.rcim.2022.102371

Keywords

Intelligent robotic grasping; Digital twin; Convolutional neural network; Deep learning; RGB-D image

Funding

  1. National Key R&D Program of China [2019YFB1705200]
  2. Zhejiang Provincial Natural Science Foundation of China [LZ22E050006, LY21E050008]
  3. State Key Laboratory of Fluid Power and Mechatronic Systems [SKLoFP_ZZ_2102]
  4. Science and Technology Projects of Inner Mongolia Autonomous Region [2020GG0275]


This paper proposes a new grasps-generation-and-selection convolutional neural network (GGS-CNN) for robotic grasping. The GGS-CNN generates grasp candidates by transforming RGB-D images into feature maps and evaluating the quality of the selected grasps. The method achieves high success rates when grasping both single and cluttered objects, and obtains the best grasp in less than 40 ms.
Robotic grasping plays an essential role in human-machine cooperation across household and industrial applications. Although humans instinctively execute accurate, stable, and rapid grasps even in constantly changing environments, intelligent grasping remains a challenging task for robots. As a prerequisite for grasping, a robot must correctly identify the best grasping location on unknown objects, typically via an artificial-intelligence approach, and this remains an open problem. This paper proposes a new grasps-generation-and-selection convolutional neural network (GGS-CNN), which is trained and implemented in a digital twin of intelligent robotic grasping (DTIRG). Defining a grasp by its 3-D position, rotation angle, and gripper width, the GGS-CNN generates grasp candidates by transforming red-green-blue-depth (RGB-D) images into feature maps and evaluating the quality of the selected grasps. The GGS-CNN is trained in both the virtual environment and the real-world counterpart of the DTIRG to detect accurate grasps. In grasping tests, the proposed GGS-CNN achieves success rates of 96.7% for single objects and 93.8% for cluttered objects, and obtains the best grasp from an RGB-D image in less than 40 ms.
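The abstract describes a fully convolutional pipeline: an RGB-D image is mapped to feature maps from which per-pixel grasp parameters (quality, rotation angle, gripper width) are read out, and the highest-quality candidate is selected. The sketch below illustrates that general scheme in PyTorch; it is not the authors' published architecture, and all layer sizes, head names, and the cos/sin angle encoding are illustrative assumptions.

```python
# Minimal sketch of a grasp-generation-and-selection network (not the
# authors' published GGS-CNN; layer sizes and heads are assumptions).
import torch
import torch.nn as nn


class GraspGenerationNet(nn.Module):
    """Maps a 4-channel RGB-D image to per-pixel grasp maps: quality
    (grasp score), angle (encoded as cos/sin of 2*theta to handle the
    gripper's rotational symmetry), and gripper width."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # One 1x1 head per grasp-parameter map.
        self.quality_head = nn.Conv2d(32, 1, kernel_size=1)
        self.cos_head = nn.Conv2d(32, 1, kernel_size=1)
        self.sin_head = nn.Conv2d(32, 1, kernel_size=1)
        self.width_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, rgbd):
        feat = self.decoder(self.encoder(rgbd))
        return (
            torch.sigmoid(self.quality_head(feat)),  # grasp quality in [0, 1]
            self.cos_head(feat),                      # cos(2*theta)
            self.sin_head(feat),                      # sin(2*theta)
            self.width_head(feat),                    # normalized gripper width
        )


def select_best_grasp(quality, cos_map, sin_map, width_map):
    """Grasp selection: pick the pixel with the highest predicted quality
    and decode its rotation angle and gripper width."""
    q = quality.squeeze()
    idx = torch.argmax(q)
    v, u = divmod(idx.item(), q.shape[-1])  # row (v), column (u) of best pixel
    angle = 0.5 * torch.atan2(sin_map.squeeze()[v, u], cos_map.squeeze()[v, u])
    width = width_map.squeeze()[v, u]
    # Pixel (u, v) plus the depth value there yields the 3-D grasp
    # position after back-projection with the camera intrinsics.
    return (u, v), angle.item(), width.item()


if __name__ == "__main__":
    net = GraspGenerationNet().eval()
    rgbd = torch.rand(1, 4, 224, 224)  # dummy RGB-D frame
    with torch.no_grad():
        pixel, angle, width = select_best_grasp(*net(rgbd))
    print(f"best grasp at pixel {pixel}, angle {angle:.3f} rad, width {width:.3f}")
```

Encoding the rotation angle as cos(2θ)/sin(2θ) is a common choice for parallel-jaw grasps, since a grasp is unchanged by a 180° rotation of the gripper; decoding with 0.5·atan2 recovers θ unambiguously.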

