Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 3, Pages 4249-4256
Publisher
IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/LRA.2021.3064211
Keywords
Deep learning in grasping and manipulation; force and tactile sensing; perception for grasping and manipulation
Funding
- National Science Foundation, NSF NRI Grant 1925194
- NSF Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems (Grant 1925194)
Summary
The proposed framework utilizes a support set of CAD models to augment tactile observations, facilitating object recognition, visualization, and the development of manipulation policies solely from tactile samples. By employing the uGPIS surface reconstruction method together with prior knowledge from the CAD-model support set, grasp regions can be detected successfully.
Abstract
Planning object manipulation policies from tactile observations alone is a challenging task, due to the many sources of variation in the measured point cloud (e.g., sparsity, missing regions, and rotation) and the limited sensory information available through tactile sensing. Moreover, mainstream grasp planners are designed for well-structured point cloud data and lack the crucial ability to plan grasps in unexplored regions, which are common during tactile sampling. Hence, it is crucial to detect grasp regions from incomplete and unstructured tactile point cloud data. To address this limitation, we propose a novel framework that utilizes a support set of CAD models to augment the tactile observations, and thereby facilitates object recognition, visualization, and the development of manipulation policies solely from tactile samples. To cope with the noise and sparsity of tactile observations, we propose uGPIS, a surface reconstruction method that combines an occupancy probability function with Gaussian Process Regression to recover the underlying surface from tactile point clouds. We then complete the partially observed tactile point cloud using prior knowledge obtained from the support set of full CAD models; this prior provides the enriched geometric information needed to determine the grasp regions. Our experimental results in a physics simulation show that our method can successfully exploit the prior knowledge from the support set to enhance the grasp success rate.