4.6 Article

iCaps: Iterative Category-Level Object Pose and Shape Estimation

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 7, Issue 2, Pages 1784-1791

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3142441

Keywords

RGB-D Perception; deep learning for visual perception; perception for grasping and manipulation; category-level 6D pose estimation; object shape estimation


Abstract

This letter proposes a category-level 6D object pose and shape estimation approach iCaps, which allows tracking 6D poses of unseen objects in a category and estimating their 3D shapes. We develop a category-level auto-encoder network using depth images as input, where feature embeddings from the auto-encoder encode poses of objects in a category. The auto-encoder can be used in a particle filter framework to estimate and track 6D poses of objects in a category. By exploiting an implicit shape representation based on signed distance functions, we build a LatentNet to estimate a latent representation of the 3D shape given the estimated pose of an object. Then the estimated pose and shape can be used to update each other in an iterative way. Our category-level 6D object pose and shape estimation pipeline only requires 2D detection and segmentation for initialization. We evaluate our approach on a publicly available dataset and demonstrate its effectiveness. In particular, our method achieves comparably high accuracy on shape estimation.
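The abstract describes an alternating procedure in which the estimated pose and the estimated shape update each other. The sketch below illustrates only that iteration loop under stated assumptions: the function names (latentnet_estimate_shape, refine_pose_with_shape), their placeholder bodies, and the 6-DoF pose parametrization are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# A minimal sketch of the alternating pose/shape refinement loop described in the abstract.
# All function bodies are illustrative placeholders (random/no-op), not iCaps code.
import numpy as np

rng = np.random.default_rng(0)

def latentnet_estimate_shape(pose, depth_crop):
    """Stand-in for LatentNet: predict a latent SDF shape code from the
    current pose estimate and the observed depth crop."""
    return rng.normal(scale=0.01, size=64)

def refine_pose_with_shape(pose, shape_code, depth_crop):
    """Stand-in for the pose update: in the paper this step would re-weight
    particle-filter pose hypotheses against the decoded SDF shape; here it
    only perturbs the pose so the sketch stays runnable."""
    return pose + rng.normal(scale=1e-3, size=pose.shape)

def iterative_pose_and_shape(depth_crop, init_pose, n_iters=5):
    """Alternate shape and pose estimation: the current pose conditions the
    shape code, and the decoded shape in turn refines the pose."""
    pose, shape_code = init_pose, None
    for _ in range(n_iters):
        shape_code = latentnet_estimate_shape(pose, depth_crop)
        pose = refine_pose_with_shape(pose, shape_code, depth_crop)
    return pose, shape_code

if __name__ == "__main__":
    depth_crop = rng.random((128, 128))   # depth crop from the 2D detection/segmentation used for initialization
    init_pose = np.zeros(6)               # translation (3) + axis-angle rotation (3)
    pose, shape_code = iterative_pose_and_shape(depth_crop, init_pose)
    print(pose.shape, shape_code.shape)   # (6,), (64,)
```

In the pipeline the abstract describes, the latent code parameterizes a signed-distance-function shape and the pose is tracked by a particle filter over auto-encoder embeddings of depth crops; the stand-ins above preserve only the data flow of the iterative update, not those components.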
