Proceedings Paper

OnePose: One-Shot Object Pose Estimation without CAD Models

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00670

Keywords

-

Funding

  1. National Key Research and Development Program of China [2020AAA0108901]
  2. NSFC [62172364]
  3. ZJU-SenseTime Joint Lab of 3D Vision


OnePose is a novel method for object pose estimation that does not rely on CAD models and can handle objects in arbitrary categories. It efficiently matches 2D interest points in query images with 3D points in the SfM model, enabling stable detection and tracking of 6D poses in real-time.
We propose a new method named OnePose for object pose estimation. Unlike existing instance-level or category-level methods, OnePose does not rely on CAD models and can handle objects in arbitrary categories without instance- or category-specific network training. OnePose draws its idea from visual localization and only requires a simple RGB video scan of the object to build a sparse SfM model of the object. Then, this model is registered to new query images with a generic feature matching network. To mitigate the slow runtime of existing visual localization methods, we propose a new graph attention network that directly matches 2D interest points in the query image with the 3D points in the SfM model, resulting in efficient and robust pose estimation. Combined with a feature-based pose tracker, OnePose is able to stably detect and track 6D poses of everyday household objects in real-time. We also collected a large-scale dataset that consists of 450 sequences of 150 objects. Code and data are available at the project page: https://zju3dv.github.io/onepose/.
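The core pipeline the abstract describes is: match 2D interest points in the query image against 3D points of the object's SfM model, then recover the 6D pose from those 2D-3D correspondences. Below is a minimal illustrative sketch of that idea, not the authors' implementation: it substitutes a simple mutual-nearest-neighbour descriptor match for OnePose's learned graph attention network, assumes L2-normalised descriptors, and uses OpenCV's PnP + RANSAC solver for the pose step. All function names, array shapes, and thresholds here are assumptions for illustration.

```python
import numpy as np
import cv2


def match_2d_3d(desc_2d: np.ndarray, desc_3d: np.ndarray) -> np.ndarray:
    """Toy 2D-3D matching: mutual nearest neighbours between query-image
    descriptors (N x D) and aggregated SfM point descriptors (M x D).
    OnePose replaces this step with a graph attention network."""
    sim = desc_2d @ desc_3d.T                          # cosine similarity (descriptors assumed L2-normalised)
    nn_12 = sim.argmax(axis=1)                         # best 3D point for each 2D keypoint
    nn_21 = sim.argmax(axis=0)                         # best 2D keypoint for each 3D point
    mutual = nn_21[nn_12] == np.arange(len(desc_2d))   # keep only mutually consistent pairs
    return np.stack([np.arange(len(desc_2d))[mutual], nn_12[mutual]], axis=1)


def estimate_pose(kpts_2d, pts_3d, matches, K):
    """Recover the 6D object pose from matched 2D-3D correspondences
    with PnP + RANSAC (camera intrinsics K, zero distortion assumed)."""
    obj = pts_3d[matches[:, 1]].astype(np.float64)     # 3D points from the SfM model
    img = kpts_2d[matches[:, 0]].astype(np.float64)    # corresponding 2D keypoints
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                         # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
```

In the actual system, the matching network and a feature-based tracker keep this loop running in real time; the sketch above only conveys how 2D-3D correspondences yield a pose once they are established.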


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-