Article

Object Pose Estimation Incorporating Projection Loss and Discriminative Refinement

Journal

IEEE ACCESS
Volume 9, Pages 18597-18606

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2021.3054493

Keywords

Feature extraction; Pose estimation; Three-dimensional displays; Solid modeling; Data models; Location awareness; Two-dimensional displays; Object pose estimation; LINEMOD; Occlusion LINEMOD; deep learning; convolutional neural network

Funding

  1. Chinese Language and Technology Center of the National Taiwan Normal University (NTNU) through the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE), Taiwan
  2. Ministry of Science and Technology, Taiwan, through the Pervasive Artificial Intelligence Research (PAIR) Labs [MOST 109-2634-F-003-006, MOST 109-2634-F-003-007]

Abstract

This paper proposes a new method for 3D object pose estimation that combines a projection loss function with a refinement network to correct pose predictions. Experimental results show that the method outperforms existing techniques in both accuracy and practicality.
The accurate estimation of three-dimensional (3D) object pose is important in a wide range of applications, such as robotics and augmented reality. The key to estimating object poses is matching feature points in the captured image with predefined ones on the 3D model of the object. Existing learning-based pose estimation systems use a voting strategy to estimate the feature points in a vector space and thereby improve the accuracy of the estimated pose. However, the loss function of such approaches takes into account only the direction of the vector, resulting in error-prone localization of feature points. Therefore, this paper considers a projection loss function that deals with the error of the vector field and incorporates a refinement network that revises the predicted pose into a more accurate final output. Experimental results show that the proposed method outperforms state-of-the-art methods in terms of the ADD(-S) metric on the LINEMOD and Occlusion LINEMOD datasets. Moreover, the proposed method can be applied to real-world practical scenarios in real time to simultaneously estimate the poses of multiple objects.
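To illustrate the intuition behind the projection loss, the sketch below (a rough approximation, not the paper's exact formulation) contrasts a direction-only vector-field loss, as used in voting-based keypoint estimation, with a projection-style loss that also penalizes how far the ray cast by each pixel's predicted direction passes from the true 2D keypoint. The NumPy implementation, function names, and weighting are illustrative assumptions only.

    # Illustrative sketch: direction-only loss vs. a projection-style loss
    # on a per-pixel vector field that votes for a 2D keypoint location.
    import numpy as np

    def direction_loss(pred_dirs, gt_dirs):
        # pred_dirs, gt_dirs: (N, 2) unit vectors from each pixel toward a keypoint.
        # Penalizes only the component error of the predicted direction; a small
        # angular error at a pixel far from the keypoint still casts a vote far
        # off target, which is the localization weakness noted in the abstract.
        return np.mean(np.sum((pred_dirs - gt_dirs) ** 2, axis=1))

    def projection_loss(pixels, pred_dirs, keypoint):
        # pixels: (N, 2) pixel coordinates casting votes; keypoint: (2,) 2D keypoint.
        # Measures the distance from the keypoint to the ray {p + t * d}, i.e. the
        # perpendicular residual of (keypoint - p) with respect to the predicted
        # direction d, so distant pixels with slightly wrong directions are
        # penalized more heavily.
        d = pred_dirs / (np.linalg.norm(pred_dirs, axis=1, keepdims=True) + 1e-8)
        to_kp = keypoint[None, :] - pixels                  # (N, 2)
        along = np.sum(to_kp * d, axis=1, keepdims=True)    # scalar projection on d
        perp = to_kp - along * d                            # perpendicular residual
        return np.mean(np.linalg.norm(perp, axis=1))

Because the perpendicular residual grows with the pixel-to-keypoint distance, penalizing it ties the loss to the actual voting error rather than to direction alone, which is the general motivation for a projection-based loss on the vector field.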
