3.8 Proceedings Paper

2D3D-MatchNet: Learning to Match Keypoints Across 2D Image and 3D Point Cloud

Publisher

IEEE
DOI: 10.1109/icra.2019.8794415

Keywords

-

Funding

  1. National Research Foundation (NRF) Singapore, through the Singapore-MIT Alliance for Research and Technology's FM IRG
  2. Singapore MOE Tier 1 grant [R-252-000-637-112]

Abstract

Large-scale point clouds generated by 3D sensors are more accurate than their image-based counterparts. However, they are seldom used for visual pose estimation because of the difficulty of obtaining 2D-3D image-to-point-cloud correspondences. In this paper, we propose 2D3D-MatchNet, an end-to-end deep network architecture that jointly learns descriptors for 2D and 3D keypoints from images and point clouds, respectively. As a result, we can directly match and establish 2D-3D correspondences between a query image and a 3D point cloud reference map for visual pose estimation. We build our Oxford 2D-3D Patches dataset from the Oxford RobotCar dataset, with ground-truth camera poses and 2D-3D image-to-point-cloud correspondences, for training and testing the deep network. Experimental results verify the feasibility of our approach.
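The core idea of the abstract, learning descriptors in a shared space across the 2D and 3D domains, can be sketched as two encoder branches (a small CNN for image patches and a PointNet-style encoder for local point sets) trained with a triplet loss so that descriptors of matching 2D-3D pairs end up close together. The PyTorch sketch below is a minimal illustration of this cross-domain setup under that assumption, not the authors' exact architecture; all layer sizes and the triplet_loss helper are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImageBranch(nn.Module):
        # Embeds a 2D image patch around a keypoint into a descriptor.
        # Layer sizes are illustrative, not the paper's architecture.
        def __init__(self, dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(128, dim)

        def forward(self, x):                      # x: (B, 3, H, W)
            f = self.features(x).flatten(1)
            return F.normalize(self.fc(f), dim=1)  # unit-length descriptor

    class PointBranch(nn.Module):
        # PointNet-style encoder for a local 3D point patch.
        def __init__(self, dim=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, 128), nn.ReLU(),
                nn.Linear(128, 256), nn.ReLU(),
            )
            self.fc = nn.Linear(256, dim)

        def forward(self, pts):                    # pts: (B, N, 3)
            f = self.mlp(pts).max(dim=1).values    # order-invariant pooling
            return F.normalize(self.fc(f), dim=1)

    def triplet_loss(img_desc, pos_pt_desc, neg_pt_desc, margin=0.2):
        # Pull the matching 2D/3D descriptor pair together,
        # push the non-matching pair at least `margin` further apart.
        d_pos = (img_desc - pos_pt_desc).norm(dim=1)
        d_neg = (img_desc - neg_pt_desc).norm(dim=1)
        return F.relu(d_pos - d_neg + margin).mean()

    # Example: one loss evaluation on random stand-in data.
    img = torch.randn(8, 3, 64, 64)   # 2D patches around image keypoints
    pos = torch.randn(8, 256, 3)      # matching local 3D point patches
    neg = torch.randn(8, 256, 3)      # non-matching point patches
    loss = triplet_loss(ImageBranch()(img), PointBranch()(pos), PointBranch()(neg))

Once both branches embed into the same descriptor space, nearest-neighbour search yields putative 2D-3D matches between a query image and the point cloud map, and the camera pose can then be recovered with a standard PnP solver inside a RANSAC loop (e.g. OpenCV's cv2.solvePnPRansac).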
