Journal
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
Volume: -, Issue: -, Pages: 6667-6676
Publisher: IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00656
Keywords
-
Funding
- National Research Foundation, Singapore under its AI Singapore Program (AISG Award) [AISG2-RP-2020-016]
- Tier 2 grant from Singapore Ministry of Education [MOE-T2EP20120-0011]
Abstract
Despite recent success in incorporating learning into point cloud registration, many works focus on learning feature descriptors and continue to rely on nearest-neighbor feature matching and outlier filtering through RANSAC to obtain the final set of correspondences for pose estimation. In this work, we conjecture that attention mechanisms can replace the role of explicit feature matching and RANSAC, and thus propose an end-to-end framework to directly predict the final set of correspondences. We use a network architecture consisting primarily of transformer layers containing self and cross attentions, and train it to predict the probability each point lies in the overlapping region and its corresponding position in the other point cloud. The required rigid transformation can then be estimated directly from the predicted correspondences without further post-processing. Despite its simplicity, our approach achieves state-of-the-art performance on 3DMatch and ModelNet benchmarks. Our source code can be found at https://github.com/yewzijian/RegTR.
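The abstract notes that the rigid transformation is estimated directly from the predicted correspondences without further post-processing. A standard closed-form solver for this final step is the weighted Kabsch/Procrustes algorithm; the sketch below (plain NumPy, not the authors' code, with hypothetical function and argument names) shows how a rotation and translation can be recovered from correspondences weighted by predicted overlap confidence.

```python
import numpy as np

def weighted_rigid_transform(src, tgt, weights):
    """Find R, t minimizing sum_i w_i * ||R @ src[i] + t - tgt[i]||^2.

    src, tgt: (N, 3) arrays of corresponding points.
    weights:  (N,) non-negative confidences (e.g. predicted overlap scores).
    Returns (R, t): 3x3 rotation and 3-vector translation.
    """
    w = weights / weights.sum()
    src_c = (w[:, None] * src).sum(axis=0)      # weighted centroid of source
    tgt_c = (w[:, None] * tgt).sum(axis=0)      # weighted centroid of target
    X = src - src_c                             # centered source points
    Y = tgt - tgt_c                             # centered target points
    H = (w[:, None] * X).T @ Y                  # 3x3 weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation, det(R) = +1
    t = tgt_c - R @ src_c
    return R, t
```

Because the solver is differentiable (SVD has well-defined gradients away from degenerate cases), it can sit at the end of a network like the one described and be trained end-to-end.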
Authors