Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 4, Pages 7270-7277
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3097268
Keywords
Deep learning for visual perception; deep learning methods; localization
Category
Funding
- Autonomous Tunnel Exploitation (ATE) Project of Agency for Defense Development (ADD), Korea
This study introduces a point cloud registration method named Geometry Guided Network (G²Net), which uses spherical positional encoding and an unsupervised geometry consistency loss to learn globally unique features, and demonstrates its superiority over current state-of-the-art models in experimental tasks.
Point cloud registration is a well-known way to align two different point clouds via rigid transform estimation in robotics and computer vision applications. In particular, deep learning-based methods have recently attempted to extract a highly distinguishable feature for each point. However, it remains challenging to learn distinctive features for points at different locations that share a similar local shape, which limits performance. To address this issue, we propose the Geometry Guided Network, namely G²Net, for point cloud registration with a spherical positional encoding method and an unsupervised geometry consistency loss. Combined with self-attention, the positional encoding learns globally unique features by assigning global geometric positional information to irregular 3D points. The uniqueness of each feature is further strengthened with the geometry consistency loss across two different point cloud sets. We demonstrate that G²Net outperforms current state-of-the-art models in point cloud registration tasks in both full and partial registration experiments on the ModelNet40 and Augmented ICL-NUIM datasets. Various visualizations of the learned features are provided to demonstrate the global shape-awareness of our methodology.
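To make the idea of spherical positional encoding concrete, here is a minimal illustrative sketch: it converts 3D points into spherical coordinates (radius, polar angle, azimuth) and applies a sinusoidal frequency encoding to each coordinate. This is an assumption-laden reading of the term for illustration only, not the authors' actual G²Net encoding, whose details are not given in this abstract.

```python
import numpy as np

def spherical_positional_encoding(points, num_freqs=4):
    """Hypothetical sketch of a 'spherical positional encoding'.

    Converts Cartesian points (N, 3) into spherical coordinates
    (r, theta, phi), then expands each coordinate with sin/cos at
    geometrically spaced frequencies. NOT the paper's exact method.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    # Polar angle; guard against division by zero at the origin.
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    phi = np.arctan2(y, x)  # azimuth in (-pi, pi]
    coords = np.stack([r, theta, phi], axis=1)          # (N, 3)
    freqs = 2.0 ** np.arange(num_freqs)                 # frequency ladder
    angles = coords[:, :, None] * freqs[None, None, :]  # (N, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(points.shape[0], -1)             # (N, 3 * 2 * F)

pts = np.random.default_rng(0).normal(size=(5, 3))
print(spherical_positional_encoding(pts).shape)  # (5, 24)
```

In a registration pipeline such encodings would be concatenated with, or added to, per-point features before the self-attention layers, giving each point a globally referenced position signal.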
Authors