Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 8, Issue 1, Pages 256-263
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3223555
Keywords
3D Place Recognition; Attention; Viewpoint-invariant Localization
Summary
We propose SphereVLAD++, an attention-enhanced, viewpoint-invariant place recognition method that projects point clouds onto a spherical perspective and captures the contextual connections between local features and the global 3D geometry distribution. It outperforms all related state-of-the-art 3D place recognition methods, improving the successful retrieval rate over the second-best method by 7.06% and 28.15% under small and totally reversed viewpoint differences, respectively. Its low computation requirements and high time efficiency also make it suitable for low-cost robots.
Abstract
LiDAR-based localization is a fundamental module for large-scale navigation tasks, such as last-mile delivery and autonomous driving, and localization robustness highly relies on viewpoints and 3D feature extraction. Our previous work provides a viewpoint-invariant descriptor to deal with viewpoint differences; however, the global descriptor suffers from a low signal-to-noise ratio in unsupervised clustering, reducing its ability to extract distinguishable features. In this work, we develop SphereVLAD++, an attention-enhanced, viewpoint-invariant place recognition method. SphereVLAD++ projects the point cloud onto the spherical perspective for each unique area and captures the contextual connections between local features and their dependencies on the global 3D geometry distribution. In return, clustered elements within the global descriptor are conditioned on both local and global geometries and preserve the original viewpoint-invariant property of SphereVLAD. In the experiments, we evaluated the localization performance of SphereVLAD++ on both the public KITTI360 dataset and self-generated datasets from the city of Pittsburgh. The experimental results show that SphereVLAD++ outperforms all related state-of-the-art 3D place recognition methods under small or even totally reversed viewpoint differences, with successful retrieval rates 7.06% and 28.15% higher than the second best. Low computation requirements and high time efficiency also help its application on low-cost robots.
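The spherical-perspective projection mentioned in the abstract can be illustrated with a minimal sketch: each LiDAR point is mapped to azimuth-elevation coordinates, and the resulting range image makes a yaw rotation of the sensor appear as a simple horizontal shift, which is the intuition behind the viewpoint-invariant descriptor. The resolution, binning, and nearest-return policy below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def spherical_projection(points, height=64, width=256):
    """Project an (N, 3) point cloud onto an azimuth-elevation range image.

    A hedged sketch of the spherical-perspective idea; the real
    SphereVLAD++ pipeline feeds such a projection into spherical
    convolutions and an attention-enhanced VLAD aggregation.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    # Map angles to integer pixel indices.
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((elevation + np.pi / 2) / np.pi * (height - 1)).astype(int)

    # Keep the nearest return per pixel: write far points first,
    # so later (nearer) writes overwrite them.
    image = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

Because a rotation of the sensor about the vertical axis only translates this image along the azimuth dimension, descriptors built on it (e.g. via spherical convolutions) can be made insensitive to viewpoint heading, including fully reversed revisits.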