Article

CVTNet: A Cross-View Transformer Network for LiDAR-Based Place Recognition in Autonomous Driving Environments

Journal

IEEE Transactions on Industrial Informatics

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TII.2023.3313635

Keywords

Autonomous vehicles; Autonomous driving; LiDAR place recognition (LPR); multiview fusion; transformer network

Abstract

In this article, a cross-view transformer-based network called CVTNet is proposed to fuse different views generated by LiDAR data for place recognition in GPS-denied environments. Experimental results show that the method outperforms existing techniques in terms of robustness to viewpoint changes and long-time spans, while also exhibiting better real-time performance.
LiDAR-based place recognition (LPR) is one of the most crucial components of autonomous vehicles for identifying previously visited places in GPS-denied environments. Most existing LPR methods use mundane representations of the input point cloud without considering different views, which may not fully exploit the information from LiDAR sensors. In this article, we propose a cross-view transformer-based network, dubbed CVTNet, to fuse the range image views and bird's eye views generated from the LiDAR data. It extracts correlations within each view using intratransformers and between the two different views using intertransformers. Based on that, our proposed CVTNet generates a yaw-angle-invariant global descriptor for each laser scan end-to-end online and retrieves previously seen places by descriptor matching between the current query scan and the prebuilt database. We evaluate our approach on three datasets collected with different sensor setups and environmental conditions. The experimental results show that our method outperforms the state-of-the-art LPR methods with strong robustness to viewpoint changes and long time spans. Furthermore, our approach achieves better real-time performance, running faster than the typical LiDAR frame rate.
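The retrieval step described above can be illustrated with a minimal sketch: once each scan is reduced to a global descriptor, place recognition becomes a nearest-neighbor search over the prebuilt database. The function and array names below are illustrative, not the authors' implementation; it assumes descriptors are compared by cosine similarity, one common choice for matching learned global descriptors.

```python
import numpy as np

def retrieve_place(query_desc, database_descs, top_k=1):
    """Return indices and similarities of the top-k database scans
    most similar to the query descriptor (cosine similarity).

    Hypothetical helper illustrating descriptor matching; CVTNet's
    actual retrieval pipeline may differ.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_desc / np.linalg.norm(query_desc)
    db = database_descs / np.linalg.norm(database_descs, axis=1, keepdims=True)
    sims = db @ q                # similarity to every database entry
    order = np.argsort(-sims)    # most similar first
    return order[:top_k], sims[order[:top_k]]

# Toy example: 3 database descriptors; the query matches index 1 exactly
db = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
idx, sim = retrieve_place(np.array([0.6, 0.8]), db)
```

In practice the database search is typically accelerated with an approximate nearest-neighbor index rather than the brute-force matrix product shown here.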
