Article

iSimLoc: Visual Global Localization for Previously Unseen Environments With Simulated Images

Journal

IEEE TRANSACTIONS ON ROBOTICS
Volume 39, Issue 3, Pages 1893-1909

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TRO.2023.3238201

Keywords

Visualization; Feature extraction; Location awareness; Lighting; Cameras; Global Positioning System; Urban areas; Aerial visual terrain navigation; GPS denied localization; hierarchical global relocalization; sim-to-real

Abstract

This article introduces iSimLoc, a learning-based global relocalization approach that can match visual data with significant appearance and viewpoint differences. The method utilizes a place recognition network to match query images to reference images of different stylistic domains and viewpoints. The hierarchical global relocalization module enables fast and accurate pose estimation. iSimLoc achieves high successful retrieval rates and outperforms other methods in terms of inference time.
The camera is an attractive device for use in beyond visual line of sight drone operation since cameras are low in size, weight, power, and cost. However, state-of-the-art visual localization algorithms have trouble matching visual data that have significantly different appearances due to changes in illumination or viewpoint. This article presents iSimLoc, a learning-based global relocalization approach that is robust to appearance and viewpoint differences. The features learned by iSimLoc's place recognition network can be utilized to match query images to reference images of a different stylistic domain and viewpoint. In addition, our hierarchical global relocalization module searches in a coarse-to-fine manner, allowing iSimLoc to perform fast and accurate pose estimation. We evaluate our method on a dataset with appearance variations and a dataset that focuses on demonstrating large-scale matching over a long flight over complex terrain. iSimLoc achieves 88.7% and 83.8% successful retrieval rates on our two datasets, with 1.5 s inference time, compared to 45.8% and 39.7% using the next best method. These results demonstrate robust localization in a range of environments and conditions.
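The abstract describes two components: a place-recognition network that embeds real query images and simulated reference images into a shared descriptor space, and a hierarchical relocalization module that searches those references in a coarse-to-fine manner. As a rough illustration of the coarse-to-fine idea only, the Python sketch below shows a generic two-stage descriptor retrieval (match against a subsampled reference set, then re-rank the dense neighbourhood of the best coarse hits). It is not the authors' implementation; the function names, stride, and similarity measure are illustrative assumptions.

# Minimal sketch (not the iSimLoc code) of coarse-to-fine place retrieval,
# assuming reference images have already been encoded into fixed-length
# global descriptors by some place-recognition network.
# All names and parameters here are hypothetical.

import numpy as np


def cosine_similarity(query_desc, ref_descs):
    """Cosine similarity between one query descriptor and a bank of references."""
    q = query_desc / np.linalg.norm(query_desc)
    r = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    return r @ q


def coarse_to_fine_retrieval(query_desc, ref_descs, ref_poses,
                             coarse_stride=10, top_k=5):
    """Two-stage retrieval: search a subsampled reference set first,
    then re-rank the dense neighbours of the best coarse candidates."""
    # Stage 1 (coarse): score only every `coarse_stride`-th reference descriptor.
    coarse_idx = np.arange(0, len(ref_descs), coarse_stride)
    coarse_scores = cosine_similarity(query_desc, ref_descs[coarse_idx])
    best_coarse = coarse_idx[np.argsort(coarse_scores)[-top_k:]]

    # Stage 2 (fine): gather the dense neighbourhoods around each coarse hit
    # and re-rank them with the same descriptor similarity.
    candidates = np.unique(np.concatenate(
        [np.arange(max(i - coarse_stride, 0),
                   min(i + coarse_stride + 1, len(ref_descs)))
         for i in best_coarse]))
    fine_scores = cosine_similarity(query_desc, ref_descs[candidates])
    best = candidates[np.argmax(fine_scores)]
    return ref_poses[best], fine_scores.max()

In the paper's setting the reference descriptors would come from images rendered over a prior terrain model and the retrieved pose would seed a finer viewpoint refinement; here the descriptor bank and pose list are just placeholder arrays.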
