Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 2, Pages 2397-2404
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3061332
Keywords
Field robots; SLAM; semantic scene understanding
Funding
- ARL [DCIST CRA W911NF-17-2-0181]
- NSF [CNS-1521617]
- ARO [W911NF-13-1-0350]
- ONR [N00014-20-1-2822, N00014-20-S-B001]
- Qualcomm Research
- C-BRIC, a Semiconductor Research Corporation Joint University Microelectronics Program center sponsored by DARPA
- NASA Space Technology Research Fellowship
Abstract
Currently, GPS is by far the most popular global localization method. However, it is not always reliable or accurate in all environments. SLAM methods enable local state estimation but provide no means of registering the local map to a global one, which can be important for inter-robot collaboration or human interaction. In this work, we present a real-time method for utilizing semantics to globally localize a robot using only egocentric 3D semantically labelled LiDAR and IMU as well as top-down RGB images obtained from satellites or aerial robots. Additionally, as it runs, our method builds a globally registered, semantic map of the environment. We validate our method on KITTI as well as our own challenging datasets, and show better than 10 m accuracy, a high degree of robustness, and the ability to estimate the scale of a top-down map on the fly if it is initially unknown.