Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 7, Issue 3, Pages 7904-7911
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3185783
Keywords
Object detection; segmentation and categorization; deep learning for visual perception; recognition
Funding
- Australian Centre for Field Robotics
- Baraja Pty Ltd.
This research proposes a novel unsupervised multi-target domain adaptation framework called SEE to address the sampling discrepancies between different lidar sensors. By interpolating and normalizing the scan patterns, the performance of state-of-the-art 3D detectors can be transferred to different types of lidars without the need for model fine-tuning.
Sampling discrepancies between different manufacturers and models of lidar sensors result in inconsistent representations of objects. This leads to performance degradation when 3D detectors trained for one lidar are tested on other types of lidars. Remarkable progress in lidar manufacturing has brought about advances in mechanical, solid-state, and recently, adjustable scan pattern lidars. For the latter, existing works often require fine-tuning the model each time scan patterns are adjusted, which is infeasible. We explicitly deal with the sampling discrepancy by proposing a novel unsupervised multi-target domain adaptation framework, SEE, for transferring the performance of state-of-the-art 3D detectors across both fixed and flexible scan pattern lidars without requiring fine-tuning of models by end-users. Our approach interpolates the underlying geometry and normalises the scan pattern of objects from different lidars before passing them to the detection network. We demonstrate the effectiveness of SEE on public datasets, achieving state-of-the-art results, and additionally provide quantitative results on a novel high-resolution lidar to prove the industry applications of our framework.
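The core idea of interpolating an object's underlying geometry and resampling it to a canonical scan pattern can be sketched as below. This is only an illustrative resampling routine under simple assumptions, not the paper's actual SEE pipeline; the function `normalise_scan` and the `target_n` parameter are hypothetical names for this sketch:

```python
import numpy as np

def normalise_scan(points: np.ndarray, target_n: int, seed=None) -> np.ndarray:
    """Resample an object's point cloud to a canonical point count.

    Illustrative only: densifies by inserting midpoints between each
    point and its nearest neighbour (a crude stand-in for interpolating
    the underlying surface), then subsamples to exactly target_n points,
    so objects from different lidars reach the detector with a
    comparable sampling density.
    """
    rng = np.random.default_rng(seed)
    pts = points.astype(float).copy()
    while len(pts) < target_n:
        # Brute-force nearest neighbour for each point (clarity over speed).
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        nn = d.argmin(axis=1)
        # Midpoints approximate new samples on the object's surface.
        mids = 0.5 * (pts + pts[nn])
        pts = np.vstack([pts, mids])
    idx = rng.choice(len(pts), size=target_n, replace=False)
    return pts[idx]
```

In this sketch, a sparse 16-point object and a dense 128-point object would both be mapped to the same `target_n`-point representation, which is the sense in which the scan pattern is "normalised" before detection.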
Authors