Article

Sparse-PointNet: See Further in Autonomous Vehicles

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 4, Pages 7049-7056

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3096253

Keywords

Sensor fusion; object detection; deep learning for visual perception

Funding

  1. DFG Centre of Excellence [2117, 422037984, SFB TRR 161]

Abstract

The proposed method addresses the significant performance degradation that occurs when detecting objects beyond 50 meters by introducing a new key point sampling algorithm and a dynamic continuous occupancy heatmap. It achieves superior performance in the far range while remaining comparable in the near range.
Since the density of LiDAR points reduces significantly with increasing distance, popular 3D detectors tend to learn spatial features from dense points and ignore very sparse points in the far range. As a result, their performance degrades dramatically beyond 50 meters. Motivated by the above problem, we introduce a novel approach to jointly detect objects from multimodal sensor data, with two main contributions. First, we leverage PointPainting [15] to develop a new key point sampling algorithm, which encodes the complex scene into a few representative points with approximately similar point density. Further, we fuse a dynamic continuous occupancy heatmap to refine the final proposal. In addition, we feed radar points into the network, which allows it to take into account additional cues. We evaluate our method on the widely used nuScenes dataset. Our method outperforms all state-of-the-art methods in the far range by a large margin and also achieves comparable performance in the near range.
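The core idea of encoding the scene into a few representative points with approximately similar density can be illustrated with classical farthest point sampling. This is a simplified sketch, not the paper's actual algorithm, which builds on PointPainting semantics; the function name and the synthetic point cloud below are illustrative assumptions.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedily select n_samples points that are maximally spread out.

    Illustrative stand-in for the paper's key point sampling step: by always
    taking the point farthest from the current selection, sparse far-range
    regions stay represented instead of being drowned out by the dense
    near-range returns.
    """
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=int)   # indices of chosen keypoints
    dist = np.full(n, np.inf)                   # squared distance to nearest chosen point
    selected[0] = 0                             # start from an arbitrary point
    for i in range(1, n_samples):
        # update each point's distance to the most recently chosen keypoint
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        # pick the point farthest from all keypoints chosen so far
        selected[i] = int(np.argmax(dist))
    return points[selected]
```

With a cloud of 500 near-range points and only 10 far-range points, naive random sampling would almost certainly discard the far cluster, whereas farthest point sampling reliably keeps keypoints there.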
