3.8 Proceedings Paper

Radar-Camera Pixel Depth Association for Depth Completion

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.01232

Keywords

-

Funding

  1. Ford-MSU Alliance


This study proposes a method that learns a mapping from radar returns to image pixels, enabling image-guided depth completion from radar and video. By fusing radar and video data at the pixel level, it achieves performance superior to using camera or radar alone.
While radar and video data can be readily fused at the detection level, fusing them at the pixel level is potentially more beneficial. This is also more challenging in part due to the sparsity of radar, but also because automotive radar beams are much wider than a typical pixel combined with a large baseline between camera and radar, which results in poor association between radar pixels and color pixels. A consequence is that depth completion methods designed for LiDAR and video fare poorly for radar and video. Here we propose a radar-to-pixel association stage which learns a mapping from radar returns to pixels. This mapping also serves to densify radar returns. Using this as a first stage, followed by a more traditional depth completion method, we are able to achieve image-guided depth completion with radar and video. We demonstrate performance superior to camera and radar alone on the nuScenes dataset. Our source code is available at https://github.com/longyunf/rc-pda.
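
To make the two-stage idea in the abstract concrete, below is a minimal PyTorch sketch of such a pipeline: project sparse radar returns into the image to form a sparse depth channel, let a small network densify it with the image as guidance, then run a conventional image-guided completion network. This is an illustration only, not the authors' implementation (their code is at the linked repository); the helper `project_radar_to_image`, the network names, layer sizes, and tensor conventions are all assumptions.

```python
# Minimal sketch of the two-stage idea described in the abstract; NOT the
# authors' implementation (see https://github.com/longyunf/rc-pda for that).
# Stage 1: associate sparse radar returns with image pixels to densify radar depth.
# Stage 2: feed the densified radar depth plus the RGB image to a conventional
# image-guided depth completion network.
# All layer sizes, helper names, and tensor shapes below are assumptions.

import torch
import torch.nn as nn


def project_radar_to_image(points_xyz, K, T_cam_from_radar, image_hw):
    """Project 3-D radar returns into the camera image, yielding a sparse depth map.

    points_xyz: (N, 3) radar points in the radar frame.
    K: (3, 3) camera intrinsics.  T_cam_from_radar: (4, 4) extrinsics.
    """
    H, W = image_hw
    ones = torch.ones(points_xyz.shape[0], 1)
    pts_cam = (T_cam_from_radar @ torch.cat([points_xyz, ones], dim=1).T).T[:, :3]
    depth = pts_cam[:, 2]
    valid = depth > 0                       # keep only points in front of the camera
    pix = (K @ pts_cam[valid].T).T
    u = (pix[:, 0] / pix[:, 2]).long().clamp(0, W - 1)
    v = (pix[:, 1] / pix[:, 2]).long().clamp(0, H - 1)
    sparse = torch.zeros(1, H, W)
    sparse[0, v, u] = depth[valid]          # zeros where no radar return projects
    return sparse


class AssociationNet(nn.Module):
    """Stage 1 (sketch): uses the image to spread sparse radar depth to nearby
    pixels, producing a densified radar depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_radar_depth):
        x = torch.cat([rgb, sparse_radar_depth], dim=1)   # (B, 4, H, W)
        return self.net(x)                                # (B, 1, H, W) densified depth


class CompletionNet(nn.Module):
    """Stage 2 (sketch): standard image-guided depth completion on the densified input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, rgb, dense_radar_depth):
        return self.net(torch.cat([rgb, dense_radar_depth], dim=1))


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 224, 400)                            # camera frame
    radar = torch.rand(64, 3) * torch.tensor([20.0, 5.0, 40.0]) # fake radar returns
    K = torch.tensor([[500.0, 0.0, 200.0], [0.0, 500.0, 112.0], [0.0, 0.0, 1.0]])
    T = torch.eye(4)
    sparse = project_radar_to_image(radar, K, T, (224, 400)).unsqueeze(0)
    dense = AssociationNet()(rgb, sparse)
    depth = CompletionNet()(rgb, dense)
    print(depth.shape)  # torch.Size([1, 1, 224, 400])
```

Note that this sketch regresses a densified depth map directly, whereas the abstract describes learning an association from radar returns to pixels; the simplification is for brevity and only approximates that first stage.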

