Proceedings Paper

RPFA-Net: a 4D RaDAR Pillar Feature Attention Network for 3D Object Detection

Journal

Publisher

IEEE
DOI: 10.1109/ITSC48978.2021.9564754

Keywords

-

Funding

  1. National High Technology Research and Development Program of China [2018YFE0204300]
  2. Beijing Science and Technology Plan Project [Z191100007419008]
  3. National Natural Science Foundation of China [U1964203]
  4. Funding for National Defense Basic Research Program [6142006190201]
  5. China Postdoctoral Fund [2021M691780]
  6. Guoqiang Research Institute Project [2019GQG1010]


In this study, a novel approach named RPFA-Net is proposed, which uses a 4D RaDAR sensor and a self-attention mechanism to improve the regression of object heading angles and increase detection accuracy. Compared with the baseline, RPFA-Net improves 3D mAP by 8.13% and BEV mAP by 5.52%, outperforming state-of-the-art 3D detection methods on the Astyx HiRes 2019 dataset.
3D object detection is a crucial problem in environmental perception for autonomous driving. Currently, most works focus on LiDAR, cameras, or their fusion, while very few algorithms involve a RaDAR sensor, especially 4D RaDAR, which provides 3D position and velocity information. 4D RaDAR works well in bad weather and outperforms traditional 3D RaDAR, but it also contains a lot of noise and suffers from measurement ambiguities. Existing 3D object detection methods cannot judge the heading of objects because they focus on local features in sparse point clouds. To overcome this problem, we propose a new method named RPFA-Net that uses only a 4D RaDAR and employs a self-attention mechanism instead of PointNet to extract the point cloud's global features. These global features, which carry long-distance information, effectively improve the network's ability to regress the heading angle of objects and enhance detection accuracy. Our method improves 3D mAP by 8.13% and BEV mAP by 5.52% compared with the baseline. Extensive experiments show that RPFA-Net surpasses state-of-the-art 3D detection methods on the Astyx HiRes 2019 dataset. The code and pre-trained models are available at https://github.com/adept-thu/RPFA-Net.git.
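The abstract's core idea, replacing PointNet's per-point processing with self-attention so that each point in a pillar aggregates long-distance context, can be sketched as follows. This is a minimal NumPy illustration of single-head scaled dot-product self-attention over one pillar's points; the shapes, random projection matrices, and single-head form are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def self_attention(points, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the N points of one pillar.

    points: (N, C) per-point features; Wq/Wk/Wv: (C, D) projection matrices.
    Returns (N, D) features in which every output point has attended to all
    other points, so each carries global (long-distance) context rather than
    only the local features a per-point MLP like PointNet would produce.
    """
    Q, K, V = points @ Wq, points @ Wk, points @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # (N, N) pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ V                                # context-mixed features

# Hypothetical pillar: 32 RaDAR points with 8 raw features each.
rng = np.random.default_rng(0)
pts = rng.standard_normal((32, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 16)) * 0.1 for _ in range(3))
out = self_attention(pts, Wq, Wk, Wv)
print(out.shape)  # (32, 16)
```

In a pillar-based detector, features like these would then be pooled per pillar and scattered into a BEV pseudo-image for the detection head; the sketch covers only the attention step itself.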

