Article

Adaptive Feature Attention Module for Robust Visual-LiDAR Fusion-Based Object Detection in Adverse Weather Conditions

Journal

REMOTE SENSING
Volume 15, Issue 16

Publisher

MDPI
DOI: 10.3390/rs15163992

Keywords

multi-sensor fusion; deep fusion; object detection; deep learning


Abstract

Object detection is one of the vital components of autonomous navigation in dynamic environments, and camera and lidar sensors have been widely used by mobile robots for efficient object detection. However, these sensors suffer under adverse conditions in operating environments, such as sun, fog, snow, and extreme illumination changes from day to night. Fusing camera and lidar data helps to enhance the overall performance of an object detection network; however, the diverse distribution of the training data makes efficient learning of the network a challenging task. To address this challenge, we systematically study existing visual- and lidar-feature-based object detection methods and propose an adaptive feature attention module (AFAM) for robust multisensor-fusion-based object detection in outdoor dynamic environments. Given camera and lidar features extracted from the intermediate layers of EfficientNet backbones, the AFAM computes the uncertainty between the two modalities and adaptively refines the visual and lidar features via attention along the channel and spatial axes. Integrated with EfficientDet, the AFAM performs adaptive recalibration and fusion of visual-lidar features, filtering noise and extracting discriminative features for the object detection network under specific environmental conditions. We evaluate the AFAM on a benchmark dataset exhibiting weather and light variations. The experimental results demonstrate that the AFAM significantly enhances the overall detection accuracy of the object detection network.
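The abstract describes recalibrating each modality with attention along the channel axis and then the spatial axis before fusion. The sketch below illustrates that general pattern (in the style of CBAM-like channel/spatial attention) with NumPy. It is not the authors' AFAM: the weights are random and untrained, the uncertainty computation between modalities is omitted, and the function names (`channel_attention`, `spatial_attention`, `fuse`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """Channel attention: global-average-pool the spatial dims, pass the
    pooled vector through a small bottleneck MLP, and rescale channels.
    feat: (C, H, W) feature map. Weights are random stand-ins for
    learned parameters."""
    c = feat.shape[0]
    pooled = feat.mean(axis=(1, 2))                 # (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ pooled, 0.0)           # ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid -> (C,)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Spatial attention: pool across channels (avg and max), derive a
    per-location mask, and rescale spatial positions. A learned conv
    would normally mix the two pooled maps; averaging is a stand-in."""
    avg = feat.mean(axis=0, keepdims=True)          # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)            # (1, H, W)
    mask = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    return feat * mask

def fuse(cam_feat, lidar_feat):
    """Recalibrate each modality along channel then spatial axes,
    then fuse by element-wise addition."""
    cam = spatial_attention(channel_attention(cam_feat))
    lid = spatial_attention(channel_attention(lidar_feat))
    return cam + lid

cam = np.random.default_rng(1).standard_normal((8, 16, 16))
lid = np.random.default_rng(2).standard_normal((8, 16, 16))
fused = fuse(cam, lid)
print(fused.shape)  # (8, 16, 16)
```

The fused map keeps the input shape, so such a module can be dropped between backbone stages and a detection head (e.g. EfficientDet's BiFPN) without changing downstream tensor shapes.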
