Article

AF2R Net: Adaptive Feature Fusion and Robust Network for Efficient and Precise Depth Completion

Journal

IEEE ACCESS
Volume 11, Pages 111347-111357

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2023.3320681

Keywords

Depth completion; deep learning; fusion strategy; multi-modality features; convolutional spatial propagation network; depth refinement


Depth completion is a fundamental method for autonomous vehicles and robots to acquire precise depth maps. However, fusing multi-modal features and restoring details remain two main challenges. In this study, we propose a fusion network composed of a two-branch backbone and a depth refinement module, achieving state-of-the-art performance on depth completion tasks.
Depth completion is a fundamental method for autonomous vehicles and robots to acquire precise depth maps. Recent methods mainly focus on fusing multi-modal information from sparse depth maps and color images to recover dense depth maps. Previous studies have made remarkable contributions to predicting depth values, but two main issues remain: how to better fuse multi-modal features, and how to better restore details. To address these issues, we propose a fusion network composed of a two-branch backbone and a depth refinement module. The backbone extracts and combines the features of sparse depths and color images, adopting symmetric gated fusion and pixel-shuffle strategies for cross-branch and branch-wise fusion, respectively. We then design a new depth refinement module, the dilation-pyramid convolutional spatial propagation network (DP-CSPN), which enlarges the propagation neighborhoods and captures more local affinities than CSPN. Finally, to better handle details, we design loss functions that sharpen edges and preserve awareness of tiny structures. Our method achieves state-of-the-art (SoTA) performance on the NYU-Depth-v2 and KITTI depth completion datasets, and we placed in the top five of the mobile intelligent photography and imaging (MIPI) challenge held at the European Conference on Computer Vision (ECCV) 2022.
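To make the refinement idea concrete, here is a minimal, hedged sketch of one dilated CSPN-style propagation step in pure Python. The function name, toy data shapes, and the single-dilation formulation are illustrative assumptions; the paper's DP-CSPN combines several dilation rates into a pyramid, and its exact affinity parameterization is not specified here.

```python
def cspn_step(depth, affinity, dilation=1):
    """One propagation step (illustrative, not the paper's exact module):
    each pixel becomes a normalized, affinity-weighted blend of its
    3x3 dilated neighborhood. A larger dilation enlarges the
    propagation neighborhood, which is the key idea behind DP-CSPN.

    depth:    H x W nested lists of floats
    affinity: H x W x 9 nested lists of non-negative weights,
              one weight per neighbor offset (hypothetical layout)
    """
    h, w = len(depth), len(depth[0])
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, norm = 0.0, 0.0
            for k, (dy, dx) in enumerate(offsets):
                ny, nx = y + dy * dilation, x + dx * dilation
                if 0 <= ny < h and 0 <= nx < w:  # skip out-of-bounds neighbors
                    wgt = affinity[y][x][k]
                    total += wgt * depth[ny][nx]
                    norm += wgt
            # normalize so the blend is a convex combination
            out[y][x] = total / norm if norm > 0 else depth[y][x]
    return out
```

With a constant depth map, any positive affinities leave it unchanged, which is a quick sanity check; in a real network the affinities are predicted per pixel by the backbone and the step is iterated several times at multiple dilation rates.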

