Article

IRFR-Net: Interactive Recursive Feature-Reshaping Network for Detecting Salient Objects in RGB-D Images

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TNNLS.2021.3105484

Keywords

Feature extraction; Data mining; Streaming media; Logic gates; Fuses; Adaptation models; Visualization; Context extraction module (CEM); gated attention fusion module (GAFM); red-green-blue depth (RGB-D) information; reshaped-feature fusion module (RFFM); salient object detection (SOD); weighted atrous spatial pyramid pooling (WASPP)

Funding

  1. National Natural Science Foundation of China [61502429, 61972357]
  2. Zhejiang Provincial Natural Science Foundation of China [LY18F020012]


Using attention mechanisms in saliency detection networks enables effective feature extraction, and using linear methods can promote proper feature fusion, as verified in numerous existing models. Current networks usually combine depth maps with red-green-blue (RGB) images for salient object detection (SOD). However, how to fully leverage depth information as a complement to RGB information so as to accurately highlight salient objects deserves further study. We combine a gated attention mechanism and a linear fusion method to construct a dual-stream interactive recursive feature-reshaping network (IRFR-Net). The streams for RGB and depth data communicate through a backbone encoder to thoroughly extract complementary information. First, we design a context extraction module (CEM) to obtain low-level depth foreground information. Subsequently, the gated attention fusion module (GAFM) is applied to the RGB and depth (RGB-D) information to obtain advantageous structural and spatial fusion features. Then, adjacent depth information is globally integrated to obtain complementary context features. We also introduce a weighted atrous spatial pyramid pooling (WASPP) module to extract the multiscale local information of depth features. Finally, global and local features are fused in a bottom-up scheme to effectively highlight salient objects. Comprehensive experiments on eight representative datasets demonstrate that the proposed IRFR-Net outperforms 11 state-of-the-art (SOTA) RGB-D approaches on various evaluation metrics.
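The abstract's core idea of combining a gated attention mechanism with linear fusion can be illustrated with a minimal NumPy sketch. This is not the paper's GAFM (which is a convolutional attention module inside a deep network); it only shows the underlying principle that a learned gate produces a per-position convex combination of the RGB and depth feature streams. All names, shapes, and weights below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb_feat, depth_feat, gate_weights, gate_bias):
    """Fuse RGB and depth features with a learned gate.

    gate = sigmoid(W @ [rgb; depth] + b) decides, per spatial position
    and channel, how much each modality contributes; the fused feature
    is the gate-weighted linear combination of the two streams.
    """
    stacked = np.concatenate([rgb_feat, depth_feat], axis=-1)  # (H, W, 2C)
    gate = sigmoid(stacked @ gate_weights + gate_bias)         # (H, W, C)
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Toy example: 4x4 feature maps with C=8 channels.
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, 4, 8))
depth = rng.standard_normal((4, 4, 8))
W = rng.standard_normal((16, 8)) * 0.1  # hypothetical gate weights
b = np.zeros(8)

fused = gated_fusion(rgb, depth, W, b)
print(fused.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), each fused value stays between the corresponding RGB and depth values, which is the sense in which the linear fusion "promotes proper feature fusion" rather than letting one modality dominate unconditionally.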
