Article

ASIF-Net: Attention Steered Interweave Fusion Network for RGB-D Salient Object Detection

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 1, Pages 88-100

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.2969255

Keywords

Feature extraction; Saliency detection; Object detection; Task analysis; Fuses; Random access memory; Semantics; Adversarial learning; depth cue; interweave fusion; residual attention; RGB-D images; saliency detection

Funding

  1. Dr. Cong's Project of the Fundamental Research Funds for the Central Universities [2019RC039]
  2. National Natural Science Foundation of China [61771334, 61871342, 61872350, 61672443, 61931008, 61836002, U1636214]
  3. Hong Kong Research Grants Council General Research Funds [9042038 (CityU 11205314), 9042322 (CityU 11200116)]
  4. Hong Kong Research Grants Council Early Career Schemes [9048123 (CityU 21211518)]
  5. China Postdoctoral Support Scheme for Innovative Talents [BX20180236]

ASIF-Net proposes an attention-steered interweave fusion network for salient object detection from RGB-D images, addressing the inconsistency between cross-modal data and capturing their complementarity. By introducing an attention mechanism and adversarial learning, the method excels at locating potential salient regions and ensuring that the detected objects exhibit objectness characteristics.
Salient object detection from RGB-D images is an important yet challenging vision task, which aims at detecting the most distinctive objects in a scene by combining color information and depth constraints. Unlike prior fusion schemes, we propose an attention steered interweave fusion network (ASIF-Net) to detect salient objects, which progressively integrates cross-modal and cross-level complementarity from the RGB image and the corresponding depth map under the steering of an attention mechanism. Specifically, the complementary features from RGB-D images are jointly extracted and hierarchically fused in a dense and interweaved manner. Such a manner breaks down the barriers of inconsistency existing in the cross-modal data and also sufficiently captures the complementarity. Meanwhile, an attention mechanism is introduced to locate the potential salient regions in an attention-weighted fashion, which helps highlight the salient objects and suppress cluttered background regions. Instead of focusing only on pixelwise saliency, we also ensure that the detected salient objects have objectness characteristics (e.g., a complete structure and sharp boundaries) by incorporating adversarial learning, which provides a global semantic constraint for RGB-D salient object detection. Quantitative and qualitative experiments demonstrate that the proposed method performs favorably against 17 state-of-the-art saliency detectors on four publicly available RGB-D salient object detection datasets. The code and results of our method are available at https://github.com/Li-Chongyi/ASIF-Net.
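The attention-weighted fusion idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function name `attention_weighted_fusion` and the channel-mixing weights `w_rgb`/`w_depth`, which stand in for learned 1x1 convolutions, are hypothetical. The sketch only shows the general pattern: fuse the two modality features, derive a spatial attention map in (0, 1), and re-weight the fused features so salient regions are emphasized and background is suppressed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weighted_fusion(rgb_feat, depth_feat, w_rgb, w_depth):
    """Toy sketch of attention-weighted cross-modal fusion.

    rgb_feat, depth_feat: (C, H, W) feature maps from the two streams.
    w_rgb, w_depth: (C,) channel-mixing weights standing in for the
    learned 1x1 convolutions of a real network (hypothetical).
    """
    # Fuse the two modalities by element-wise summation.
    fused = rgb_feat + depth_feat
    # Collapse channels into a single-channel attention logit map
    # (a stand-in for a learned 1x1 conv), then squash to (0, 1).
    logits = (np.tensordot(w_rgb, rgb_feat, axes=1)
              + np.tensordot(w_depth, depth_feat, axes=1))
    attn = sigmoid(logits)  # (H, W), each value in (0, 1)
    # Re-weight the fused features: high-attention (potentially
    # salient) regions are kept, low-attention regions are damped.
    return fused * attn[None, :, :]
```

In the actual network this weighting is applied hierarchically across levels and modalities; the sketch shows a single fusion step for clarity.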

