Article

EGA-Net: Edge feature enhancement and global information attention network for RGB-D salient object detection

Journal

INFORMATION SCIENCES
Volume 626, Pages 223-248

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.01.032

Keywords

RGB-D salient object detection; Feature interaction and edge feature enhancement; Global information guide integration; Hybrid loss function


This study proposes a novel network, EGA-Net, to improve edge quality and highlight the main features of salient objects in RGB-D salient object detection. The network comprises a feature interaction module, an edge feature enhancement module, and a global information guide integration module. Experimental results show that the method outperforms 19 other methods across multiple evaluation metrics.
With the supplement of texture and geometry cues in depth maps, salient object detection (SOD) shifts from 2D to 3D, aiming to detect the most attractive object in a pair of color and depth images. Previous work primarily focused on regional integrity; few methods improve the edge quality of the prediction results, yielding final predictions with a complete structure but blurred edges. Moreover, owing to the complexity of real-life scenarios, effectively separating the salient object from a complex background remains a challenging problem. To address these issues, we propose a novel network, EGA-Net, to improve the edge quality and highlight the main features of the salient object. Specifically, in EGA-Net we propose a feature interaction (FI) module and an edge feature enhancement (EFE) module. The FI module removes unimodal feature redundancy, captures multi-modal feature complementarity, and reduces the contamination introduced by low-quality depth maps. The EFE module improves the edge quality of the final salient object predictions. Furthermore, a Global Information Guide Integration (GIGI) module is proposed to suppress background noise and effectively highlight the salient objects' main features; it uses interleaving and fusion to automatically select and enhance the vital information in the original input features under the guidance of global features. We train EGA-Net under the supervision of a new hybrid loss function that simultaneously takes global pixel, foreground, and depth map quality into account. Quantitative and qualitative experiments demonstrate that our method outperforms 19 advanced methods on eight publicly available RGB-D salient object detection datasets under five evaluation metrics. The code and results are available at https://github.com/guanyuzong/EGA-Net. (c) 2023 Elsevier Inc. All rights reserved.
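The abstract describes a hybrid loss that jointly accounts for global pixel accuracy and the foreground region. The paper's exact formulation (including its depth-quality term and weighting) is not reproduced here, so the following is only a minimal, dependency-free sketch of the general idea: a pixel-wise binary cross-entropy term (global pixels) combined with a soft-IoU term (foreground structure). The function name `hybrid_loss` and the unit weighting of the two terms are assumptions for illustration.

```python
import math

def hybrid_loss(pred, target, eps=1e-7):
    """Illustrative hybrid SOD loss: mean pixel-wise BCE plus a soft-IoU
    penalty on the foreground. `pred` holds probabilities in (0, 1);
    `target` holds binary ground-truth labels. This is a sketch, not
    the EGA-Net loss, whose depth-quality term is omitted here."""
    # Global pixel term: mean binary cross-entropy over all pixels.
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / len(pred)
    # Foreground term: 1 minus the soft IoU between prediction and mask.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    iou_loss = 1.0 - (inter + eps) / (union + eps)
    return bce + iou_loss
```

Because the soft-IoU term is normalized by the union, it weights foreground pixels more heavily than plain BCE does, which is one common way such losses sharpen object boundaries.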

