Article

SAC-Net: Spatial Attenuation Context for Salient Object Detection

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TCSVT.2020.2995220

Keywords

Attenuation; Feature extraction; Object detection; Aggregates; Semantics; Spatial resolution; Spatial attenuation context; salient object detection; saliency detection; deep learning

Funding

  1. National Natural Science Foundation of China [61902275]
  2. CUHK Direct Grant for Research 2018/19
  3. Hong Kong Ph.D. Fellowship


This paper presents a new deep neural network design for salient object detection that maximizes the integration of local and global image context within, around, and beyond the salient objects. Our key idea is to adaptively propagate and aggregate the image context features with variable attenuation over the entire feature maps. To achieve this, we design the spatial attenuation context (SAC) module, which recurrently translates and aggregates the context features independently with different attenuation factors, and then attentively learns the weights to adaptively integrate the aggregated context features. By further embedding the module to process individual layers in a deep network, namely SAC-Net, we can train the network end-to-end and optimize the context features for detecting salient objects. Experimental results show that our method performs favorably against 29 state-of-the-art methods on six common benchmark datasets, both quantitatively and visually.
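The core mechanism described in the abstract — recurrent propagation of context features under several attenuation factors, followed by learned weights that blend the aggregated maps — can be illustrated with a toy 1-D sketch. This is not the authors' implementation: the specific recurrence `h[i] = x[i] + alpha * h[i-1]`, the choice of attenuation factors, and the softmax weighting are illustrative assumptions used to convey the idea on a single feature row.

```python
import numpy as np

def attenuated_scan(row, alpha):
    """Recurrent left-to-right propagation with attenuation factor alpha:
    h[i] = x[i] + alpha * h[i-1], so distant context decays geometrically."""
    h = np.empty_like(row, dtype=float)
    acc = 0.0
    for i, v in enumerate(row):
        acc = v + alpha * acc
        h[i] = acc
    return h

def sac_aggregate(feat, alphas, branch_logits):
    """Toy SAC-style aggregation (illustrative, not the paper's code):
    propagate context in both directions for each attenuation factor,
    then blend the per-factor context maps with softmax weights."""
    contexts = []
    for a in alphas:
        fwd = attenuated_scan(feat, a)                  # left-to-right context
        bwd = attenuated_scan(feat[::-1], a)[::-1]      # right-to-left context
        contexts.append(fwd + bwd - feat)               # subtract center once
    contexts = np.stack(contexts)                       # (num_factors, width)
    w = np.exp(branch_logits) / np.exp(branch_logits).sum()  # softmax weights
    return (w[:, None] * contexts).sum(axis=0)

# A small attenuation factor keeps context local; a large one spreads
# a single activation across the whole row before the weighted blend.
feat = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
out = sac_aggregate(feat, alphas=[0.0, 0.9], branch_logits=np.zeros(2))
```

In the actual SAC-Net, this propagation runs over 2-D feature maps inside a deep network and the integration weights are learned per location, so the effective context range adapts spatially.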

Authors

