Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 31, Issue 2, Pages 728-741
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2020.2988768
Keywords
Feature extraction; Saliency detection; Object detection; Semantics; Benchmark testing; Visualization; Convolutional neural networks; deep subregion learning; region dilated blocks; parallel atrous spatial pyramid pooling (ASPP) modules
Funding
- National Natural Science Foundation of China [61671399, 61902275]
- Fundamental Research Funds for the Central Universities [20720190012]
- Education University of Hong Kong [FLASS/DRF/IDS-3]
- Lingnan University, Hong Kong [190-009]
This research introduces a novel deep sub-region network (DSR-Net) that aggregates multi-scale salient context information to fuse global and local contexts, aiming to improve saliency detection accuracy. Experimental results demonstrate significant performance improvements of this network on commonly used saliency benchmark datasets.
Saliency detection is a fundamental and challenging task in computer vision that aims to distinguish the most conspicuous objects or regions in an image. Existing deep-learning methods mainly rely on the entire image to learn global context information for saliency detection, which loses spatial relations and leads to ambiguity in the predicted saliency maps. In this paper, we propose a novel deep sub-region network (DSR-Net) equipped with a sequence of sub-region dilated blocks (SRDBs) that aggregate multi-scale salient context information from multiple sub-regions, so that the global context from the whole image and the local contexts from sub-regions are fused together, making the saliency prediction more accurate. Each SRDB separates the input feature map at different layers of a convolutional neural network (CNN) into different sub-regions and applies a parallel atrous spatial pyramid pooling (ASPP) module to refine the feature map of each sub-region. Experiments on five widely used saliency benchmark datasets demonstrate that our network outperforms recent state-of-the-art saliency detectors both quantitatively and qualitatively on all the benchmarks.
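The paper's implementation is not reproduced on this page; as an illustration only, the sub-region + parallel-ASPP idea from the abstract can be sketched in single-channel NumPy. The 2x2 sub-region grid, the dilation rates (1, 2, 4), the toy smoothing kernel, and fusion by averaging are all assumptions for the sketch, not the authors' actual configuration:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Single-channel 'same' convolution with dilation (atrous) rate,
    using zero padding so the output matches the input size."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * rate:i * rate + h,
                                     j * rate:j * rate + w]
    return out

def aspp(region, rates=(1, 2, 4)):
    """Parallel atrous branches over one sub-region, fused by averaging
    (rates and fusion rule are illustrative assumptions)."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # toy smoothing kernel
    branches = [dilated_conv2d(region, kernel, r) for r in rates]
    return np.mean(branches, axis=0)

def srdb(feature_map, grid=2):
    """Split the feature map into a grid x grid set of sub-regions,
    refine each sub-region with the parallel ASPP module, then stitch
    the refined sub-regions back into a full-size map."""
    h, w = feature_map.shape
    hs, ws = h // grid, w // grid
    out = np.zeros((h, w), dtype=float)
    for gi in range(grid):
        for gj in range(grid):
            sub = feature_map[gi * hs:(gi + 1) * hs, gj * ws:(gj + 1) * ws]
            out[gi * hs:(gi + 1) * hs, gj * ws:(gj + 1) * ws] = aspp(sub)
    return out
```

In the actual network such blocks operate on multi-channel CNN feature maps at several layers, and the refined sub-region features are fused with the global context rather than simply averaged.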