Article

A Dilated Inception Network for Visual Saliency Prediction

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 22, Issue 8, Pages 2163-2176

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TMM.2019.2947352

Keywords

Visualization; Computational modeling; Predictive models; Feature extraction; Spatial resolution; Computer architecture; Solid modeling; Visual attention; saliency detection; eye fixation prediction; convolutional neural networks; dilated convolution; inception module

Funding

  1. Singapore Ministry of Education Tier-2 Fund [MOE2016-T2-2-057(S)]
  2. Natural Science Foundation of China [61901236]
  3. NTU start-up grant
  4. MOE Tier-1 Research Grant [RG126/17 (S)]


Recently, with the advent of deep convolutional neural networks (DCNN), visual saliency prediction research has seen impressive improvements. One promising direction for further improvement is to fully characterize the multi-scale saliency-influential factors with a computationally friendly module in DCNN architectures. In this work, we propose an end-to-end dilated inception network (DINet) for visual saliency prediction. It captures multi-scale contextual features effectively with very few extra parameters. Instead of using parallel standard convolutions with different kernel sizes, as in the existing inception module, our proposed dilated inception module (DIM) uses parallel dilated convolutions with different dilation rates, which significantly reduces the computational load while enriching the diversity of receptive fields in feature maps. Moreover, the performance of our saliency model is further improved by using a set of linear normalization-based probability distribution distance metrics as loss functions. As such, we formulate saliency prediction as a global probability distribution prediction task for better saliency inference, rather than as a pixel-wise regression problem. Experimental results on several challenging saliency benchmark datasets demonstrate that our DINet with the proposed loss functions achieves state-of-the-art performance with shorter inference time.
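The two core ideas in the abstract can be illustrated with a minimal sketch: parallel dilated convolutions that share a kernel size but differ in dilation rate (so the effective receptive field grows while the weight count stays fixed), and a loss that treats each linearly normalized saliency map as a probability distribution. This single-channel NumPy sketch is illustrative only; the function names and the KL-divergence choice are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2D dilated convolution (single channel, stride 1).

    A k x k kernel with dilation rate r covers an effective receptive
    field of (k - 1) * r + 1 pixels per side, yet keeps only k * k weights.
    """
    k = kernel.shape[0]
    eff = (k - 1) * rate + 1          # effective kernel extent
    pad = eff // 2
    xp = np.pad(x, pad, mode="constant")
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            # each tap is offset by i*rate / j*rate instead of i / j
            out += kernel[i, j] * xp[i * rate:i * rate + h,
                                     j * rate:j * rate + w]
    return out

def dilated_inception_module(x, kernels, rates):
    """Parallel dilated branches with different rates, stacked along a
    (hypothetical) channel axis -- the gist of the DIM idea."""
    return np.stack([dilated_conv2d(x, k, r) for k, r in zip(kernels, rates)])

def kl_saliency_loss(pred, target, eps=1e-8):
    """One example of a distribution-distance loss: KL divergence between
    linearly normalized saliency maps, each treated as a probability
    distribution over pixels."""
    p = pred / (pred.sum() + eps)
    q = target / (target.sum() + eps)
    return float(np.sum(q * np.log(q / (p + eps) + eps)))
```

With a 3x3 kernel, rates of 1, 2, and 4 give effective receptive fields of 3, 5, and 9 pixels per side from the same nine weights per branch, which is why the DIM enriches receptive-field diversity cheaply compared with enlarging the kernel itself.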

