Article

Activity guided multi-scales collaboration based on scaled-CNN for saliency prediction

Journal

IMAGE AND VISION COMPUTING
Volume 114

Publisher

ELSEVIER
DOI: 10.1016/j.imavis.2021.104267

Keywords

Saliency prediction; Convolutional neural networks; Human eye fixations; Deep learning

Funding

  1. National Natural Science Foundation of China [51774281]


This study introduces a lightweight saliency prediction model based on convolutional neural networks that uses multi-scale collaborative learning of global and local information, achieving competitive and consistent results on challenging benchmark datasets with better prediction performance, fewer parameters, and faster inference speed.
Visual saliency prediction has improved significantly with the advent of convolutional neural networks, but the breakthrough in prediction accuracy has come at a high computational cost. In this paper, we present a lightweight saliency prediction model based on scaled-up convolutional neural networks (CNNs) that uses image-activity-guided collaborative learning of global and local information at multiple scales. We use a pseudo-Siamese network with a scaled-up network (EfficientNet) as the backbone, whose two branches capture the global saliency features and the high-level local features, respectively. Concretely, we first use image complexity-related activity features (the Image Activity Measure) as a low-level local saliency prior, and then feed the input images and the activity maps to scaled-up CNN modules to learn higher-level features in a multi-scale collaborative manner. Through extensive evaluation, we show that the proposed method delivers competitive and consistent results on challenging benchmark datasets, with better prediction performance, fewer trainable parameters, and faster inference speed. Moreover, the proposed model places low demands on platform computing capability, which broadens the range of saliency application scenarios. (c) 2021 Elsevier B.V. All rights reserved.
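The two-branch architecture sketched in the abstract can be illustrated in code. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the names TwoBranchSaliencyNet and image_activity_map, the gradient-based definition of the activity prior, the use of efficientnet_b0 for both branches, and the concatenation-plus-1x1-convolution fusion head are all assumptions made for clarity; the paper's actual Image Activity Measure and multi-scale fusion scheme may differ.

# Hypothetical sketch of a pseudo-Siamese, activity-guided saliency model
# with scaled-up (EfficientNet) backbones, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b0


def image_activity_map(x: torch.Tensor) -> torch.Tensor:
    """Rough image-activity prior: mean absolute gradient per pixel.

    `x` is a batch of RGB images, shape (B, 3, H, W). This gradient-based
    definition is an assumption; the paper's Image Activity Measure may differ.
    """
    gray = x.mean(dim=1, keepdim=True)                          # (B, 1, H, W)
    dx = F.pad(gray[:, :, :, 1:] - gray[:, :, :, :-1], (0, 1))  # horizontal diffs
    dy = F.pad(gray[:, :, 1:, :] - gray[:, :, :-1, :], (0, 0, 0, 1))  # vertical diffs
    act = dx.abs() + dy.abs()
    return act / (act.amax(dim=(2, 3), keepdim=True) + 1e-8)    # normalize to [0, 1]


class TwoBranchSaliencyNet(nn.Module):
    """Pseudo-Siamese model: one branch sees the raw image (global context),
    the other sees the activity-weighted image (local detail); their features
    are fused into a single saliency map."""

    def __init__(self):
        super().__init__()
        # Separate weights per branch make the pairing "pseudo"-Siamese.
        # Load pretrained ImageNet weights for real use (weights=None avoids a download here).
        self.global_branch = efficientnet_b0(weights=None).features
        self.local_branch = efficientnet_b0(weights=None).features
        self.readout = nn.Conv2d(2 * 1280, 1, kernel_size=1)  # simple fusion head

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        act = image_activity_map(image)
        g = self.global_branch(image)                  # global saliency features
        l = self.local_branch(image * act)             # activity-guided local features
        sal = self.readout(torch.cat([g, l], dim=1))   # fuse and read out
        sal = F.interpolate(sal, size=image.shape[-2:], mode="bilinear",
                            align_corners=False)       # upsample to input resolution
        return torch.sigmoid(sal)                      # saliency map in [0, 1]


if __name__ == "__main__":
    model = TwoBranchSaliencyNet().eval()
    dummy = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        print(model(dummy).shape)  # torch.Size([1, 1, 224, 224])

Keeping separate weights in the two branches (rather than sharing them, as a true Siamese network would) lets the global and activity-guided views specialize, which is the intuition behind a pseudo-Siamese design.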
