Article

Deep Visual Attention Prediction

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 27, Issue 5, Pages 2368-2378

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2017.2787612

Keywords

Visual attention; convolutional neural network; saliency detection; deep learning; human eye fixation

Funding

  1. Beijing Natural Science Foundation [4182056]
  2. National Basic Research Program of China (973 Program) [2013CB328805]
  3. National Natural Science Foundation of China [61272359]
  4. Fok Ying-Tong Education Foundation for Young Teachers
  5. Specialized Fund for Joint Building Program of Beijing Municipal Education Commission

Abstract

In this paper, we aim to predict human eye fixations in free-viewing scenes using an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have substantially improved human attention prediction, CNN-based attention models can still be improved by leveraging multi-scale features more efficiently. We propose a visual attention network that captures hierarchical saliency information, from deep, coarse layers with global saliency context to shallow, fine layers with local saliency responses. The model is built on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields, and the final saliency prediction is achieved through the cooperation of these global and local predictions. The model is trained with deep supervision, in which the supervisory signal is fed directly into multiple intermediate layers, rather than being provided only at the output layer and propagated back to earlier layers as in previous approaches. The model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of earlier approaches that learn multiple network streams with different input scales. Extensive experiments on several challenging benchmark datasets demonstrate that our method achieves state-of-the-art performance with competitive inference time.
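
The two ideas the abstract describes, side outputs taken from multiple convolutional stages (the skip-layer structure) and deep supervision that applies the loss to every side prediction, can be illustrated with a short sketch. The code below is a minimal PyTorch illustration, not the authors' implementation: the three-stage backbone, the channel widths, and the names SkipLayerSaliencyNet and deeply_supervised_loss are hypothetical stand-ins for the deeper CNN trunk used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SkipLayerSaliencyNet(nn.Module):
        """Toy skip-layer saliency network with deeply supervised side outputs."""
        def __init__(self):
            super().__init__()
            # Three backbone stages with growing receptive fields (hypothetical
            # stand-ins for the deeper CNN trunk described in the paper).
            self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                                        nn.ReLU(), nn.MaxPool2d(2))
            self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1),
                                        nn.ReLU(), nn.MaxPool2d(2))
            self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, padding=1),
                                        nn.ReLU(), nn.MaxPool2d(2))
            # One 1x1 "side output" head per stage: shallow heads capture local
            # saliency responses, the deep head captures global saliency context.
            self.side1 = nn.Conv2d(64, 1, 1)
            self.side2 = nn.Conv2d(128, 1, 1)
            self.side3 = nn.Conv2d(256, 1, 1)
            # Fusion layer: the final map cooperates across all side predictions.
            self.fuse = nn.Conv2d(3, 1, 1)

        def forward(self, x):
            h, w = x.shape[2:]
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            f3 = self.stage3(f2)
            # Upsample each side prediction back to the input resolution.
            sides = [F.interpolate(head(f), size=(h, w), mode='bilinear',
                                   align_corners=False)
                     for head, f in ((self.side1, f1), (self.side2, f2),
                                     (self.side3, f3))]
            fused = self.fuse(torch.cat(sides, dim=1))
            return sides, fused

    def deeply_supervised_loss(sides, fused, target):
        # Deep supervision: the ground-truth fixation map penalizes every
        # intermediate side prediction directly, not only the fused output.
        loss = F.binary_cross_entropy_with_logits(fused, target)
        for s in sides:
            loss = loss + F.binary_cross_entropy_with_logits(s, target)
        return loss

    net = SkipLayerSaliencyNet()
    images = torch.randn(2, 3, 224, 224)     # batch of RGB images
    fixations = torch.rand(2, 1, 224, 224)   # ground-truth fixation maps in [0, 1]
    sides, fused = net(images)
    deeply_supervised_loss(sides, fused, fixations).backward()

In this sketch, the shallow stages contribute fine, local saliency responses while the deepest stage contributes coarse, global context; the 1x1 fusion layer learns how the side predictions cooperate, and the deeply supervised loss trains all of them within a single network, avoiding separate multi-scale input streams.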
