Article

Video Saliency Detection via Sparsity-Based Reconstruction and Propagation

Journal

IEEE Transactions on Image Processing
Volume 28, Issue 10, Pages 4819-4831

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2019.2910377

Keywords

Video saliency detection; sparse reconstruction; color and motion prior; forward-backward propagation; global optimization

Funding

  1. National Natural Science Foundation of China [61722112, 61520106002, 61731003, 61332016, 61620106009, U1636214, 61602344]
  2. National Key Research and Development Program of China [2017YFB1002900]
  3. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-SYS013]

Abstract

Video saliency detection aims to continuously discover motion-related salient objects in video sequences. Because it must consider spatial and temporal constraints jointly, video saliency detection is more challenging than image saliency detection. In this paper, we propose a new method for detecting salient objects in videos based on sparse reconstruction and propagation. With the assistance of novel static and motion priors, a single-frame saliency model is first designed to represent the spatial saliency of each individual frame via sparsity-based reconstruction. Then, through a progressive sparsity-based propagation, the sequential correspondence in the temporal space is captured to produce an inter-frame saliency map. Finally, these two maps are incorporated into a global optimization model to achieve spatio-temporal smoothness and global consistency of the salient object across the whole video. Experiments on three large-scale video saliency datasets demonstrate that the proposed method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
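The core idea behind sparsity-based reconstruction saliency is to score each region by how poorly a background dictionary can reconstruct it under a sparsity constraint. The minimal Python sketch below illustrates only that idea, not the paper's full pipeline: the region features, the frame-border background assumption, the Lasso solver, and the alpha value are my assumptions for illustration, and the static/motion priors, the forward-backward propagation, and the global optimization step are omitted.

```python
# Minimal sketch of reconstruction-error saliency (illustrative, not the authors' exact model).
# Each region is sparsely coded over a background dictionary; a large reconstruction
# error means the region is poorly explained by the background, hence likely salient.
import numpy as np
from sklearn.linear_model import Lasso

def reconstruction_saliency(features, bg_index, alpha=0.01):
    """features: (N, d) array of region descriptors (e.g., mean color + motion magnitude).
    bg_index: indices of regions assumed to be background (e.g., frame-border regions).
    Returns one saliency score per region, normalized to [0, 1]."""
    D = features[bg_index].T                      # background dictionary, shape (d, K)
    saliency = np.zeros(len(features))
    for i, f in enumerate(features):
        lasso = Lasso(alpha=alpha, max_iter=2000)
        lasso.fit(D, f)                           # sparse code of region i over the dictionary
        recon = D @ lasso.coef_ + lasso.intercept_
        saliency[i] = np.linalg.norm(f - recon)   # reconstruction error as the saliency cue
    # normalize so the scores can be used as a per-frame saliency map
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```

In the paper this per-frame map is then refined temporally (propagation between frames) and smoothed by a global optimization; the sketch stops at the single-frame reconstruction step.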
