Article

Dense and Sparse Reconstruction Error Based Saliency Descriptor

Journal

IEEE Transactions on Image Processing
Volume 25, Issue 4, Pages -

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TIP.2016.2524198

Keywords

Saliency detection; dense/sparse reconstruction error; sparse representation; context-based propagation; region compactness; Bayesian integration

Funding

  1. National Natural Science Foundation of China [61528101, 61472060, 61371157]
  2. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1149783]

Abstract

In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. Boundary superpixels are extracted as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors against the background templates for each image region. Second, the reconstruction errors are propagated based on contexts obtained from K-means clustering. Third, pixel-level reconstruction errors are computed by integrating the region-level errors over multiple scales. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which leads to more accurate saliency detection. In addition, we introduce a novel Bayesian integration method to combine saliency maps, which is applied to fuse the two saliency measures derived from the dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three standard public salient object detection databases.
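
The core of the descriptor is the pair of reconstruction errors computed against the boundary (background) templates: a dense error from a PCA subspace fit to the templates and a sparse error from an L1-regularized code over the template dictionary. The sketch below illustrates only these two per-region errors; the superpixel features, the PCA dimension, the L1 penalty, and the scikit-learn routines are assumptions made for illustration, not the authors' released implementation (context propagation, multi-scale fusion, compactness weighting, and Bayesian integration are omitted).

```python
# Minimal sketch of the two per-region reconstruction errors described in the
# abstract. All feature choices and parameters are illustrative assumptions,
# not the authors' released code.
import numpy as np
from sklearn.decomposition import PCA, sparse_encode

def dense_reconstruction_error(regions, templates, n_components=4):
    """Error of reconstructing each region from a PCA (dense) appearance
    model fit to the boundary/background template features.

    regions:   (n_regions, d) per-superpixel features (e.g., mean Lab + position)
    templates: (n_templates, d) features of boundary superpixels
    """
    k = min(n_components, templates.shape[0], templates.shape[1])
    pca = PCA(n_components=k).fit(templates)
    recon = pca.inverse_transform(pca.transform(regions))
    return np.sum((regions - recon) ** 2, axis=1)

def sparse_reconstruction_error(regions, templates, l1_penalty=0.05):
    """Error of reconstructing each region from an L1-regularized (sparse)
    code over the background template dictionary."""
    codes = sparse_encode(regions, templates, algorithm="lasso_lars",
                          alpha=l1_penalty)
    recon = codes @ templates
    return np.sum((regions - recon) ** 2, axis=1)

if __name__ == "__main__":
    # Toy usage with random stand-ins for superpixel features.
    rng = np.random.default_rng(0)
    templates = rng.normal(size=(60, 8))   # boundary superpixels = background templates
    regions = rng.normal(size=(200, 8))    # all superpixels in the image
    dense_err = dense_reconstruction_error(regions, templates)
    sparse_err = sparse_reconstruction_error(regions, templates)
    # A larger error means the region is poorly explained by the background
    # model, i.e. it is more likely salient (before propagation/weighting).
    print(dense_err[:5], sparse_err[:5])
```

A region that the background templates cannot reconstruct well under either model receives a high error under both measures, which is the signal the later propagation, compactness weighting, and Bayesian integration steps refine into the final saliency map.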
