Article

New Contour Cue-Based Hybrid Sparse Learning for Salient Object Detection

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 8, Pages 4212-4226

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2018.2881482

Keywords

Object detection; Biological system modeling; Visualization; Optimization; Computational modeling; Saliency detection; Collaborative filtering; contour saliency cue; hybrid sparse learning; radar target detection; salient object detection

Funding

  1. National Natural Science Foundation of China [91438103, 61771376, 61771380, 61703328, 61836009, 91438201, U1730109, U1701267]
  2. Equipment Pre-Research Project of the 13th Five-Year Plan [6140137050206, 414120101026, 6140312010103, 6141A020223, 6141B06160301, 6141B07090102]
  3. Major Research Plan in Shaanxi Province of China [2017ZDXM-GY-103, 2017ZDCXL-GY-03-02]
  4. Foundation of the State Key Laboratory of CEMEE [2018K0101B]


This paper proposes a hybrid saliency model that fuses a contour cue with cues from other domains for robust salient object detection. Compared with traditional methods, the model has better modeling capability in diversified scenes, and its superiority is demonstrated experimentally.
Saliency detection has been a hot topic in recent years, and much effort has been made to address it from different perspectives. However, current saliency models cannot meet the needs of diversified scenes because of their limited generalization capability. To tackle this problem, we propose a hybrid saliency model that fuses heterogeneous visual cues for robust salient object detection. A new contour cue is first introduced to provide discriminative saliency information for scene description; it is formulated as a discrete optimization objective that can be solved efficiently with an iterative algorithm. The contour cue is then incorporated into a hybrid sparse learning model, in which cues from different domains interact and complement each other for joint saliency fusion. This fusion model is parameter-free, and its numerical solution can be obtained with gradient descent methods. Finally, we introduce an object proposal-based collaborative filtering strategy to generate high-quality saliency maps from the fusion results. Compared with traditional methods, the proposed model fuses heterogeneous cues in a unified optimization framework rather than combining them separately, giving it favorable modeling capability in diversified scenes where saliency patterns appear quite differently. To verify the effectiveness of the proposed method, we conduct experiments on four large saliency benchmark datasets and compare it with 26 other state-of-the-art saliency models. Both qualitative and quantitative evaluations indicate the superiority of our method, especially in challenging situations. In addition, we apply our saliency model to ship detection on radar platforms and obtain promising results compared with traditional detectors.
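The abstract only outlines the fusion step (sparse learning over heterogeneous cues, solved by gradient descent), and the exact objective is given only in the paper. The sketch below is therefore not the authors' formulation; it merely illustrates the general idea of learning sparse, nonnegative fusion weights over several cue maps with projected gradient descent and an L1 penalty. The cue names (contour, color contrast, spatial prior), the consensus target, and the function `fuse_saliency_cues` are all hypothetical.

```python
import numpy as np

def fuse_saliency_cues(cues, lam=0.05, lr=0.1, n_iter=200):
    """Illustrative sparse fusion of saliency cue maps (not the paper's exact model).

    cues   : list of HxW arrays in [0, 1], e.g. [contour_cue, color_cue, prior_cue]
    lam    : L1 penalty weight encouraging a sparse set of fusion weights
    lr     : gradient-descent step size
    n_iter : number of projected gradient steps
    Returns the fused HxW saliency map and the learned cue weights.
    """
    X = np.stack([c.ravel() for c in cues], axis=1)        # N x K matrix of cue responses
    # Self-supervised target: the average cue map acts as a rough consensus signal
    # (an assumption made here for the sake of a runnable example).
    y = X.mean(axis=1)
    w = np.full(X.shape[1], 1.0 / X.shape[1])              # start from uniform weights

    for _ in range(n_iter):
        r = X @ w - y                                      # residual against the consensus
        grad = X.T @ r / X.shape[0] + lam * np.sign(w)     # least-squares gradient + L1 subgradient
        w = np.clip(w - lr * grad, 0.0, None)              # projected step: keep weights nonnegative

    fused = (X @ w).reshape(cues[0].shape)
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)  # normalize to [0, 1]
    return fused, w
```

For example, `fused, w = fuse_saliency_cues([contour_map, color_map, prior_map])` would return a fused map plus sparse weights. The paper itself instead solves a unified hybrid sparse learning objective over the cues and further refines the fusion output with an object proposal-based collaborative filtering step.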
