3.8 Proceedings Paper

Flow Guided Recurrent Neural Encoder for Video Salient Object Detection

Publisher

IEEE
DOI: 10.1109/CVPR.2018.00342

Funding

  1. State Key Development Program [2016YFB1001004]
  2. National Natural Science Foundation of China [61702565]
  3. Guangdong Natural Science Foundation Project for Research Teams [2017A030312006]
  4. CCF-Tencent Open Research Fund

Abstract

Image saliency detection has recently witnessed significant progress thanks to deep convolutional neural networks. However, extending state-of-the-art saliency detectors from images to video is challenging: the performance of salient object detection suffers from object or camera motion and from the dramatic changes in appearance contrast that occur in videos. In this paper, we present the flow guided recurrent neural encoder (FGRNE), an accurate and end-to-end learning framework for video salient object detection. It enhances the temporal coherence of per-frame features by exploiting both motion information, in the form of optical flow, and sequential feature evolution, encoded with LSTM networks. It can be considered a universal framework for extending any FCN-based static saliency detector to video salient object detection. Extensive experimental results verify the effectiveness of each part of FGRNE and confirm that our proposed method significantly outperforms state-of-the-art methods on the public DAVIS and FBMS benchmarks.
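
The abstract's core idea (align neighbor-frame features to the reference frame with optical flow, then aggregate them with a recurrent encoder before the saliency readout) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendition, not the authors' released implementation: module names, channel counts, and shapes are illustrative assumptions, and a small convolutional GRU stands in for the LSTM-based encoder described in the paper.

# Minimal sketch (assumed names/shapes) of flow-guided feature warping
# followed by recurrent fusion, loosely following the FGRNE idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_features(feat, flow):
    """Bilinearly warp features (N, C, H, W) by a flow field (N, 2, H, W)."""
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates, broadcast over the batch.
    ys = torch.arange(h, dtype=feat.dtype, device=feat.device).view(1, h, 1).expand(n, h, w)
    xs = torch.arange(w, dtype=feat.dtype, device=feat.device).view(1, 1, w).expand(n, h, w)
    grid_x = xs + flow[:, 0]
    grid_y = ys + flow[:, 1]
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2), (x, y) order
    return F.grid_sample(feat, grid, align_corners=True)


class ConvGRUCell(nn.Module):
    """Small convolutional GRU, used here as a stand-in recurrent encoder."""

    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class FlowGuidedRecurrentEncoder(nn.Module):
    """Fuse flow-warped neighbor features into the reference-frame feature."""

    def __init__(self, channels):
        super().__init__()
        self.rnn = ConvGRUCell(channels)
        self.readout = nn.Conv2d(channels, 1, 1)  # per-pixel saliency logits

    def forward(self, ref_feat, neighbor_feats, flows):
        # neighbor_feats: list of (N, C, H, W); flows: matching list of (N, 2, H, W)
        h = ref_feat
        for feat, flow in zip(neighbor_feats, flows):
            warped = warp_features(feat, flow)  # align neighbor to reference frame
            h = self.rnn(warped, h)             # accumulate temporal context
        return torch.sigmoid(self.readout(h))


if __name__ == "__main__":
    enc = FlowGuidedRecurrentEncoder(channels=64)
    ref = torch.randn(1, 64, 56, 56)
    neighbors = [torch.randn(1, 64, 56, 56) for _ in range(2)]
    flows = [torch.randn(1, 2, 56, 56) for _ in range(2)]
    print(enc(ref, neighbors, flows).shape)  # torch.Size([1, 1, 56, 56])

In the actual method the per-frame features would come from an FCN-based static saliency detector and the flow fields from a learned optical-flow estimator; the random tensors above only keep the sketch self-contained and runnable.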
