3.8 Proceedings Paper

Learning Unsupervised Video Object Segmentation through Visual Attention

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR.2019.00318

Funding

  1. Beijing Natural Science Foundation [4182056]
  2. Fok Ying Tung Education Foundation [141067]
  3. Specialized Fund for Joint Building Program of Beijing Municipal Education Commission

Abstract

This paper conducts a systematic study of the role of visual attention in the Unsupervised Video Object Segmentation (UVOS) task. By elaborately annotating three popular video segmentation datasets (DAVIS16, Youtube-Objects, and SegTrackV2) with dynamic eye-tracking data in the UVOS setting, we quantitatively verify, for the first time, the high consistency of visual attention behavior among human observers, and find a strong correlation between human attention and explicit primary-object judgments during dynamic, task-driven viewing. These novel observations provide in-depth insight into the underlying rationale of UVOS. Inspired by these findings, we decouple UVOS into two sub-tasks: UVOS-driven Dynamic Visual Attention Prediction (DVAP) in the spatiotemporal domain, and Attention-Guided Object Segmentation (AGOS) in the spatial domain. Our UVOS solution enjoys three major merits: 1) modular training without expensive video segmentation annotations; instead, more affordable dynamic fixation data train the initial video attention module, and existing fixation-segmentation paired static/image data train the subsequent segmentation module; 2) comprehensive foreground understanding through multi-source learning; and 3) additional interpretability from the biologically inspired and assessable attention. Experiments on popular benchmarks show that, even without expensive video object mask annotations, our model achieves compelling performance compared with state-of-the-art methods.
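The decoupled DVAP-then-AGOS pipeline can be pictured as two small modules: a spatiotemporal module that predicts a per-frame attention map from a clip, followed by a spatial module that segments the primary object guided by that map. The sketch below only illustrates this decomposition; the module names DVAP and AGOS come from the abstract, while every layer choice (the small conv encoder, the GRU used for temporal aggregation, the concatenation-based attention guidance) is a placeholder assumption, not the authors' actual architecture.

```python
# Minimal sketch of the DVAP -> AGOS decomposition described in the abstract.
# All internal layers are placeholder assumptions, not the paper's design.
import torch
import torch.nn as nn

class DVAP(nn.Module):
    """Dynamic Visual Attention Prediction: video frames -> per-frame attention maps."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                       # per-frame spatial features
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # placeholder temporal model: a GRU run independently at each spatial location
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Conv2d(feat_dim, 1, 1)               # 1-channel attention logits

    def forward(self, frames):                              # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        feats = self.encoder(frames.flatten(0, 1))          # (B*T, F, H, W)
        f = feats.view(b, t, -1, h, w)
        seq = f.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, -1)
        seq, _ = self.temporal(seq)                         # aggregate over time
        f = seq.reshape(b, h, w, t, -1).permute(0, 3, 4, 1, 2)
        return torch.sigmoid(self.head(f.flatten(0, 1))).view(b, t, 1, h, w)

class AGOS(nn.Module):
    """Attention-Guided Object Segmentation: frame + attention map -> object mask."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 1, 1),
        )

    def forward(self, frame, attention):                    # (B, 3, H, W), (B, 1, H, W)
        return torch.sigmoid(self.net(torch.cat([frame, attention], dim=1)))

if __name__ == "__main__":
    clip = torch.randn(1, 5, 3, 64, 64)                    # a 5-frame toy clip
    att = DVAP()(clip)                                      # (1, 5, 1, 64, 64)
    mask = AGOS()(clip[:, 0], att[:, 0])                    # (1, 1, 64, 64)
    print(att.shape, mask.shape)
```

Consistent with the modular-training claim in the abstract, such a DVAP module would be trained on video fixation data alone, and the AGOS module on existing image datasets with paired fixation and segmentation annotations, so no video object masks are required.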
