Article

Robust Video Object Cosegmentation

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 24, Issue 10, Pages 3137-3148

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2015.2438550

Keywords

Video object co-segmentation; energy optimization; object refinement; spatio-temporal scale-invariant feature transform (SIFT) flow

Funding

  1. National Basic Research Program of China (973 Program) [2013CB328805]
  2. National Natural Science Foundation of China [61272359, 61125106]
  3. Australian Research Council through the Discovery Projects Funding Scheme [DP150104645]
  4. Key Research Program through the Chinese Academy of Sciences [KGZDEW-T03]
  5. Specialized Fund for Joint Building Program of Beijing Municipal Education Commission

Abstract

With ever-increasing volumes of video data, automatic extraction of salient object regions has become increasingly important for visual analytics. This surge has also opened up opportunities to exploit, in a cooperative manner, the collective cues encapsulated in multiple videos. However, it brings major challenges as well, such as handling drastic variations in the appearance, motion pattern, and pose of foreground objects, along with indiscriminate backgrounds. Here, we present a cosegmentation framework to jointly discover and segment out common object regions across multiple frames and multiple videos. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that makes no restrictive assumptions on foreground appearance or motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor that integrates across-video correspondence from conventional SIFT flow with interframe motion from optical flow. This novel spatio-temporal SIFT flow yields reliable estimates of the common foreground over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).
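As a rough illustration of how the three cues might enter an energy minimization (a minimal sketch, not the authors' implementation; the weight values, cue maps, and thresholding step are hypothetical stand-ins for the paper's full optimization with pairwise terms):

```python
import numpy as np

def foreground_cost(saliency, consistency, similarity,
                    w_intra=1.0, w_inter=1.0, w_cross=1.0):
    """Combine three per-pixel cue maps (each in [0, 1], higher =
    more foreground-like) into a unary foreground cost, where a
    lower cost means the pixel is more likely common foreground.
    The linear weighting is an illustrative assumption."""
    return (w_intra * (1.0 - saliency)      # intraframe saliency cue
            + w_inter * (1.0 - consistency)  # interframe consistency cue
            + w_cross * (1.0 - similarity))  # across-video similarity cue

def label_foreground(cost, threshold=1.5):
    """Toy labeling: pixels whose combined cost falls below a
    threshold are marked foreground (1). The actual framework
    minimizes an energy with smoothness terms rather than
    thresholding independently per pixel."""
    return (cost < threshold).astype(np.uint8)
```

Swapping the threshold for a graph cut over unary plus pairwise smoothness terms would recover the usual energy-minimization formulation the abstract refers to.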

