Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 28, Issue 8, Pages 1727-1736
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2017.2701279
Keywords
Video co-saliency; video co-segmentation
Funding
- National Basic Research Program of China, 973 Program [2013CB328805]
- National Natural Science Foundation of China [61272359, 61379087, 61602183]
- UGC Direct Grant for Research [4055060]
- Fok Ying-Tong Education Foundation for Young Teachers
- Specialized Fund for Joint Building Program of Beijing Municipal Education Commission
Abstract
We introduce the term video co-saliency to denote the task of extracting the common noticeable, or salient, regions from multiple relevant videos. The proposed video co-saliency approach accounts for both inter-video foreground correspondences and intra-video saliency stimuli to emphasize the salient foreground regions of video frames and, at the same time, disregard irrelevant visual information of the background. Compared with image co-saliency, it is more reliable owing to its use of the temporal information in video sequences. Benefiting from the discriminability of video co-saliency, we present a unified framework for segmenting out the common salient regions of relevant videos, guided by the video co-saliency prior. Unlike naive video co-segmentation approaches that employ simple color differences and local motion features, the presented video co-saliency provides a more powerful indicator of the common salient regions, thus enabling efficient video co-segmentation. Extensive experiments show that the proposed method successfully infers video co-saliency and extracts the common salient regions, outperforming the state-of-the-art methods.
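The abstract describes two fused cues: an intra-video saliency map per frame and an inter-video foreground-correspondence map, with the fused result serving as a prior for co-segmentation. A minimal sketch of this idea follows; the multiplicative fusion, the renormalization, and the fixed threshold are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def co_saliency_map(intra_saliency, inter_correspondence):
    """Fuse intra-video saliency with inter-video foreground
    correspondence into one co-saliency map.

    Both inputs are H x W arrays in [0, 1]. The product-then-normalize
    fusion here is a stand-in for the paper's fusion scheme: a region
    scores highly only if it is salient within its own video AND
    matched across the relevant videos.
    """
    fused = intra_saliency * inter_correspondence
    rng = fused.max() - fused.min()
    if rng > 0:
        fused = (fused - fused.min()) / rng
    return fused

def co_segment(co_saliency, threshold=0.5):
    """Binary co-segmentation mask guided by the co-saliency prior
    (a simple threshold; the paper uses a full segmentation framework)."""
    return (co_saliency >= threshold).astype(np.uint8)

# Toy example: a central blob that is both salient in-frame and
# consistently matched across videos survives; background is suppressed.
intra = np.zeros((4, 4)); intra[1:3, 1:3] = 1.0   # intra-video saliency
inter = np.zeros((4, 4)); inter[1:3, 1:3] = 0.9   # inter-video correspondence
mask = co_segment(co_saliency_map(intra, inter))
```

The multiplicative form makes the prior conservative: background regions that are salient in only one video, or matched but not salient, are both disregarded, which mirrors the abstract's requirement that the extracted regions be common *and* salient.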