Article

Robust Object Co-Segmentation Using Background Prior

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 27, Issue 4, Pages 1639-1651

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2017.2781424

Keywords

Object co-segmentation; background prior; self-learned graph; manifold ranking

Funding

  1. National Natural Science Foundation of China [61473231]

Given a set of images that contain objects from a common category, object co-segmentation aims to automatically discover and segment those common objects in each image. Over the past few years, object co-segmentation has received considerable attention in the computer vision community. However, existing approaches are usually designed with misleading assumptions, unscalable priors, or subjective computational models, and thus lack the robustness needed to handle complex, unconstrained real-world image content. This paper proposes a novel two-stage co-segmentation framework that mainly addresses this robustness issue. In the proposed framework, we first introduce the concept of the union background and use it to more robustly suppress the image backgrounds contained in the given image groups. We then weaken the requirement for strong prior knowledge by using the background prior instead, which improves robustness when scaling to unconstrained image content. Based on this weak background prior, we propose a novel MR-SGS model, i.e., manifold ranking with a self-learned graph structure, which infers suitable graph structures in a data-driven manner rather than relying on a subjectively designed fixed graph structure. This capacity is critical for further improving the robustness of inferring the foreground/background probability of each image pixel. Comprehensive experiments and comparisons with other state-of-the-art approaches demonstrate the effectiveness of the proposed work.
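
As an illustration of the manifold-ranking step that MR-SGS builds on, below is a minimal sketch of classic manifold ranking (Zhou et al.) on a fixed affinity graph, with image-boundary superpixels used as background-prior seeds. All names here (`manifold_ranking`, `W`, `y`, `alpha`) are illustrative assumptions, not the paper's code; in particular, the paper's MR-SGS additionally learns the graph structure from data, which this sketch does not reproduce.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Classic manifold ranking on a fixed graph (illustrative sketch only;
    the paper's MR-SGS also self-learns the graph structure).

    W     : (n, n) symmetric non-negative affinity matrix over superpixels
    y     : (n,) query vector, e.g. 1 for background-prior seeds, 0 otherwise
    alpha : propagation weight in (0, 1)

    Returns the closed-form ranking scores f* = (I - alpha * S)^{-1} y,
    where S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard isolated nodes
    S = W * np.outer(d_inv_sqrt, d_inv_sqrt)          # D^{-1/2} W D^{-1/2}
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Hypothetical usage of the background prior: seed the query with
# boundary superpixels (assumed background) and convert the background
# ranking into a per-superpixel foreground probability.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    A = rng.random((n, n))
    W = (A + A.T) / 2.0      # toy symmetric affinities
    np.fill_diagonal(W, 0.0)
    y = np.zeros(n)
    y[:10] = 1.0             # pretend the first 10 superpixels touch the image border
    bg_rank = manifold_ranking(W, y)
    bg_rank = (bg_rank - bg_rank.min()) / (bg_rank.max() - bg_rank.min() + 1e-12)
    fg_prob = 1.0 - bg_rank  # higher = more likely foreground
```

The closed-form solve is exact but O(n^3); for large graphs the same scores can be approximated by iterating f <- alpha * S @ f + (1 - alpha) * y until convergence.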
