Article

Guided Co-Segmentation Network for Fast Video Object Segmentation

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TCSVT.2020.3010293

Keywords

Feature extraction; Object segmentation; Task analysis; Motion segmentation; Pipelines; Decoding; Search problems; Video segmentation; co-segmentation; semi-supervised

Funding

  1. National Research Foundation Singapore [AISG-RP-2018-003]
  2. NTU Start-Up Grant
  3. MOE [RG126/17 (S), RG28/18 (S), RG22/19 (S)]


The study addresses online semi-supervised video object segmentation and introduces the GCSeg network, which achieves state-of-the-art performance by incorporating inter-frame relationships at multiple time scales together with an adaptive search strategy.
Semi-supervised video object segmentation is the task of propagating instance masks given in the first frame to the entire video. It is challenging because it usually suffers from heavy occlusions, large deformations, and large appearance variations of objects. To alleviate these problems, many existing works apply time-consuming techniques such as fine-tuning, post-processing, or extracting optical flow, which makes them intractable for online segmentation. In our work, we focus on online semi-supervised video object segmentation. We propose a GCSeg (Guided Co-Segmentation) network, mainly composed of a Reference Module and a Co-segmentation Module, to simultaneously incorporate short-term, middle-term, and long-term temporal inter-frame relationships. Moreover, we propose an Adaptive Search Strategy to reduce the risk of propagating inaccurate segmentation results to subsequent frames. Our GCSeg network achieves state-of-the-art online semi-supervised video object segmentation performance on the DAVIS 2016 and DAVIS 2017 datasets.
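
The abstract names the network's components but gives no concrete form. The sketch below, in PyTorch, shows one plausible way guidance from three time scales (first frame, an intermediate frame, and the previous frame) could be fused to segment the current frame, with a simple confidence threshold standing in for the Adaptive Search Strategy. All module names, channel counts, and the fusion scheme here are assumptions for illustration, not the paper's actual design.

```python
# Illustrative-only sketch of fusing multi-time-scale guidance for
# video object segmentation. Names and shapes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny stand-in for a feature-extraction backbone."""
    def __init__(self, in_ch, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class GCSegSketch(nn.Module):
    """Fuses guidance from three time scales:
    long-term   -- first frame with its given mask,
    middle-term -- an earlier frame with its predicted mask,
    short-term  -- the previous frame with its predicted mask.
    """
    def __init__(self, feat_ch=64):
        super().__init__()
        self.img_enc = Encoder(3, feat_ch)     # current RGB frame
        self.ref_enc = Encoder(4, feat_ch)     # RGB frame + 1-channel mask
        self.fuse = nn.Conv2d(4 * feat_ch, feat_ch, 1)
        self.head = nn.Conv2d(feat_ch, 1, 1)   # per-pixel mask logits

    def forward(self, cur, long_ref, mid_ref, short_ref):
        f_cur = self.img_enc(cur)
        guides = [self.ref_enc(torch.cat(ref, dim=1))
                  for ref in (long_ref, mid_ref, short_ref)]
        x = F.relu(self.fuse(torch.cat([f_cur, *guides], dim=1)))
        logits = self.head(x)
        # Restore the input resolution.
        return F.interpolate(logits, size=cur.shape[-2:],
                             mode="bilinear", align_corners=False)

def select_guide(candidates, threshold=0.9):
    """Hypothetical stand-in for the Adaptive Search Strategy: prefer
    the most recent past frame whose mask confidence clears a threshold,
    falling back to the reliable first frame otherwise."""
    for frame, mask, conf in reversed(candidates):
        if conf >= threshold:
            return frame, mask
    return candidates[0][0], candidates[0][1]

if __name__ == "__main__":
    net = GCSegSketch()
    cur = torch.randn(1, 3, 128, 128)
    ref = (torch.randn(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
    print(net(cur, ref, ref, ref).shape)  # torch.Size([1, 1, 128, 128])
```

A real implementation would replace the toy encoders with the paper's backbone and implement the Reference Module and Co-segmentation Module as described in the paper; the sketch only conveys the multi-time-scale guidance idea.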
