Article

Coarse-to-Fine Video Instance Segmentation With Factorized Conditional Appearance Flows

Journal

IEEE-CAA JOURNAL OF AUTOMATICA SINICA
Volume 10, Issue 5, Pages 1192-1208

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JAS.2023.123456

Keywords

Target tracking; Video sequences; Semantics; Object segmentation; Predictive models; Gaussian distribution; Benchmark testing; Embedding learning; generative model; normalizing flows; video instance segmentation (VIS)

Abstract

We introduce a novel method using a new generative model that automatically learns effective representations of the target and background appearance to detect, segment, and track each instance in a video sequence. Unlike current discriminative tracking-by-detection solutions, our hierarchical structural embedding learning predicts higher-quality masks with accurate boundary details over the spatio-temporal space via normalizing flows. We formulate instance inference as hierarchical spatio-temporal embedding learning across time and space. Given a video clip, our method first coarsely locates the pixels belonging to a particular instance with a Gaussian distribution, and then builds a novel mixing distribution that sharpens the instance boundary by fusing hierarchical appearance embedding information in a coarse-to-fine manner. For the mixing distribution, we estimate the distribution parameters with a factorized conditional normalizing flow to improve segmentation performance. Comprehensive qualitative, quantitative, and ablation experiments on three representative video instance segmentation benchmarks (i.e., YouTube-VIS19, YouTube-VIS21, and OVIS) demonstrate the effectiveness of the proposed method. More impressively, the superior performance of our model on an unsupervised video object segmentation dataset (i.e., DAVIS19) proves its generalizability. Our algorithm implementation is publicly available at https://github.com/zyqin19/HEVis.
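The coarse-to-fine idea in the abstract can be illustrated with a minimal toy sketch: model an instance's pixels as a Gaussian in embedding space for coarse localization, then refine the per-pixel density with a one-step conditional affine flow. All names, shapes, and the random (untrained) conditioner below are illustrative assumptions, not the paper's actual architecture or its factorized conditional appearance flows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N pixels with D-dim appearance embeddings; by construction,
# the first half of the pixels belongs to the instance.
N, D = 200, 4
emb = rng.normal(size=(N, D))
inst = np.arange(N) < N // 2
emb[inst] += 2.0  # instance pixels live in a shifted region of embedding space

# --- Coarse stage: locate instance pixels with a diagonal Gaussian.
mu, var = emb[inst].mean(0), emb[inst].var(0) + 1e-6
log_p = -0.5 * ((emb - mu) ** 2 / var + np.log(2 * np.pi * var)).sum(1)
coarse = 1.0 / (1.0 + np.exp(-(log_p - np.median(log_p))))  # soft coarse mask

# --- Fine stage: a one-step conditional affine flow. A (random, untrained)
# conditioner maps the coarse score to a per-pixel shift and log-scale,
# standing in for a learned factorized conditional flow.
Wc = rng.normal(scale=0.1, size=(1, 2 * D))
params = coarse[:, None] @ Wc                   # (N, 2D) conditioner output
shift, log_scale = params[:, :D], np.tanh(params[:, D:])
z = (emb - mu - shift) * np.exp(-log_scale)     # invertible affine map
# Change-of-variables density: standard-normal log-prob minus log|det Jacobian|.
log_p_fine = (-0.5 * (z ** 2 + np.log(2 * np.pi))).sum(1) - log_scale.sum(1)
fine = log_p_fine > np.median(log_p_fine)       # refined binary mask

iou = (fine & inst).sum() / (fine | inst).sum()
print(f"toy IoU of refined mask vs. ground truth: {iou:.2f}")
```

In a trained model the conditioner weights would be learned and the flow would have multiple coupling steps; the point here is only the two-stage structure: a cheap Gaussian localization followed by a flow-based density refinement.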
