Article

Coarse-to-Fine Video Instance Segmentation With Factorized Conditional Appearance Flows

Journal

IEEE-CAA JOURNAL OF AUTOMATICA SINICA
Volume 10, Issue 5, Pages 1192-1208

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JAS.2023.123456

Keywords

Target tracking; Video sequences; Semantics; Object segmentation; Predictive models; Gaussian distribution; Benchmark testing; Embedding learning; generative model; normalizing flows; video instance segmentation (VIS)

Abstract

We introduce a novel method built on a new generative model that automatically learns effective representations of the target and background appearance to detect, segment, and track each instance in a video sequence. Unlike current discriminative tracking-by-detection solutions, our hierarchical structural embedding learning predicts higher-quality masks with accurate boundary details over the spatio-temporal space via normalizing flows. We formulate instance inference as hierarchical spatio-temporal embedding learning across time and space. Given a video clip, our method first coarsely locates the pixels belonging to a particular instance with a Gaussian distribution and then builds a novel mixing distribution that refines the instance boundary by fusing hierarchical appearance embedding information in a coarse-to-fine manner. For the mixing distribution, we estimate the distribution parameters with a factorized conditional normalizing flow to improve segmentation performance. Comprehensive qualitative, quantitative, and ablation experiments on three representative video instance segmentation benchmarks (i.e., YouTube-VIS19, YouTube-VIS21, and OVIS) demonstrate the effectiveness of the proposed method. More impressively, the superior performance of our model on an unsupervised video object segmentation dataset (i.e., DAVIS19) proves its generalizability. Our algorithm implementations are publicly available at https://github.com/zyqin19/HEVis.
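The coarse-to-fine pipeline sketched in the abstract (Gaussian coarse localization of instance pixels, then a flow-based transform used to refine the mixing distribution) can be illustrated in a few lines. This is a hedged sketch, not the authors' implementation: the isotropic Gaussian scoring, the single RealNVP-style affine coupling step, and the function names `coarse_instance_masks` and `affine_coupling` are assumptions made for clarity.

```python
import numpy as np

def coarse_instance_masks(embeddings, centers, sigma=1.0):
    # embeddings: (N, D) pixel embeddings; centers: (K, D) instance centers.
    # Score each pixel under an isotropic Gaussian per instance, then
    # normalize across instances into soft (coarse) masks.
    # NOTE: illustrative stand-in for the paper's coarse localization stage.
    d2 = ((embeddings[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2 / (2.0 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)  # (N, K), rows sum to 1

def affine_coupling(z, cond_net):
    # One affine coupling step of a normalizing flow: the second half of
    # each embedding is shifted and scaled conditioned on the first half.
    # Returns the transformed embeddings and log|det J| per sample, which
    # a flow uses to evaluate exact likelihoods under the base density.
    z1, z2 = np.split(z, 2, axis=-1)
    shift, log_scale = cond_net(z1)          # conditioner network (assumed)
    z2_t = z2 * np.exp(log_scale) + shift
    log_det = log_scale.sum(axis=-1)
    return np.concatenate([z1, z2_t], axis=-1), log_det

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(12, 4))
    centers = np.array([[0.0, 0.0, 0.0, 0.0], [2.0, 2.0, 2.0, 2.0]])
    masks = coarse_instance_masks(emb, centers)
    print(masks.shape)  # (12, 2); each row is a soft assignment over instances
```

Because the coupling step keeps the first half of `z` fixed, it is trivially invertible, and stacking several such steps (with halves swapped between steps) yields the kind of expressive, tractable density the abstract's factorized conditional flow relies on.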

