Journal
2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021)
Pages 15479-15489
Publisher
IEEE
DOI: 10.1109/ICCV48922.2021.01521
Funding
- Woven Core, Inc.
The study introduces a novel label propagation method that combines semantic and geometric cues to efficiently auto-label videos. Experimental results on the ApolloScape dataset show a 13.1-point improvement in mIoU over prior label propagation. In addition, training with auto-labelled frames leads to competitive results on other semantic segmentation benchmarks.
Deep learning models for semantic segmentation rely on expensive, large-scale, manually annotated datasets. Labelling is a tedious process that can take hours per image. Automatically annotating video sequences by propagating sparsely labelled frames through time is a more scalable alternative. In this work, we propose a novel label propagation method, termed Warp-Refine Propagation, that combines semantic cues with geometric cues to efficiently auto-label videos. Our method learns to refine geometrically warped labels and infuse them with learned semantic priors in a semi-supervised setting by leveraging cycle-consistency across time. We quantitatively show that our method improves label propagation by a noteworthy margin of 13.1 mIoU on the ApolloScape dataset. Furthermore, by training with the auto-labelled frames, we achieve competitive results on three semantic-segmentation benchmarks, improving the state-of-the-art by large margins of 1.8 and 3.61 mIoU on NYU-V2 and KITTI, respectively, while matching the current best results on Cityscapes.
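The geometric half of the approach can be illustrated with a minimal sketch: backward-warp a label map from a sparsely labelled frame to a neighbouring target frame along optical flow, using nearest-neighbour sampling since class ids must not be interpolated. This is a hedged illustration only; the flow estimator, the learned refinement network, and the cycle-consistency training described in the abstract are not shown, and `warp_labels` is a hypothetical helper name, not the paper's API.

```python
import numpy as np

def warp_labels(labels, flow):
    """Backward-warp a per-pixel label map to a target frame.

    labels: (H, W) integer class ids for the labelled frame.
    flow:   (H, W, 2) optical flow from the target frame back to the
            labelled frame, as (x-offset, y-offset) per pixel, e.g.
            from an off-the-shelf estimator (assumption).
    Returns a (H, W) label map aligned with the target frame.
    """
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the flow to the source pixel; round (nearest neighbour)
    # because label ids are categorical, and clip at image borders.
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return labels[src_y, src_x]
```

In the full method, such warped labels are noisy at occlusions and flow errors, which is why the paper refines them with learned semantic priors rather than using them directly.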