Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 32, Issue 10, Pages 7019-7032
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2022.3179021
Keywords
Adaptation models; Compounds; Training; Semantics; Data models; Image segmentation; Transfer learning; Semantic segmentation; open compound domain adaptation; source-free domain adaptation
Funding
- European Union [951911]
- Progetti di Ricerca di Interesse Nazionale Project CREATIVE Prot [2020ZSL9F9]
- National Research Foundation, Singapore, through the AI Singapore Program [AISG2-RP-2020-016]
- Singapore Ministry of Education [MOE-T2EP20120-0011]
Abstract
In this work, we introduce a new concept, named source-free open compound domain adaptation (SF-OCDA), and study it in semantic segmentation. SF-OCDA is more challenging than traditional domain adaptation but more practical: it jointly considers (1) the issues of data privacy and data storage and (2) the scenario of multiple target domains and unseen open domains. In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model, and the model is evaluated on samples from both the target and unseen open domains. To solve this problem, we present an effective framework that separates training into two stages: (1) pre-training a generalized source model and (2) adapting a target model with self-supervised learning. Within this framework, we propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level, which benefits both stages. First, CPSS significantly improves the generalization ability of the source model, providing more accurate pseudo-labels for the later stage. Second, CPSS reduces the influence of noisy pseudo-labels and prevents the model from overfitting to the target domain during self-supervised learning, consistently boosting performance on the target and open domains. Experiments demonstrate that our method produces state-of-the-art results on the C-Driving dataset. Furthermore, our model also achieves leading performance on CityScapes for domain generalization.