Proceedings Paper

SelfReg: Self-supervised Contrastive Regularization for Domain Generalization

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00948

Funding

  1. Institute of Information & Communications Technology Planning & Evaluation(IITP) - Korea government(MSIT)
  2. National Research Foundation of Korea [NRF-2021R1C1C1009608]
  3. Basic Science Research Program [NRF-2021R1A6A1A13044830]
  4. ICT Creative Consilience program [IITP-2021-2020-0-01819]

Domain generalization aims to improve the model's generalization performance by extracting domain-invariant features to address domain shift. Recent contrastive learning-based domain generalization approaches have shown promising results, but their performance heavily relies on the quality and quantity of negative data pairs. To address this issue, a new regularization method called SelfReg is proposed, which uses only positive data pairs to improve performance effectively.
In general, an experimental environment for deep learning assumes that the training and test datasets are sampled from the same distribution. However, in real-world situations, a difference between the two distributions, i.e., domain shift, may occur, which becomes a major factor impeding the generalization performance of the model. The research field that addresses this problem is called domain generalization, and it alleviates domain shift by extracting domain-invariant features either explicitly or implicitly. In recent studies, contrastive learning-based domain generalization approaches have been proposed and have achieved high performance. These approaches require sampling of negative data pairs. However, the performance of contrastive learning fundamentally depends on the quality and quantity of negative data pairs. To address this issue, we propose a new regularization method for domain generalization based on contrastive learning, called self-supervised contrastive regularization (SelfReg). The proposed approach uses only positive data pairs and thus resolves the various problems caused by negative pair sampling. Moreover, we propose a class-specific domain perturbation layer (CDPL), which makes it possible to apply mixup augmentation effectively even when only positive data pairs are used. The experimental results show that the techniques incorporated into SelfReg contribute to performance in a complementary manner. On the recent benchmark DomainBed, the proposed method shows performance comparable to conventional state-of-the-art alternatives.
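The core idea of a positive-pair-only contrastive regularization can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `selfreg_style_loss` and the plain L2 alignment between same-class feature vectors are assumptions for clarity (the paper's actual SelfReg loss additionally combines in-batch feature- and logit-level losses with CDPL and mixup):

```python
import numpy as np

def selfreg_style_loss(features, labels, rng=None):
    """Sketch of a positive-pair-only contrastive regularization.

    For each sample, pick another in-batch sample of the same class
    (a positive pair) and penalize the squared L2 distance between
    their feature vectors. No negative pairs are needed.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)

    # For each sample i, index of a randomly chosen same-class partner.
    partner = np.empty(len(labels), dtype=int)
    for i, y in enumerate(labels):
        candidates = np.flatnonzero(labels == y)
        partner[i] = rng.choice(candidates)

    # Align each feature vector with its positive partner (squared L2).
    diff = features - features[partner]
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```

If all same-class features are identical across domains, the loss is zero; within-class feature discrepancies (e.g., domain-specific variation) raise it, which is the behavior the regularizer rewards.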
