Proceedings Paper

Domain Generalization via Gradient Surgery

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00656


Funding

  1. UNL [CAID-0620190100145LI, CAID-50220140100084LI]
  2. ANPCyT (PICT)
  3. Argentina's National Scientific and Technical Research Council (CONICET)


Abstract
In real-life applications, machine learning models often face scenarios where there is a change in data distribution between training and test domains. When the aim is to make predictions on distributions different from those seen at training, we face a domain generalization problem. Methods that address this issue learn a model using data from multiple source domains, and then apply this model to the unseen target domain. Our hypothesis is that when training with multiple domains, conflicting gradients within each mini-batch contain information specific to the individual domains which is irrelevant to the others, including the test domain. If left untouched, such disagreement may degrade generalization performance. In this work, we characterize the conflicting gradients emerging in domain shift scenarios and devise novel gradient agreement strategies based on gradient surgery to alleviate their effect. We validate our approach on image classification tasks with three multi-domain datasets, showing the value of the proposed agreement strategy in enhancing the generalization capability of deep learning models in domain shift scenarios.
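The core idea of the abstract can be illustrated with a minimal sketch of gradient surgery in the style of PCGrad-like projection: when two per-domain gradients conflict (negative dot product), the conflicting component of one is projected out before the gradients are averaged. This is a generic illustration of the technique the paper builds on, not the authors' exact agreement strategies; the function name `surgery` and the flattened-gradient representation are assumptions made here for brevity.

```python
import numpy as np

def surgery(grads):
    """Combine per-domain gradients, projecting out pairwise conflicts.

    A generic PCGrad-style sketch (not the paper's exact strategies):
    each gradient is deconflicted against the original gradients of the
    other domains, then the adjusted gradients are averaged.
    """
    adjusted = [np.asarray(g, dtype=float).copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            gj = np.asarray(gj, dtype=float)
            dot = float(np.dot(gi, gj))
            if dot < 0.0:  # conflicting directions: remove the conflicting component
                gi -= dot / (float(np.dot(gj, gj)) + 1e-12) * gj
    return np.mean(adjusted, axis=0)

# Two conflicting domain gradients: after surgery, neither adjusted
# gradient points against the other domain's update direction.
g = surgery([np.array([1.0, 0.0]), np.array([-1.0, 1.0])])
print(g)  # → [0.25 0.75]
```

In a deep learning setting the same operation would be applied to flattened per-domain mini-batch gradients before the optimizer step; non-conflicting gradients (non-negative dot product) pass through unchanged, so the procedure only intervenes when domain-specific information pulls the update in opposing directions.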


