Proceedings Paper

Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00642

Funding

  1. Vision Semantics Limited
  2. Alan Turing Institute Turing Fellowship
  3. Innovate UK Industrial Challenge Project on Developing and Commercialising Intelligent Video Analytics Solutions for Public Safety
  4. Queen Mary University of London Principal's Scholarship [98111-571149]

Abstract
Contemporary domain generalization (DG) and multi-source unsupervised domain adaptation (UDA) methods mostly collect data from multiple domains together for joint optimization. However, this centralized training paradigm poses a threat to data privacy and is not applicable when data are non-shared across domains. In this work, we propose a new approach called Collaborative Optimization and Aggregation (COPA), which aims at optimizing a generalized target model for decentralized DG and UDA, where data from different domains are non-shared and private. Our base model consists of a domain-invariant feature extractor and an ensemble of domain-specific classifiers. In an iterative learning process, we optimize a local model for each domain, and then centrally aggregate local feature extractors and assemble domain-specific classifiers to construct a generalized global model, without sharing data from different domains. To improve generalization of feature extractors, we employ hybrid batch-instance normalization and collaboration of frozen classifiers. For better decentralized UDA, we further introduce a prediction agreement mechanism to overcome local disparities towards central model aggregation. Extensive experiments on five DG and UDA benchmark datasets show that COPA is capable of achieving comparable performance against the state-of-the-art DG and UDA methods without the need for centralized data collection in model training.
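The iterative scheme described above can be sketched in a few lines: each domain trains a local model privately; the server then averages the local feature extractors into a shared domain-invariant extractor and keeps all domain-specific classifiers as an ensemble. This is a minimal illustrative sketch, assuming FedAvg-style weight averaging and logit averaging for the ensemble; the function names and dictionary-based model representation are hypothetical, and COPA's actual aggregation (e.g. the collaboration of frozen classifiers and the prediction agreement mechanism) involves more than this.

```python
import numpy as np

def aggregate_feature_extractors(local_weights):
    """Average per-domain feature-extractor weights into one shared
    extractor (FedAvg-style averaging; a simplification of COPA's
    central aggregation step)."""
    return {name: np.mean([w[name] for w in local_weights], axis=0)
            for name in local_weights[0]}

def assemble_global_model(local_models):
    """Build the generalized global model: one aggregated
    domain-invariant feature extractor plus an ensemble of all
    domain-specific classifiers. No raw data leaves any domain."""
    extractor = aggregate_feature_extractors(
        [m["extractor"] for m in local_models])
    classifiers = [m["classifier"] for m in local_models]
    return {"extractor": extractor, "classifiers": classifiers}

def ensemble_predict(global_model, features):
    """Predict by averaging the outputs of the domain-specific
    classifiers over the shared features."""
    outputs = [clf(features) for clf in global_model["classifiers"]]
    return np.mean(outputs, axis=0)
```

In an actual round, each domain would download the aggregated extractor, continue local optimization, and upload its updated weights, repeating until convergence.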
