Proceedings Paper

Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00642

Keywords

-

Funding

  1. Vision Semantics Limited
  2. Alan Turing Institute Turing Fellowship
  3. Innovate UK Industrial Challenge Project on Developing and Commercialising Intelligent Video Analytics Solutions for Public Safety [98111-571149]
  4. Queen Mary University of London Principal's Scholarship

Abstract

This paper proposes COPA, a new approach that optimizes a generalized target model for decentralized domain generalization and multi-source unsupervised domain adaptation. By optimizing local models and centrally aggregating feature extractors and classifiers, COPA achieves performance comparable to state-of-the-art methods without requiring centralized data collection.
Contemporary domain generalization (DG) and multi-source unsupervised domain adaptation (UDA) methods mostly collect data from multiple domains together for joint optimization. However, this centralized training paradigm poses a threat to data privacy and is not applicable when data are non-shared across domains. In this work, we propose a new approach called Collaborative Optimization and Aggregation (COPA), which aims at optimizing a generalized target model for decentralized DG and UDA, where data from different domains are non-shared and private. Our base model consists of a domain-invariant feature extractor and an ensemble of domain-specific classifiers. In an iterative learning process, we optimize a local model for each domain, and then centrally aggregate local feature extractors and assemble domain-specific classifiers to construct a generalized global model, without sharing data from different domains. To improve generalization of feature extractors, we employ hybrid batch-instance normalization and collaboration of frozen classifiers. For better decentralized UDA, we further introduce a prediction agreement mechanism to overcome local disparities towards central model aggregation. Extensive experiments on five DG and UDA benchmark datasets show that COPA is capable of achieving comparable performance against the state-of-the-art DG and UDA methods without the need for centralized data collection in model training.
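The collaborate-then-aggregate loop described in the abstract can be illustrated with a short federated-style sketch. The code below is an assumption-laden illustration, not the authors' implementation: it omits COPA's hybrid batch-instance normalization, collaboration of frozen classifiers, and the prediction agreement mechanism, and simply alternates local per-domain optimization with central averaging of feature-extractor weights while keeping the domain-specific classifiers as an ensemble. All model shapes, data, and hyperparameters are placeholders.

```python
# Minimal sketch (not the authors' code) of decentralized optimization and
# aggregation: each domain trains a local copy of the model on its own private
# data; only feature-extractor weights are averaged centrally, and the
# per-domain classifiers are retained as an ensemble.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

FEAT_DIM, NUM_CLASSES, NUM_DOMAINS = 64, 7, 3

def make_feature_extractor():
    # Stand-in for the domain-invariant feature extractor.
    return nn.Sequential(nn.Linear(128, FEAT_DIM), nn.ReLU())

def make_classifier():
    # Stand-in for one domain-specific classifier head.
    return nn.Linear(FEAT_DIM, NUM_CLASSES)

def local_update(extractor, classifier, loader, epochs=1):
    # Optimize a local model on one domain's non-shared data only.
    model = nn.Sequential(extractor, classifier)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return extractor.state_dict(), classifier

def average_state_dicts(state_dicts):
    # Central aggregation: element-wise average of feature-extractor weights.
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Synthetic per-domain datasets standing in for private domain data.
loaders = [
    DataLoader(TensorDataset(torch.randn(256, 128),
                             torch.randint(0, NUM_CLASSES, (256,))),
               batch_size=32, shuffle=True)
    for _ in range(NUM_DOMAINS)
]

global_extractor = make_feature_extractor()
classifiers = [make_classifier() for _ in range(NUM_DOMAINS)]

for round_idx in range(5):  # iterative collaborate-then-aggregate rounds
    local_states = []
    for d in range(NUM_DOMAINS):
        # Each domain starts from the current global feature extractor.
        local_extractor = copy.deepcopy(global_extractor)
        state, classifiers[d] = local_update(local_extractor, classifiers[d], loaders[d])
        local_states.append(state)
    # Aggregate feature extractors centrally; keep classifiers as an ensemble.
    global_extractor.load_state_dict(average_state_dicts(local_states))

def ensemble_predict(x):
    # Generalized global model: averaged extractor + ensemble of domain classifiers.
    feats = global_extractor(x)
    logits = torch.stack([clf(feats) for clf in classifiers]).mean(dim=0)
    return logits.argmax(dim=1)

print(ensemble_predict(torch.randn(4, 128)))
```

In this sketch no raw data ever leaves a domain; only model parameters are exchanged, which is the property the paper relies on for privacy-preserving decentralized DG and UDA.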

