Article

Domain Adaptive Ensemble Learning

Journal

IEEE Transactions on Image Processing
Volume 30, Pages 8008-8018

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TIP.2021.3112012

Keywords

Adaptation models; Training; Collaboration; Feature extraction; Computational modeling; Head; Neural networks; Domain adaptation; domain generalization; collaborative ensemble learning

Funding

  1. National Natural Science Foundation of China [61876176, U1713208]
  2. National Key Research and Development Program of China [2020YFC2004800]
  3. Science and Technology Service Network Initiative of Chinese Academy of Sciences [KFJ-STS-QYZX-092]
  4. Shanghai Committee of Science and Technology, China [20DZ1100800]

Abstract

The study addresses the problem of generalizing deep neural networks from multiple source domains to a target domain and proposes a unified framework, domain adaptive ensemble learning (DAEL), which improves accuracy on unseen target domains by training domain-specific experts collaboratively.
The problem of generalizing deep neural networks from multiple source domains to a target domain is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem; otherwise, it is a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert on its own domain but a non-expert on the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other and be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. To deal with unlabeled target data under the UDA setting, where a real expert does not exist, DAEL uses pseudo labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins.
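
For readers who want a concrete picture of the architecture and the collaborative ensemble learning step described above, the following is a minimal PyTorch sketch. The ResNet-18 backbone, the cross-entropy loss for each expert on its own source domain, the squared-error consistency loss between the pseudo-target expert and the non-expert ensemble, and the names DAEL and dael_source_step are illustrative assumptions, not the authors' exact formulation; the pseudo-label branch used for unlabeled target data under UDA is only noted in a comment.

# Minimal sketch of the DAEL idea described in the abstract (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class DAEL(nn.Module):
    """Shared CNN feature extractor with one classifier head per source domain."""

    def __init__(self, num_classes: int, num_sources: int):
        super().__init__()
        backbone = models.resnet18(weights=None)   # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.features = backbone                   # shared across all domains
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_sources)]
        )

    def forward(self, x, head_idx=None):
        f = self.features(x)
        if head_idx is not None:                   # a single domain expert
            return self.heads[head_idx](f)
        return torch.stack([h(f) for h in self.heads])  # all experts: (K, B, C)


def dael_source_step(model, batches):
    """One collaborative training step over labelled source batches.

    `batches` is a list of (x, y) pairs, one per source domain. Each domain is
    used in turn as a pseudo-target: its own expert supervises the averaged
    prediction of the remaining (non-expert) heads.
    """
    ce_loss, cr_loss = 0.0, 0.0
    K = len(batches)
    for i, (x, y) in enumerate(batches):
        logits_all = model(x)                              # (K, B, C)
        ce_loss += F.cross_entropy(logits_all[i], y)       # expert on its own domain
        with torch.no_grad():
            expert_prob = logits_all[i].softmax(dim=-1)    # fixed supervisory signal
        others = [logits_all[j].softmax(dim=-1) for j in range(K) if j != i]
        ensemble_prob = torch.stack(others).mean(dim=0)    # non-expert ensemble
        cr_loss += ((ensemble_prob - expert_prob) ** 2).sum(dim=-1).mean()
    # Under the UDA setting, the abstract notes that pseudo labels on unlabeled
    # target data supervise the ensemble; that branch is omitted from this sketch.
    return ce_loss / K + cr_loss / K

Calling dael_source_step once per iteration with one mini-batch per source domain and backpropagating the returned loss reproduces the collaborative scheme outlined in the abstract; detaching the expert's prediction keeps the supervisory signal fixed while only the non-expert ensemble (and the shared feature extractor) is updated by the consistency term.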
