4.6 Article

Discriminant Geometrical and Statistical Alignment With Density Peaks for Domain Adaptation

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 2, Pages 1193-1206

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.2994875

Keywords

Task analysis; Manifolds; Risk management; Interference; Optimization; Cybernetics; Statistical distributions; Domain adaptation (DA); landmark selection; subspace learning; transfer learning (TL)

Funding

  1. Key Program of National Natural Science Foundation of China [61933002]
  2. National Key Research and Development Program of China [2018YFB1309300]

Abstract

Unsupervised domain adaptation (DA) aims to perform classification tasks on the target domain by leveraging rich labeled data in the existing source domain. The key insight of DA is to reduce domain divergence by learning domain-invariant features or transferable instances. Despite its rapid development, several challenges remain. At the feature level, aligning both domains in only a single way (i.e., geometrical or statistical) has limited ability to reduce the domain divergence. At the instance level, interfering instances often obstruct learning a discriminant subspace when performing the geometrical alignment. At the classifier level, only minimizing the empirical risk on the source domain may result in negative transfer. To tackle these challenges, this article proposes a novel DA method, called discriminant geometrical and statistical alignment (DGSA). DGSA first aligns the geometrical structure of both domains by projecting the original space onto a Grassmann manifold, then matches the statistical distributions of both domains by minimizing their maximum mean discrepancy on the manifold. In the former step, DGSA selects only the density peaks to learn the Grassmann manifold, thereby reducing the influence of interfering instances. In addition, DGSA exploits the high-confidence soft labels of target landmarks to learn a more discriminant manifold. In the latter step, a structural risk minimization (SRM) classifier is learned to match the distributions (both marginal and conditional) and predict the target labels at the same time. Extensive experiments on object recognition and human activity recognition tasks demonstrate that DGSA can achieve better performance than the comparison methods.
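Two of the building blocks named in the abstract can be illustrated compactly: the maximum mean discrepancy (MMD) used for statistical alignment, and density-peak scoring used to pick landmark instances. The sketch below is illustrative only, not the authors' implementation: it uses a linear-kernel MMD and the standard rho-times-delta density-peak score, with the cutoff distance `d_c` as an assumed tuning parameter.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Empirical MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.

    Xs, Xt: (n_s, d) and (n_t, d) feature matrices for the two domains.
    A value near 0 means the two marginal distributions have similar means.
    """
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def density_peak_scores(X, d_c=1.0):
    """Score each instance by rho_i * delta_i (density-peak clustering idea).

    rho_i  : local density = number of neighbours within cutoff d_c
    delta_i: distance to the nearest point with a higher density
             (for the globally densest point, the max distance to any point)
    High-scoring points are cluster-center-like landmarks; low scorers are
    candidate interfering instances to down-weight or discard.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1) - 1  # subtract 1 to exclude the point itself
    n = len(X)
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho * delta
```

In a DGSA-style pipeline one would rank instances by `density_peak_scores`, keep the top fraction as landmarks for learning the manifold, and then minimize an MMD-style term such as `mmd_linear` over the projected features.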


