Article

Gallery-sensitive single sample face recognition based on domain adaptation

Journal

NEUROCOMPUTING
Volume 458, Pages 626-638

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.06.136

Keywords

Domain adaptation; Discriminative analysis; Single sample face recognition; Transfer learning; Gallery-sensitive

Funding

  1. National Natural Science Foundation of China [61866007, 61662014, 61876010]
  2. Natural Science Foundation of Guangxi District [2018GXNSFDA138006]
  3. Guangxi Key Laboratory of Trusted Software [KX201721]
  4. Collaborative Innovation Center of Cloud Computing and Big Data [YD16E12]
  5. Image Intelligent Processing Project of Key Laboratory Fund [GIIP2005]

Abstract

Taking advantage of labeled auxiliary training data whose distribution is similar to that of the gallery, single sample face recognition (SSFR) has achieved encouraging performance. In many real-world applications, however, such an auxiliary training dataset is difficult to collect, while it may be easier to obtain an unlabeled target training dataset whose distribution is similar to that of the gallery and a labeled source training dataset whose distribution may differ from that of the gallery. How can these three datasets be effectively leveraged to handle SSFR? To address this issue, this paper proposes a new method, Gallery-Sensitive Single Sample Face Recognition based on Domain Adaptation (GS-DA). First, GS-DA employs TSD (targetize the source domain) to construct a common subspace and a targetized source domain. Second, it projects each gallery image into the common subspace and obtains its sparse representation there. Third, it reconstructs each gallery image from the targetized source domain to estimate the within-class and between-class scatter matrices of the gallery. Finally, it learns a discriminant model by maximizing the sum of the traces of the between-class scatter matrices of the gallery and the targetized source domain while minimizing the sum of the traces of the total scatter matrices of the gallery and the target training data. Experimental results on five datasets illustrate the superiority of GS-DA in leveraging these three datasets for SSFR. (c) 2020 Elsevier B.V. All rights reserved.
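The final step of the abstract describes a trace-based discriminant criterion over pooled scatter matrices. The sketch below is not the authors' implementation; it illustrates one common way to realize such a criterion, using a generalized eigenvalue problem as a surrogate for the trace objective. The regularization term, the toy data, and the function names (scatter_matrices, learn_projection) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of a trace-based
# discriminant step: favour the pooled between-class scatter of the gallery
# and the targetized source domain while suppressing the pooled total scatter
# of the gallery and the target training data.
import numpy as np
from scipy.linalg import eigh


def scatter_matrices(X, y):
    """Within-class (Sw), between-class (Sb) and total (St) scatter of X (n x d)."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    return Sw, Sb, Sw + Sb


def learn_projection(Sb_gallery, Sb_source, St_gallery, St_target, dim, reg=1e-3):
    """Solve A w = lambda B w with A = pooled between-class scatter and
    B = pooled total scatter, keeping the `dim` leading eigenvectors."""
    A = Sb_gallery + Sb_source
    B = St_gallery + St_target + reg * np.eye(Sb_gallery.shape[0])  # keep B invertible
    eigvals, eigvecs = eigh(A, B)      # eigenvalues returned in ascending order
    return eigvecs[:, -dim:]           # leading discriminant directions


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for the gallery, targetized source and unlabeled target data.
    Xg, yg = rng.normal(size=(40, 30)), rng.integers(0, 10, 40)
    Xs, ys = rng.normal(size=(200, 30)), rng.integers(0, 20, 200)
    Xt = rng.normal(size=(150, 30))

    _, Sb_g, St_g = scatter_matrices(Xg, yg)
    _, Sb_s, _ = scatter_matrices(Xs, ys)
    St_t = np.cov(Xt, rowvar=False) * (Xt.shape[0] - 1)  # total scatter of unlabeled data

    W = learn_projection(Sb_g, Sb_s, St_g, St_t, dim=15)
    print(W.shape)  # (30, 15): probe and gallery features would be matched in this subspace
```

The generalized eigenproblem is used here only as a convenient, closed-form stand-in for the "maximize between-class traces while minimizing total-scatter traces" objective stated in the abstract; the paper's exact optimization may differ.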
