Article

PGM-face: Pose-guided margin loss for cross-pose face recognition

Journal

NEUROCOMPUTING
Volume 460, Pages 154-165

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.07.006

Keywords

Cross-pose face recognition; Representation learning; Loss function

Funding

  1. National Natural Science Foundation of China [U1833128, 61703077]
  2. Chengdu Key Research and Development Support Program [2019-YF09-00129-GX]


Abstract

The proposed Pose-Guided Margin Loss (PGM-Face) and Pose-Guided Representation Transfer Network (PGRT-Net) learn more separable face features under arbitrary head poses, improving cross-pose face recognition performance over traditional methods.

Cross-pose face recognition has been a challenging task due to the diversity and arbitrariness of head pose. Current methods addressing this task fall into two categories: one learns pose-robust face representations, and the other rotates the faces via face synthesis. Unlike these common methods, our proposed Pose-Guided Margin Loss (PGM-Face) extends the dimensions of the linear transformation matrix of each class to estimate the head pose, so the learned features of each class are softly clustered under the guidance of the head pose, and the inter-class margin between two classes under the same pose is larger than when the face features are distributed erratically (see Fig. 1). The similarity of the learned features is measured after they are transformed into the same target pose via our proposed Pose-Guided Representation Transfer Network (PGRT-Net). Compared with pose-robust representation learning methods, our method learns more separable face features under arbitrarily specified poses. Compared with face rotation methods, our method rotates the face features instead of the face images, which reduces the loss of identity information during synthesis. Quantitative and qualitative experiments on several challenging verification and recognition databases show that the proposed method achieves state-of-the-art performance and that its individual components yield substantial improvements. (c) 2021 Elsevier B.V. All rights reserved.
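The core idea of a pose-guided margin, as described in the abstract, can be sketched as per-class prototypes replicated across pose bins, with a cosine margin enforced against the best-matching bin. The following is a minimal, hypothetical NumPy illustration only: the function name, the pose-bin parameterisation of the class weight matrix, the max-over-bins matching rule, and the margin/scale values are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def pose_guided_margin_loss(feats, labels, weight, margin=0.35, scale=30.0):
    """Hypothetical sketch of a pose-guided margin loss.

    feats:  (B, D) face embeddings
    labels: (B,)   identity labels
    weight: (C, P, D) per-class prototypes, one per pose bin
            (the paper's exact parameterisation may differ)
    """
    # L2-normalise embeddings and prototypes so dot products are cosines
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = weight / np.linalg.norm(weight, axis=2, keepdims=True)
    # cosine similarity to every (class, pose-bin) prototype: (B, C, P)
    cos = np.einsum('bd,cpd->bcp', feats, w)
    # match each sample to its nearest pose bin within each class
    cos = cos.max(axis=2)                        # (B, C)
    # additive cosine margin on the ground-truth class (CosFace-style)
    onehot = np.eye(cos.shape[1])[labels]
    logits = scale * (cos - margin * onehot)
    # softmax cross-entropy over identities
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

Taking the maximum over pose bins means only the prototype of the matched pose receives gradient for a given sample, which is one simple way features could end up "softly clustered" by head pose within each identity.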

