Journal
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
Volume 17, Issue -, Pages 2272-2283
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIFS.2022.3183410
Keywords
Face recognition; Measurement; Representation learning; Feature extraction; Training; Optimization; Generative adversarial networks; Metric learning; Pose-invariant feature learning
Funding
- National Natural Science Foundation of China [62171251]
- Natural Science Foundation of Guangdong Province [2020A1515010711]
- Special Foundations for the Development of Strategic Emerging Industries of Shenzhen [JCYJ20200109143010272, JCYJ20200109143035495, CJGJZD20210408092804011, JSGG20211108092812020]
- Oversea Cooperation Foundation of Tsinghua
A novel Frontal-Centers Guided Loss (FCGFace) is proposed for face recognition, achieving better performance on profile faces. Unlike existing methods, FCGFace takes viewpoints into consideration and adaptively adjusts the feature distribution to form compact identity clusters.
In recent years, face recognition has made remarkable breakthroughs due to the emergence of deep learning. However, compared with frontal face recognition, many deep face recognition models still suffer severe performance degradation when handling profile faces. To address this issue, we propose a novel Frontal-Centers Guided Loss (FCGFace) to obtain highly discriminative features for face recognition. Most existing discriminative feature learning approaches project features from the same class into a separate latent subspace. These methods model the distribution only at the identity level and ignore the latent relationship between frontal and profile viewpoints. In contrast, FCGFace takes viewpoints into consideration by modeling the distribution at both the identity level and the viewpoint level. At the identity level, a softmax-based loss is employed for a relatively coarse classification. At the viewpoint level, centers of frontal face features are defined to guide the optimization in a more refined way. Specifically, FCGFace adaptively adjusts the distribution of profile face features and narrows the gap between them and frontal face features during different training stages to form compact identity clusters. Extensive experimental results on popular benchmarks, including cross-pose datasets (CFP-FP, CPLFW, VGGFace2-FP, and Multi-PIE) and non-cross-pose datasets (YTF, LFW, AgeDB-30, CALFW, IJB-B, IJB-C, and RFW), demonstrate the superiority of FCGFace over state-of-the-art competitors.
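The abstract describes a two-level objective: a softmax-based identity-level term plus a viewpoint-level term that pulls profile features toward per-identity frontal centers. The following is a minimal illustrative sketch of that idea, not the paper's exact formulation: the function names, the squared-distance penalty, the balancing weight `lam`, and the assumption that `frontal_centers` holds precomputed mean frontal embeddings per identity are all hypothetical choices made here for clarity.

```python
import numpy as np

def softmax_ce(logits, labels):
    """Identity-level term: standard softmax cross-entropy over classifier logits."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fcg_loss_sketch(embeddings, logits, labels, is_frontal, frontal_centers, lam=0.1):
    """
    Hypothetical sketch of a frontal-centers guided loss:
      - identity level: softmax cross-entropy (coarse classification);
      - viewpoint level: pull each profile embedding toward the frontal
        center of its identity (here, a squared Euclidean penalty).
    frontal_centers[c] is assumed to be the mean frontal embedding of
    identity c; lam balances the two terms.
    """
    id_loss = softmax_ce(logits, labels)
    profile = ~is_frontal
    if profile.any():
        diffs = embeddings[profile] - frontal_centers[labels[profile]]
        vp_loss = (diffs ** 2).sum(axis=1).mean()
    else:
        vp_loss = 0.0
    return id_loss + lam * vp_loss
```

In this sketch, moving a profile embedding away from its identity's frontal center increases the loss, which mirrors the paper's stated goal of narrowing the gap between profile and frontal features to form compact identity clusters.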