Journal
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 14, Issue 1, Pages 718-731
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2020.3029531
Keywords
Faces; Image recognition; Face recognition; Feature extraction; Training; Adaptation models; Generative adversarial networks; Cross-view facial expression recognition; domain adaptation; generative adversarial network; unsupervised learning; semi-supervised learning
We propose an unsupervised cross-view facial expression adaptation network (UCFEAN) that simultaneously generates and recognizes cross-view facial expressions in images in an unsupervised manner. The main idea of UCFEAN is to convert unsupervised domain adaptation between two image spaces with different appearances into semi-supervised learning (SSL) in feature spaces with the same semantic content. Cyclic image generation of cross-view facial expressions, based on a generative adversarial network (GAN), projects unlabelled target images and labelled source images into corresponding feature spaces with the same semantic content, which enables unsupervised feature learning for the target images. The expression labels of the projected target features can then be learned from the projected source features, because the distributions of the projected features in the two domains are close enough for knowledge transfer via SSL. Three techniques are developed to train UCFEAN in an effective and stable manner. Extensive experiments evaluate UCFEAN on two multi-view facial expression image databases, RaFD and Multi-PIE. The results show that the proposed method generates realistic target images of facial expressions and recognizes cross-view facial expressions with high precision.
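The core transfer step described above (learning target labels from projected source features once the two feature distributions are close) can be illustrated with a minimal sketch. This is not the paper's actual architecture: the GAN projection is replaced by synthetic Gaussian features, and the SSL step is stood in for by a simple 1-nearest-neighbour label transfer; all names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for projected features: after the GAN-based cyclic generation,
# source and target features are assumed to share close distributions.
# We simulate that with two well-separated Gaussian clusters per class.
def make_features(n_per_class, centers, noise):
    feats, labels = [], []
    for label, c in enumerate(centers):
        feats.append(c + noise * rng.standard_normal((n_per_class, len(c))))
        labels.extend([label] * n_per_class)
    return np.vstack(feats), np.array(labels)

centers = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
src_x, src_y = make_features(50, centers, 0.5)  # labelled source features
tgt_x, tgt_y = make_features(50, centers, 0.5)  # target features (tgt_y only for evaluation)

# 1-nearest-neighbour label transfer: a toy stand-in for the SSL step that
# infers target labels from the projected source features.
def transfer_labels(src_x, src_y, tgt_x):
    dists = np.linalg.norm(tgt_x[:, None, :] - src_x[None, :, :], axis=2)
    return src_y[np.argmin(dists, axis=1)]

pred = transfer_labels(src_x, src_y, tgt_x)
accuracy = (pred == tgt_y).mean()
print(f"transfer accuracy: {accuracy:.2f}")
```

The point of the sketch is the precondition the abstract states: label transfer across domains only works because the projected feature distributions nearly coincide; with well-separated, aligned clusters the transferred labels are almost all correct.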