Article

TSEV-GAN: Generative Adversarial Networks with Target-aware Style Encoding and Verification for facial makeup transfer

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 257, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109958

Keywords

Generative Adversarial Networks; Makeup transfer; Style verification; Image translation

Funding

  1. National Natural Science Foundation of China [62072189]
  2. Research Grants Council of the Hong Kong Special Administrative Region [CityU 11201220]
  3. Natural Science Foundation of Guangdong Province, China [2022A1515011160]

Abstract

This paper introduces a GAN-based generative model for accurately extracting and transferring makeup styles from reference facial images to target faces. The proposed model uses target-aware makeup style encoding and verification, and improves the accuracy and fidelity of makeup transfer by encoding the difference map and learning style consistency.
Generative Adversarial Networks (GANs) have brought great progress in image-to-image translation. The problem we focus on is how to accurately extract and transfer the makeup style from a reference facial image to a target face. We propose a GAN-based generative model with Target-aware makeup Style Encoding and Verification, referred to as TSEV-GAN. This design is motivated by two insights: (a) When directly encoding the reference image, the encoder may attend to regions that are not necessarily important or desirable. To precisely capture the style, we encode the difference map between the reference image and its corresponding de-makeup image, and then inject the obtained style code into a generator. (b) A generic real-fake discriminator cannot ensure the correctness of the rendered makeup pattern. In view of this, we impose style representation learning on a conditional discriminator. By identifying style consistency between the reference and synthesized images, the generator is induced to precisely replicate the desired makeup. We perform extensive experiments on existing makeup benchmarks to verify the effectiveness of our improvement strategies in transferring a variety of makeup styles. Moreover, the proposed model outperforms other state-of-the-art makeup transfer methods in terms of makeup similarity and irrelevant-content preservation. (c) 2022 Elsevier B.V. All rights reserved.
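The two ideas in the abstract can be illustrated with a minimal sketch: encode the makeup style from the difference map between a reference image and its de-makeup counterpart, inject that style code into content features, and score style consistency between reference and synthesized images. Note the paper's actual model uses learned CNN encoders, a generator, and a conditional discriminator; the per-channel statistics, AdaIN-style injection, and function names below are illustrative assumptions, not the published architecture.

```python
import numpy as np

def style_code_from_difference(reference, demakeup):
    """Sketch of target-aware style encoding: the difference map isolates the
    makeup signal, so statistics of it serve as a toy 'style code'.
    (The paper uses a learned encoder on the difference map instead.)"""
    diff = reference - demakeup                      # makeup-only residual
    mu = diff.mean(axis=(1, 2))                      # per-channel mean
    sigma = diff.std(axis=(1, 2))                    # per-channel spread
    return mu, sigma

def inject_style(content_feat, style_mu, style_sigma, eps=1e-5):
    """Inject the style code into content features via AdaIN-style
    normalization (one common injection mechanism, assumed here)."""
    c_mu = content_feat.mean(axis=(1, 2), keepdims=True)
    c_sigma = content_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mu) / (c_sigma + eps)
    return normalized * style_sigma[:, None, None] + style_mu[:, None, None]

def style_consistency(code_ref, code_fake):
    """Toy stand-in for the style-verification signal: distance between the
    reference and synthesized style codes (smaller = more consistent)."""
    return float(np.mean((code_ref - code_fake) ** 2))
```

A generator trained with such a consistency term is pushed to reproduce the reference makeup rather than merely fooling a generic real-fake discriminator.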
