Article

Covered Style Mining via Generative Adversarial Networks for Face Anti-spoofing

Journal

PATTERN RECOGNITION
Volume 132, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108957

Keywords

Face anti-spoofing; Generative adversarial networks; Deep learning

Funding

  1. Yunnan Provincial Major Science and Technology Special Plan Projects [202202AD080003]
  2. National Natural Science Foundation of China [62172354]
  3. Program of Yunnan Key Laboratory of Intelligent Systems and Computing [202205AG070003]


In this paper, a novel frame-level face anti-spoofing method, Covered Style Mining-GAN (CSM-GAN), is proposed, which converts face anti-spoofing detection into a style transfer process that requires no prior information. Comprehensive experiments show that the proposed method outperforms current state-of-the-art approaches and produces greater visual diversity in its difference maps.
Face anti-spoofing, a key security measure in biometric authentication, is a central part of automatic face recognition systems. Recently, two families of approaches have performed particularly well against presentation attacks: 1) pixel-wise supervision-based methods, which provide fine-grained pixel information to learn specific auxiliary maps; and 2) anomaly detection-based methods, which treat face anti-spoofing as an open-set training task and learn spoof detectors using only bona fide data, where the detectors are shown to generalize well to unknown attacks. However, these approaches depend on handcrafted prior information to control the generation of intermediate difference maps and easily fall into local optima. In this paper, we propose a novel frame-level face anti-spoofing method, Covered Style Mining-GAN (CSM-GAN), which converts face anti-spoofing detection into a style transfer process that requires no prior information. Specifically, CSM-GAN has four main components: the Covered Style Encoder (CSE), responsible for mining the difference map containing the photography style and discriminative clues; the Auxiliary Style Classifier (ASC), consisting of several stacked Difference Capture Blocks (DCB) that distinguish bona fide faces from spoof faces; and the Style Transfer Generator (STG) and Style Adversarial Discriminator (SAD), which together form a generative adversarial network that performs the style transfer. Comprehensive experiments on several benchmark datasets show that the proposed method not only outperforms current state-of-the-art methods but also produces greater visual diversity in its difference maps. (C) 2022 Elsevier Ltd. All rights reserved.
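
To make the data flow between the four components concrete, the following is a minimal PyTorch sketch that uses simple convolutional stand-ins for each module. Only the component names and their roles (CSE, ASC with stacked DCBs, STG, SAD) come from the abstract; every layer choice, channel width, and the exact wiring are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

# Hypothetical sketch of the four CSM-GAN components named in the abstract.
# All layer sizes, block counts, and connections are assumptions for illustration.

class CoveredStyleEncoder(nn.Module):
    """CSE: mines a difference map carrying photography style and spoof clues."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, in_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)  # difference map, same spatial size as the input

class DifferenceCaptureBlock(nn.Module):
    """DCB: one stacked unit of the auxiliary classifier (assumed conv + pool)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.net(x)

class AuxiliaryStyleClassifier(nn.Module):
    """ASC: stacked DCBs that separate bona fide from spoof difference maps."""
    def __init__(self, in_ch=3, widths=(32, 64, 128)):
        super().__init__()
        blocks, ch = [], in_ch
        for w in widths:
            blocks.append(DifferenceCaptureBlock(ch, w))
            ch = w
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(ch, 2)  # bona fide vs. spoof logits
    def forward(self, diff_map):
        feat = self.blocks(diff_map).mean(dim=(2, 3))  # global average pooling
        return self.head(feat)

class StyleTransferGenerator(nn.Module):
    """STG: re-renders the face under the mined style (simplified form)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch * 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, in_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, face, diff_map):
        return self.net(torch.cat([face, diff_map], dim=1))

class StyleAdversarialDiscriminator(nn.Module):
    """SAD: judges whether a generated face carries the target style."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    face = torch.randn(2, 3, 128, 128)            # a mini-batch of face crops
    cse, asc = CoveredStyleEncoder(), AuxiliaryStyleClassifier()
    stg, sad = StyleTransferGenerator(), StyleAdversarialDiscriminator()
    diff = cse(face)                               # mined difference map
    logits = asc(diff)                             # bona fide / spoof prediction
    fake = stg(face, diff)                         # style-transferred face
    scores = sad(fake)                             # adversarial feedback for the STG
    print(logits.shape, fake.shape, scores.shape)

In this sketch the adversarial pair (STG and SAD) is only wired for a forward pass; the actual training objectives that balance the classifier and the GAN losses are described in the paper itself.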
