Article

Detecting Deepfake Voice Using Explainable Deep Learning Techniques

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 8

Publisher

MDPI
DOI: 10.3390/app12083926

Keywords

explainable artificial intelligence (XAI); deepfake detection; human-centered artificial intelligence

Funding

  1. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korean government (MSIT) [2020-0-01373]
  2. Bio & Medical Technology Development Program of the National Research Foundation (NRF) - Korean government (MSIT) [NRF-2021M3E5D2A01021156]
  3. DGIST R&D program of the Ministry of Science and ICT of Korea [22-IT-10-03]

Abstract

This paper presents a human-perception-level interpretability method for deepfake audio detection and proposes the novel concept of providing a fresh interpretation through attribution scores.
Fake media generated by methods such as deepfakes have become indistinguishable from real media, but detection techniques have not improved at the same pace. Furthermore, the lack of interpretability in deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we adapt several explainable artificial intelligence (XAI) methods originally developed for image classification to an audio-related task. In addition, by examining the human cognitive process behind XAI on image classification, we suggest a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation based on attribution scores can be provided.
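As a concrete illustration of attribution scores on an audio model, the sketch below applies an attribution method to a spectrogram-based fake-voice classifier. It is a minimal, hedged example: the toy CNN, the mel-spectrogram front end, and the choice of Integrated Gradients (via Captum) are illustrative assumptions, not the authors' actual model or the specific XAI methods evaluated in the paper.

    # Minimal sketch: attribution scores for a spectrogram-based deepfake-voice
    # classifier. The architecture and XAI method are illustrative assumptions,
    # not the setup used in the paper.
    import torch
    import torch.nn as nn
    import torchaudio
    from captum.attr import IntegratedGradients

    class SpectrogramClassifier(nn.Module):
        """Toy CNN that scores a mel-spectrogram as real (0) or fake (1)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)),
            )
            self.classifier = nn.Linear(16 * 8 * 8, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Convert a 1-second, 16 kHz clip into a mel-spectrogram "image".
    waveform = torch.randn(1, 16000)  # placeholder for a real audio clip
    to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
    spec = to_mel(waveform).unsqueeze(0)  # (batch=1, channel=1, n_mels, time)

    model = SpectrogramClassifier().eval()

    # Integrated Gradients assigns each time-frequency bin an attribution score
    # for the "fake" class; high-magnitude bins mark the regions the model used.
    ig = IntegratedGradients(model)
    attributions = ig.attribute(spec, target=1)
    print(attributions.shape)  # same shape as the spectrogram input

Because the attribution map shares the spectrogram's frequency and time axes, it can be rendered directly over the input, which is the kind of human-perception-level reading of attribution scores the abstract describes.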
