Journal
NEUROCOMPUTING
Volume 565
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2023.126872
Keywords
Deepfake detection; High-compressed video; Temporal inconsistency; Spatial-frequency cues; Contrastive distillation
Deepfake detection in high-resolution videos has made significant progress in recent years, but detecting high-compressed deepfake videos remains challenging because of the low quality of the synthesized videos. Existing video-level approaches fail to fully exploit the spatiotemporal inconsistencies in low-quality, high-compressed deepfake videos, leading to poor generalization and robustness. In this work, we propose a Contrastive Spatio-Temporal Distilling (CSTD) approach that leverages spatial-frequency cues and temporal-contrastive alignment to improve high-compressed deepfake video detection. Our approach employs a two-stage spatiotemporal video encoder to fully exploit spatiotemporal inconsistency information. A fine-grained spatial-frequency distillation module retrieves forgery cues that remain invariant across the spatial and frequency domains of high-compressed deepfake videos. In addition, a mutual-information temporal-contrastive distillation module enhances temporally correlated information and transfers temporal structural knowledge from the teacher model to the student model. Extensive experiments and visualizations on public benchmarks demonstrate the effectiveness and robustness of our method on low-quality, high-compressed deepfake videos against state-of-the-art competitors.
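The abstract does not spell out the distillation objective, so the following is only an illustrative sketch of the general idea behind contrastive teacher-student distillation: each student feature (e.g., from a clip of the high-compressed video) is pulled toward its teacher counterpart (e.g., from the corresponding high-quality video) and pushed away from the teacher features of other clips, using a generic InfoNCE-style loss. The function name, shapes, and temperature value are assumptions, not the paper's actual formulation.

```python
import numpy as np

def infonce_distill_loss(student_feats, teacher_feats, temperature=0.1):
    """Generic InfoNCE-style contrastive distillation loss (illustrative sketch).

    student_feats, teacher_feats: (N, D) arrays of per-clip features.
    Clip i's teacher feature is the positive; the teacher features of the
    other N-1 clips in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature               # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: clip i matched with its own teacher feature
    return -float(np.mean(np.diag(log_probs)))
```

The temperature controls how sharply the loss concentrates on the hardest negatives; negatives here come for free from the other clips in the batch, a common choice in contrastive representation distillation.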