Article

Adversarial Analysis for Source Camera Identification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2020.3047084

Keywords

Cameras; Feature extraction; Forensics; Task analysis; Perturbation methods; Neural networks; Training; Adversarial attacks; fingerprint copy-move attack; joint feature-based auto-learning attack; relation mismatch

Funding

  1. National Natural Science Foundation of China [U1936117, U1736119, 61972395, 61772111]
  2. Beijing Natural Science Foundation [4192058]

Abstract

Recent studies highlight the vulnerability of convolutional neural networks (CNNs) to adversarial attacks, which also calls into question the reliability of forensic methods. Existing adversarial attacks generate one-to-one noise, meaning these methods do not learn the fingerprint information. We therefore introduce two powerful attacks: the fingerprint copy-move attack and the joint feature-based auto-learning attack. To validate the performance of these attacks, we go a step further and introduce a stronger defense mechanism, relation mismatch, which expands the characterization differences of classifiers within the same classification network. Extensive experiments show that relation mismatch is superior at recognizing adversarial examples and confirm that the proposed fingerprint-based attacks are more powerful. Both attacks also exhibit excellent transferability to unknown samples. The PyTorch implementations of these methods can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/Source-attack.
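The abstract does not spell out how the fingerprint copy-move attack operates. In source camera identification, a camera is typically identified by the sensor pattern noise (PRNU) fingerprint it leaves in every image, so a fingerprint-aware attack would suppress the source camera's residual and implant a target camera's fingerprint. Below is a minimal PyTorch sketch of that general idea; the helper names (`denoise`, `estimate_fingerprint`, `fingerprint_copy_move`), the blur-based denoiser, and the `strength` parameter are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def denoise(img: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    # Crude content estimate via reflect-padded average blurring; the PRNU
    # literature normally uses a wavelet denoiser here.
    pad = kernel_size // 2
    return F.avg_pool2d(F.pad(img, (pad, pad, pad, pad), mode="reflect"),
                        kernel_size, stride=1)

def estimate_fingerprint(images: torch.Tensor) -> torch.Tensor:
    # Average the noise residuals of many shots from the *target* camera:
    # scene content averages out, the sensor pattern noise remains.
    # images: (N, C, H, W), values in [0, 1].
    residuals = images - denoise(images)
    return residuals.mean(dim=0, keepdim=True)  # (1, C, H, W)

def fingerprint_copy_move(image: torch.Tensor, target_fp: torch.Tensor,
                          strength: float = 0.05) -> torch.Tensor:
    # "Copy" the target camera's fingerprint and "move" it onto the input:
    # suppress the source residual, then implant the target fingerprint.
    # image: (1, C, H, W), same spatial size as target_fp (an assumption).
    cleaned = denoise(image)
    return (cleaned + strength * target_fp).clamp(0.0, 1.0)

# Toy usage with random stand-in data (no real camera images).
target_shots = torch.rand(16, 3, 64, 64)  # images from the target camera
victim = torch.rand(1, 3, 64, 64)         # image whose origin is to be faked
fp = estimate_fingerprint(target_shots)
attacked = fingerprint_copy_move(victim, fp)
```

Read this way, the attack is many-to-one rather than one-to-one: the same estimated target_fp can be implanted into any image of matching size, which is consistent with the transferability to unknown samples claimed above.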

Authors


Reviews

Primary Rating

4.7
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommendations

No data available