Article

Man-in-the-Middle Attacks Against Machine Learning Classifiers Via Malicious Generative Models

Journal

IEEE Transactions on Dependable and Secure Computing

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2020.3021008

Keywords

Deep neural network; adversarial example; security


The paper explores vulnerabilities of DNN models under MitM attacks and finds that traditional adversarial example attacks are not applicable to MitM adversaries. By using generative models to craft adversarial examples on the fly, the attack difficulty can be mitigated and the success rate increased.
Deep Neural Networks (DNNs) are vulnerable to deliberately crafted adversarial examples. In the past few years, many efforts have been spent on exploring query-optimisation attacks that find adversarial examples of either black-box or white-box DNN models, as well as on countermeasures against those attacks. In this article, we explore vulnerabilities of DNN models under the umbrella of Man-in-the-Middle (MitM) attacks, which have not been investigated before. From the perspective of an MitM adversary, the aforementioned adversarial example attacks are no longer viable. First, such attacks must acquire the outputs from the models multiple times before actually being launched, which is difficult for an MitM adversary in practice. Second, such attacks are one-off and cannot be directly generalised to new data examples, which decreases the attacker's rate of return. In contrast, using generative models to craft adversarial examples on the fly mitigates these drawbacks. However, the adversarial capability of generative models such as the Variational Auto-Encoder (VAE) has not been extensively studied. Therefore, given a classifier, we investigate using a VAE decoder either to transform benign inputs into their adversarial counterparts or to decode outputs from benign VAE encoders into adversarial examples. The proposed method endows MitM attackers with greater capability. Based on our evaluation, the proposed attack achieves success rates above 95 percent on both the MNIST and CIFAR10 datasets, which is better than or comparable to state-of-the-art query-optimisation attacks. Meanwhile, the attack is $10^4$ times faster than the query-optimisation attacks.
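To illustrate the MitM setting described in the abstract, the sketch below wires a toy pipeline: a benign input is intercepted in transit, rewritten in a single forward pass, and only then forwarded to the victim classifier. Everything here is a hypothetical stand-in, not the paper's method: the victim is a plain linear model, and a hand-written perturbation toward the runner-up class plays the role that the trained malicious VAE decoder plays in the paper.

```python
import numpy as np

# Hypothetical stand-ins (NOT the paper's models): a linear victim
# classifier, and a one-pass rewrite acting as the "malicious decoder".
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))  # assumed victim weights (10 classes)

def classify(x):
    """Victim model: predicted class of a flattened 28x28 input."""
    return int(np.argmax(W @ x))

def adversarial_decode(x, eps=0.3):
    """Rewrite an intercepted input in one forward pass.

    The paper trains a VAE decoder to perform such a mapping; here we
    simply nudge the input toward the victim's runner-up class, which
    requires no per-example queries at attack time."""
    scores = W @ x
    order = np.argsort(scores)
    top, runner_up = order[-1], order[-2]
    d = W[runner_up] - W[top]            # direction that closes the gap
    return np.clip(x + eps * d / np.linalg.norm(d), 0.0, 1.0)

# MitM pipeline: intercept, rewrite on the fly, forward to the victim.
x = rng.uniform(0.0, 1.0, size=784)      # benign input in transit
x_adv = adversarial_decode(x)

# The rewrite shrinks the victim's margin between its original top
# prediction and the runner-up class.
scores, scores_adv = W @ x, W @ x_adv
top, runner_up = np.argsort(scores)[-1], np.argsort(scores)[-2]
gap_before = scores[top] - scores[runner_up]
gap_after = scores_adv[top] - scores_adv[runner_up]
```

The key property the paper exploits is visible even in this toy version: once the generator (here, `adversarial_decode`) exists, each new intercepted example costs only a single forward pass, with no per-example queries to the victim, which is what makes the attack orders of magnitude faster than query-optimisation methods.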

