Article

Game Theoretical Adversarial Deep Learning With Variational Adversaries

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2020.2972320

Keywords

Games; Nash equilibrium; Training; Computational modeling; Optimization; Neural networks; Perturbation methods; Adversarial learning; variational autoencoders; convolutional neural networks; game theory


In this research, a game theoretical learning model is proposed to find optimal adversarial manipulations that mislead a Convolutional Neural Network (CNN). The optimization procedure solves for the Nash equilibrium of a Stackelberg game between a variational adversary and the CNN classifier, whose payoff function is evaluated by misclassification error.
A critical challenge in machine learning is the vulnerability of learning models to attacks from malicious adversaries. In this research, we propose game theoretical learning between a variational adversary and a Convolutional Neural Network (CNN), participating in a variable-sum two-player sequential Stackelberg game. Our adversary manipulates the input data distribution to make the CNN misclassify the manipulated data. Our ideal adversarial manipulation is a minimum change to the data that is nonetheless large enough to mislead the CNNs. We propose an optimization procedure that finds optimal adversarial manipulations by solving for the Nash equilibrium of the Stackelberg game. Specifically, the adversary's payoff function depends on the data manipulation, which is determined by a Variational Autoencoder, while the CNN classifier's payoff function is evaluated by misclassification error. The optimization of our adversarial manipulations is carried out by Alternating Least Squares and Simulated Annealing. Experimental results demonstrate that our game-theoretic manipulations are able to mislead CNNs that are well trained on the original data as well as on data generated by other models. We then let the CNNs incorporate our manipulated data, which leads to secure classifiers that are empirically the most robust in defending against various types of adversarial attacks.
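The abstract's alternating optimization can be illustrated with a toy sketch: a linear classifier fit by least squares stands in for the CNN (leader), and Gaussian-noise proposals stand in for the VAE-parameterized manipulation, with the adversary's move (follower) found by simulated annealing. All names, payoffs, and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 2-D with labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

def fit_classifier(X, y):
    """Leader's best response: least-squares fit of a linear classifier."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def error(w, X, y):
    """Misclassification rate of the sign classifier."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean(np.sign(Xb @ w) != y))

def adversary_payoff(w, delta, X, y, lam=1.0):
    # High misclassification, penalized by the size of the manipulation
    # (the "minimum change that still misleads" trade-off).
    return error(w, X + delta, y) - lam * np.mean(delta ** 2)

def anneal_adversary(w, X, y, steps=200, T0=1.0):
    """Follower's move: simulated annealing over the perturbation delta."""
    delta = np.zeros_like(X)
    payoff = adversary_payoff(w, delta, X, y)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3          # cooling schedule
        cand = delta + rng.normal(0, 0.1, X.shape)
        p = adversary_payoff(w, cand, X, y)
        # Accept improvements always; accept worse moves with Boltzmann prob.
        if p > payoff or rng.random() < np.exp((p - payoff) / T):
            delta, payoff = cand, p
    return delta

# Alternating best responses approximate the Stackelberg equilibrium:
# the adversary attacks, then the defender retrains on clean + manipulated data.
w = fit_classifier(X, y)
for _ in range(5):
    delta = anneal_adversary(w, X, y)
    X_aug = np.vstack([X, X + delta])
    y_aug = np.concatenate([y, y])
    w = fit_classifier(X_aug, y_aug)

print(error(w, X + delta, y))  # robust classifier's error on attacked data
```

The retraining step mirrors the paper's final move of incorporating manipulated data into the classifier, which is what yields the empirically robust "secure" classifier.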

Authors

