Article

CGN: Class gradient network for the construction of adversarial samples

Journal

INFORMATION SCIENCES
Volume 654, Issue -, Pages -

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.119855

Keywords

Adversarial samples; Class gradient matrix; Generator; Transferability

This paper proposes a method based on class gradient networks for generating high-quality adversarial samples. By introducing a high-level class gradient matrix and combining classification loss and perturbation loss, the method demonstrates superiority in the transferability of adversarial samples on targeted attacks.
Deep neural networks (DNNs) have tremendously succeeded in several computer vision-related fields. Nevertheless, previous research demonstrates that DNNs are vulnerable to adversarial sample attacks. Attackers add carefully designed perturbation noise to clean samples to form adversarial samples, which may lead to errors in the DNNs' predictions. Consequently, the safety of deep learning has attracted much attention, and researchers have commenced exploring adversarial samples from different perspectives. In this paper, a method based on class gradient networks (CGN) is proposed, which can generate high-quality adversarial samples by designing multiple objective functions. Specifically, the adversarial sample's high-level features are guided to change by introducing a high-level class gradient matrix, and the classification loss and perturbation loss are combined to jointly train a generator to fit the distribution of adversarial noises. We conducted experiments on two standard datasets, Fashion-MNIST and CIFAR-10. The results demonstrate the superiority of our method in the transferability of adversarial samples on targeted attacks and indicate the approach outperforms the baseline method.
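The abstract describes training a generator with a combined objective: a classification loss that pushes the target model's prediction toward the attack's target class, plus a perturbation loss that keeps the added noise small. The following is a minimal NumPy sketch of that combined objective only; the generator `G`, the toy linear "target model" `W`, the weighting factor `lam`, and the perturbation bound are all illustrative assumptions, not the paper's actual CGN architecture or class gradient matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy "target model": a fixed linear classifier (3 classes, 8 features).
# Stands in for the attacked DNN; purely illustrative.
W = rng.normal(size=(3, 8))

# Toy "generator": a linear map from the clean sample to perturbation noise.
G = rng.normal(scale=0.1, size=(8, 8))

def combined_loss(x, target_class, lam=10.0):
    """Combined objective: targeted classification loss + lam * perturbation loss."""
    delta = 0.1 * np.tanh(G @ x)          # bounded perturbation from the generator
    x_adv = x + delta                     # adversarial sample
    probs = softmax(W @ x_adv)
    cls_loss = -np.log(probs[target_class] + 1e-12)  # targeted cross-entropy
    pert_loss = np.sum(delta ** 2)                   # L2 perturbation penalty
    return cls_loss + lam * pert_loss

x = rng.normal(size=8)                    # a "clean sample"
loss = combined_loss(x, target_class=1)
print(loss)
```

In the paper this scalar would be minimized over the generator's parameters (here, `G`) so that the generator learns to fit the distribution of adversarial noise; the weight `lam` trades attack success against perturbation visibility.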
