Article

Revisiting model's uncertainty and confidences for adversarial example detection

Journal

APPLIED INTELLIGENCE
Volume 53, Issue 1, Pages 509-531

Publisher

SPRINGER
DOI: 10.1007/s10489-022-03373-y

Keywords

Adversarial examples; Adversarial attacks; Adversarial example detection; Deep learning robustness

Abstract

Deep Neural Networks (DNNs) used in security-sensitive applications are vulnerable to adversarial examples, and existing defense and detection techniques have had limited success against the full range of attacks. This study proposes SFAD, a novel unsupervised ensemble adversarial example (AE) detection mechanism that exploits model uncertainty and processes model layer outputs to improve detection performance.
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs). AEs are imperceptible to humans but cause DNNs to misclassify their inputs. Many defense and detection techniques have been proposed; model confidences and Dropout, a popular way to estimate model uncertainty, have been used for AE detection, but they have shown limited success against black- and gray-box attacks. Moreover, state-of-the-art detection techniques are designed for specific attacks or are broken by others, require knowledge of the attack, are inconsistent, add model-parameter overhead, are time-consuming, or introduce latency at inference time. To balance these factors, we revisit model uncertainty and confidences and propose a novel unsupervised ensemble AE detection mechanism, called SFAD, that (1) uses the SelectiveNet uncertainty method and (2) processes model layer outputs, i.e., feature maps, to generate new confidence probabilities. Experimental results show that the proposed approach outperforms state-of-the-art methods against black- and gray-box attacks and achieves comparable performance against white-box attacks. Moreover, SFAD is fully robust against High Confidence Attacks (HCAs) on MNIST and partially robust on CIFAR-10.
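The abstract's core idea is to combine an uncertainty estimate with confidence probabilities derived from intermediate feature maps. The sketch below is a minimal, hypothetical PyTorch illustration of that general principle, not the paper's actual SFAD pipeline: auxiliary classifier heads (plain linear layers invented for this example) are attached to intermediate layers, and an input is flagged as a suspected AE whenever any head's softmax confidence drops below a threshold. The class name, head design, and threshold value are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class FeatureConfidenceDetector(nn.Module):
    """Hypothetical sketch: flag an input as adversarial when any
    auxiliary classifier attached to an intermediate feature map is
    under-confident. (SFAD's real pipeline additionally uses
    SelectiveNet-style selective prediction.)"""

    def __init__(self, backbone_layers, aux_heads, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(backbone_layers)
        self.heads = nn.ModuleList(aux_heads)  # one small head per tapped layer
        self.threshold = threshold             # assumed value, would be tuned

    def forward(self, x):
        confidences = []
        for layer, head in zip(self.layers, self.heads):
            x = layer(x)                                   # intermediate feature map
            probs = torch.softmax(head(x.flatten(1)), dim=1)
            confidences.append(probs.max(dim=1).values)    # per-sample max confidence
        # Reject if ANY tapped layer is under-confident (ensemble minimum).
        conf = torch.stack(confidences).min(dim=0).values
        return conf < self.threshold  # True = suspected adversarial example

# Usage on an MNIST-shaped batch (architecture invented for illustration):
layers = [
    nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
]
heads = [nn.Linear(8 * 14 * 14, 10), nn.Linear(16 * 7 * 7, 10)]
detector = FeatureConfidenceDetector(layers, heads, threshold=0.9)
flags = detector(torch.randn(4, 1, 28, 28))  # boolean tensor, one flag per input
```

Taking the minimum confidence across tapped layers is one simple way to ensemble the per-layer signals; the paper's detector combines its uncertainty and confidence components differently, so this should be read only as a conceptual illustration.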
