Article

Evaluation of adversarial attacks sensitivity of classifiers with occluded input data

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 34, Issue 20, Pages 17615-17632

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-022-07387-y

Keywords

Adversarial attacks; Adversarial learning; Adversarial robustness

Funding

  1. National Science Foundation [CHE-1905043, CNS-2136961]


Abstract

The noteworthy achievements of deep learning models have enabled transformative applications that aim to reduce costs and improve human quality of life. Nevertheless, recent work testing classifiers' ability to withstand targeted and black-box adversarial attacks has demonstrated that deep learning models in particular are brittle and lack robustness, which ultimately leads to a lack of trust. This raises the question of how sensitive certain regions of a classification model's input space are to adversarial perturbations. This paper studies that problem through a Sensitivity-inspired Constrained Evaluation Method (SICEM), which deterministically evaluates how vulnerable a region of the input space is to adversarial perturbations compared to other regions and to the entire input space. Our experiments suggest that SICEM can accurately quantify region vulnerabilities on the MNIST and CIFAR-10 datasets.
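This record does not describe the SICEM algorithm itself, so the sketch below is only a minimal illustration of the general idea of region-wise sensitivity scoring, assuming a gradient-based proxy: a masked input region is rated by the average gradient magnitude of the classification loss with respect to the pixels in that region, relative to the whole image. The model, mask, and scoring function here are illustrative assumptions, not the paper's method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def region_sensitivity(model, x, y, region_mask):
    # Score a region by the mean |dLoss/dx| inside it, normalized by the
    # mean over the entire input; values above 1 suggest the region is
    # more sensitive than average. This proxy is an assumption, not SICEM.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    g = grad.abs()
    return (g[region_mask.bool()].mean() / g.mean()).item()

# Toy usage on MNIST-shaped inputs with a placeholder linear classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)              # fake image batch
y = torch.randint(0, 10, (8,))            # fake labels
mask = torch.zeros_like(x)
mask[..., 10:18, 10:18] = 1               # central 8x8 patch as the region
print(region_sensitivity(model, x, y, mask))

Under these assumptions, a score above 1 would flag the region as more sensitive than the input on average; the paper's actual constrained evaluation is given in the full text.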
