4.6 Article

Evaluation of adversarial attacks sensitivity of classifiers with occluded input data

Journal

Neural Computing & Applications
Volume 34, Issue 20, Pages 17615-17632

Publisher

Springer London Ltd
DOI: 10.1007/s00521-022-07387-y

Keywords

Adversarial attacks; Adversarial learning; Adversarial robustness

Funding

  1. National Science Foundation [CHE-1905043, CNS-2136961]

Abstract

The noteworthy achievements of deep learning models have enabled transformative applications aimed at reducing costs and improving human quality of life. Nevertheless, recent work on testing a classifier's ability to withstand targeted and black-box adversarial attacks has demonstrated that deep learning models in particular are brittle and lack robustness, which ultimately leads to a lack of trust. In this setting, a natural question arises: how sensitive are particular regions of a classification model's input space to adversarial perturbations? This paper studies this problem through a Sensitivity-inspired Constrained Evaluation Method (SICEM), which deterministically evaluates how vulnerable a region of the input space is to adversarial perturbations relative to other regions and to the input space as a whole. Our experiments suggest that SICEM can accurately quantify region vulnerabilities on the MNIST and CIFAR-10 datasets.
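The abstract does not reproduce SICEM's formulation, but the general idea of scoring one region's adversarial vulnerability against the whole input can be illustrated with a simple gradient-sensitivity proxy. The sketch below is a hypothetical illustration, not the authors' method: the `region_sensitivity` function, the toy linear classifier, and the 8x8 patch mask (which could stand in for an occluded region) are all assumptions made for the example. It simply ratios the mean absolute input gradient inside a masked region against the mean over the full input.

```python
# Hypothetical sketch of a gradient-based region-sensitivity score.
# NOT the paper's SICEM implementation; it only illustrates comparing a
# region's adversarial sensitivity against the entire input space, using
# the input gradient of the loss as a proxy for perturbation sensitivity.
import torch
import torch.nn as nn
import torch.nn.functional as F


def region_sensitivity(model: nn.Module,
                       x: torch.Tensor,
                       y: torch.Tensor,
                       region_mask: torch.Tensor) -> float:
    """Ratio of mean |dLoss/dx| inside `region_mask` to the mean over the
    whole input. Values > 1 suggest the region is more vulnerable to
    adversarial perturbations than the input on average."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)   # gradient of loss w.r.t. input
    sens = grad.abs()
    return (sens[region_mask.bool()].mean() / sens.mean()).item()


if __name__ == "__main__":
    # Toy example: an untrained linear classifier on a random MNIST-shaped
    # input, with a central 8x8 patch as the region of interest.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])
    mask = torch.zeros_like(x)
    mask[..., 10:18, 10:18] = 1.0
    print(f"relative sensitivity: {region_sensitivity(model, x, y, mask):.3f}")
```

In practice, first-order gradient magnitude is only one possible proxy; a method like SICEM may constrain or aggregate sensitivity quite differently, so this sketch should be read as a conceptual aid rather than a reference implementation.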
