Article

Adversarial vulnerability bounds for Gaussian process classification

Journal

MACHINE LEARNING
Volume 112, Issue 3, Pages 971-1009

Publisher

SPRINGER
DOI: 10.1007/s10994-022-06224-6

Keywords

Machine learning; Gaussian process; Adversarial example; Bound; Classification; Gaussian process classification

Abstract

Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is an attacker perturbing a confidently classified input to produce a confident misclassification. In this paper we consider the L0 attack, in which the attacker can perturb a small number of input features at test time. To quantify the risk of this form of attack, we devise a formal guarantee in the form of an adversarial bound (AB) for a binary Gaussian process classifier using the EQ kernel. This bound holds over the entire input domain, bounding the potential of any future adversarial attack to cause a confident misclassification. We explore how to extend the bound to other kernels and investigate how to maximise the bound by altering the classifier (for example, by using sparse approximations). We test the bound on a variety of datasets and show that it produces relevant and practical bounds for many of them.
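The abstract refers to a binary Gaussian process classifier with the EQ (squared-exponential / RBF) kernel under an L0 attack, where only a small number of input features may be perturbed at test time. The sketch below is not the paper's adversarial bound; it is a minimal illustration of that setting using scikit-learn's GaussianProcessClassifier, showing empirically how perturbing a single feature of a confidently classified point can shift the predictive probability. The dataset, length-scale, and perturbation size are illustrative assumptions.

```python
# Minimal sketch (not the paper's bound): a binary Gaussian process classifier
# with an EQ/RBF kernel, plus an empirical check of how an L0-style perturbation
# (changing a single input feature) shifts the predictive probability.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # illustrative binary dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# EQ (a.k.a. RBF / squared-exponential) kernel; the length-scale here is an
# illustrative initial value, refined by marginal-likelihood optimisation.
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=10.0), random_state=0)
gpc.fit(X_train, y_train)

# Pick a confidently classified test point and perturb one feature (L0 budget = 1).
probs = gpc.predict_proba(X_test)[:, 1]
idx = int(np.argmax(np.abs(probs - 0.5)))      # most confident prediction
x_adv = X_test[idx].copy()
x_adv[0] += 5.0 * X_train[:, 0].std()          # perturb a single coordinate

print("clean probability:    ", probs[idx])
print("perturbed probability:", gpc.predict_proba(x_adv[None, :])[0, 1])
```

A formal bound of the kind described in the abstract would guarantee, over the whole input domain, how much such single-coordinate perturbations can move a confident prediction; the script above only probes one point empirically.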
