Article

Proving Data-Poisoning Robustness in Decision Trees

Journal

COMMUNICATIONS OF THE ACM
Volume 66, Issue 2, Pages 105-113

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3576894

Keywords

-

Abstract

Machine learning models are brittle: small changes in the training data can result in different predictions. We study the problem of proving that a prediction is robust to data poisoning, where an attacker can inject a number of malicious elements into the training set to influence the learned model. We target decision tree models, a popular and simple class of machine learning models that underlies many complex learning techniques. We present a sound verification technique based on abstract interpretation and implement it in a tool called Antidote. Antidote abstractly trains decision trees over an intractably large space of possible poisoned datasets. Due to the soundness of our abstraction, Antidote can produce proofs that, for a given input, the corresponding prediction would not have changed even if the training set had been tampered with. We demonstrate the effectiveness of Antidote on a number of popular datasets.
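To make the verified property concrete, here is a minimal sketch of what such a proof establishes, for the simplest possible model: a depth-1 decision tree (a stump) and a poisoning model in which the attacker may have injected up to k of the observed training points, so robustness means that deleting any subset of at most k points and retraining leaves the prediction on the given input unchanged. The sketch checks this by brute-force enumeration, which is exactly the intractability Antidote sidesteps with its abstraction; it is illustrative Python, not Antidote's algorithm, and the function names (train_stump, robust_to_poisoning) are hypothetical.

```python
from collections import Counter
from itertools import combinations


def majority_label(rows):
    """Most common label among (features, label) pairs."""
    return Counter(label for _, label in rows).most_common(1)[0][0]


def train_stump(rows):
    """Train a depth-1 decision tree (a stump) by scoring every
    single-feature threshold split and keeping the one with the
    fewest misclassifications on the training set."""
    best, best_err = None, float("inf")
    for f in range(len(rows[0][0])):
        for xs, _ in rows:
            t = xs[f]
            left = [r for r in rows if r[0][f] <= t]
            right = [r for r in rows if r[0][f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            ll, rl = majority_label(left), majority_label(right)
            err = sum(lab != (ll if x[f] <= t else rl) for x, lab in rows)
            if err < best_err:
                best_err, best = err, (f, t, ll, rl)
    if best is None:  # no usable split: predict the majority class
        lab = majority_label(rows)
        return lambda x: lab
    f, t, ll, rl = best
    return lambda x: ll if x[f] <= t else rl


def robust_to_poisoning(rows, x, k):
    """True iff the stump's prediction on x is unchanged for every
    training set obtained by deleting at most k points, i.e. for every
    way the attacker's injected points could be backed out. The cost is
    a sum of C(n, m) retrainings for m = 1..k; Antidote avoids this
    enumeration by training once over an abstraction of all such datasets."""
    baseline = train_stump(rows)(x)
    for m in range(1, k + 1):
        for dropped in combinations(range(len(rows)), m):
            drop = set(dropped)
            kept = [r for i, r in enumerate(rows) if i not in drop]
            if train_stump(kept)(x) != baseline:
                return False  # some plausible clean dataset disagrees
    return True


if __name__ == "__main__":
    # Toy dataset: one numeric feature, labels A below, B above.
    rows = [((0.0,), "A"), ((1.0,), "A"), ((2.0,), "B"), ((3.0,), "B")]
    print(robust_to_poisoning(rows, (1.5,), k=1))  # True: no single deletion flips it
    print(robust_to_poisoning(rows, (0.5,), k=1))  # False: dropping the point at 1.0 flips it to "B"
```

The toy run shows why the property is input-specific: the prediction at (1.5,) survives every single-point deletion, while the prediction at (0.5,) does not. Antidote answers the same question for realistic datasets, deeper trees, and larger k, where this enumeration is hopeless.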
