4.6 Review

An Adversarial Perspective on Accuracy, Robustness, Fairness, and Privacy: Multilateral-Tradeoffs in Trustworthy ML

Journal

IEEE ACCESS
Volume 10, Pages 120850-120865

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3218715

Keywords

Security; adversarial robustness; privacy; fairness; machine learning; causal models; causal representation; trustworthy machine learning; causal machine learning

Model accuracy is a traditional metric in machine learning, but privacy, fairness, and robustness are becoming increasingly important. Trustworthy ML requires balancing these four aspects, and the defenses introduced for one aspect may trade off against the others. This paper reviews the state of the art in Trustworthy ML research and advocates the use of causal modeling as a unified approach to Trustworthy ML.
Model accuracy is the traditional metric employed in machine learning (ML) applications. However, privacy, fairness, and robustness guarantees are crucial as ML algorithms increasingly pervade our lives and play central roles in socially important systems. These four desiderata constitute the pillars of Trustworthy ML (TML) and may mutually inhibit or reinforce each other. It is therefore necessary to understand and clearly delineate the trade-offs among these desiderata in the presence of adversarial attacks. However, the threat models for the desiderata differ, and the defenses introduced for each lead to further trade-offs in a multilateral adversarial setting (i.e., a setting in which several pillars are attacked simultaneously). The first half of the paper reviews the state of the art in TML research, articulates known multilateral trade-offs, and identifies open problems and challenges in the presence of an adversary that may exploit such multilateral trade-offs. The fundamental shortcomings of statistical association-based TML are discussed to motivate the use of causal methods for TML. The second half of the paper, in turn, advocates the use of causal modeling in TML. Evidence is collected from across the literature that causal ML is well suited to providing a unified approach to TML. Causal discovery and causal representation learning are introduced as essential stages of causal modeling, and a new threat model for causal ML is introduced to quantify the vulnerabilities that the use of causal methods itself brings. The paper concludes with pointers to possible next steps in the development of a causal TML pipeline.
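To make the abstract's point about the shortcomings of association-based methods concrete, below is a minimal, hypothetical sketch (not taken from the paper) of the kind of confounding that motivates causal modeling: a latent variable Z drives both a feature X and an outcome Y, so X and Y are strongly correlated under passive observation, yet an intervention do(X = x) leaves Y unchanged. All variable names and coefficients here are illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): association vs. intervention
# in a simple confounded structural causal model Z -> X, Z -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: the confounder Z drives both X and Y.
Z = rng.normal(size=n)
X = 2.0 * Z + rng.normal(scale=0.1, size=n)
Y = 3.0 * Z + rng.normal(scale=0.1, size=n)
print("Observational corr(X, Y):", np.corrcoef(X, Y)[0, 1])  # close to 1

# Interventional world: do(X = 1) severs the Z -> X edge, but Y's structural
# equation does not involve X, so the intervention leaves Y unchanged.
X_do = np.full(n, 1.0)                            # X is forced to a constant
Y_do = 3.0 * Z + rng.normal(scale=0.1, size=n)    # Y's mechanism is unchanged
print("Mean of Y under do(X=1):", Y_do.mean())    # ~0, independent of X_do
```

A purely associational model fit on the observational data would treat X as highly predictive of Y, which is exactly the failure mode that the causal perspective advocated in the paper is meant to expose.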
