4.6 Review

An Adversarial Perspective on Accuracy, Robustness, Fairness, and Privacy: Multilateral-Tradeoffs in Trustworthy ML

Related references

Note: only a subset of the references is listed here; see the original article for the complete bibliography.
Article Computer Science, Artificial Intelligence

Balancing Learning Model Privacy, Fairness, and Accuracy With Early Stopping Criteria

Tao Zhang et al.

Summary: As deep learning models mature, finding the right tradeoff between accuracy, fairness, and privacy becomes critical, since privacy and fairness constraints can both reduce model accuracy. By training deep neural networks with differentially private stochastic gradient descent (DP-SGD), privacy and fairness can be managed indirectly, and the number of training epochs plays a central role in striking the balance. Based on this observation, two early stopping criteria are designed to help analysts achieve their desired tradeoff.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)
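The core mechanism discussed above can be made concrete with a minimal sketch of DP-SGD for logistic regression. This is an illustration only: the clip norm, noise multiplier, toy data, and fixed epoch count are illustrative choices, not the criteria proposed in the paper.

```python
# Minimal DP-SGD sketch (NumPy only): clip each per-example gradient,
# average, add Gaussian noise. Each epoch consumes additional privacy
# budget, which is why the number of epochs mediates the
# accuracy/fairness/privacy tradeoff that early stopping targets.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD step for logistic regression with clip norm C."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X          # shape (n, d)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)
    noisy_mean = clipped.mean(axis=0) + rng.normal(
        0.0, sigma * C / len(X), size=w.shape)
    return w - lr * noisy_mean

# Toy data: two Gaussian blobs, one per class.
n, d = 200, 2
X = np.vstack([rng.normal(-1, 1, (n // 2, d)),
               rng.normal(1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2), dtype=float)

w = np.zeros(d)
for epoch in range(50):          # an early-stopping rule would cut this short
    w = dp_sgd_step(w, X, y)

acc = (((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == y).mean()
```

Stopping the loop earlier spends less privacy budget at some cost in accuracy, which is the tradeoff the paper's stopping criteria are designed to navigate.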

Article Multidisciplinary Sciences

Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model

Eliane Roosli et al.

Summary: As artificial intelligence continues to improve the quality of care for some patients, there is a risk of reinforcing health disparities faced by minority populations. This study highlights the need for empirical evaluation studies and the use of fairness and performance assessment frameworks to address bias and fairness concerns in risk prediction models.

SCIENTIFIC DATA (2022)

Article Computer Science, Artificial Intelligence

A survey on datasets for fairness-aware machine learning

Tai Le Quy et al.

Summary: As decision-making increasingly relies on machine learning and big data, fairness in data-driven artificial intelligence systems is receiving more attention. This paper provides an overview of real-world datasets used for fairness-aware machine learning and analyzes the relationships between different attributes, particularly those related to fairness.

WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY (2022)

Review Computer Science, Information Systems

The Causal Fairness Field Guide: Perspectives From Social and Formal Sciences

Alycia N. Carey et al.

Summary: This article surveys methods for measuring the causal fairness of machine learning models, highlighting the lack of literature connecting causality-based fairness notions with social sciences such as philosophy, sociology, and law. The authors aim to bridge this gap by examining how both the social and formal sciences treat causality-based fairness notions, providing a deeper understanding of how these notions align with important humanistic values. The article also explores sociological and technical criticisms of current approaches to causality-based fair machine learning, with the ultimate aim of improving methods and metrics to better serve oppressed and marginalized populations.

FRONTIERS IN BIG DATA (2022)

Article Engineering, Electrical & Electronic

Toward Causal Representation Learning

Bernhard Schoelkopf et al.

Summary: The fields of machine learning and graphical causality have started to influence each other and show interest in benefiting from each other's advancements. Understanding fundamental concepts of causal inference, and relating them to key issues in machine learning, can help enhance modern machine learning research. A central problem in the intersection of AI and causality is the learning of causal representations, which involves discovering high-level causal variables from low-level observations.

PROCEEDINGS OF THE IEEE (2021)

Article Computer Science, Theory & Methods

A Survey on Bias and Fairness in Machine Learning

Ninareh Mehrabi et al.

Summary: With the widespread use of AI systems in everyday life, fairness in design has become crucial. Researchers have developed methods to address biases in different subdomains and established a taxonomy of fairness definitions. Existing work documents biases in deployed AI applications, and researchers are developing solutions to mitigate these problems.

ACM COMPUTING SURVEYS (2021)

Proceedings Paper Computer Science, Information Systems

DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning

Md Tamjid Hossain et al.

Summary: The paper analyzes the adversarial learning process in a federated learning (FL) setting, illustrating how differential noise can be exploited to conduct stealthy and persistent model poisoning attacks. An empirical analysis demonstrates the attack's effectiveness, and a novel reinforcement-learning-based defense strategy is proposed to counter such poisoning attacks.

2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021) (2021)

Proceedings Paper Computer Science, Information Systems

On the Privacy Risks of Algorithmic Fairness

Hongyan Chang et al.

Summary: Algorithmic fairness and privacy are crucial pillars of trustworthy machine learning, but pursuing fairness may lead to privacy risks. Research shows that achieving fairness comes at the cost of privacy, especially for unprivileged subgroups.

2021 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Analysis and Applications of Class-wise Robustness in Adversarial Training

Qi Tian et al.

Summary: This paper analyzes the class-wise robustness in adversarial training, revealing significant differences in robustness among classes. A new attack method called Temperature-PGD attack is proposed to increase attack effectiveness, and modifications in training and inference phases are made to improve the robustness of the most vulnerable class and reduce the disparities in class-wise robustness.

KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING (2021)
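Since the entry above builds on PGD, a minimal sketch of a plain L-infinity PGD attack on a linear classifier may help; Temperature-PGD, as proposed in the paper, additionally rescales logits by a temperature, which is not reproduced here. Model, data, and step sizes below are illustrative.

```python
# Plain L-infinity PGD: repeatedly step in the sign of the loss gradient
# w.r.t. the input, projecting back into an eps-ball around the original x.
import numpy as np

def pgd_attack(x, y, w, b, eps=1.5, alpha=0.1, steps=20):
    """Maximize logistic loss within an L-inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        logit = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-logit))
        grad = (p - y) * w                        # d(loss)/dx, logistic loss
        x_adv = x_adv + alpha * np.sign(grad)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([1.0, 1.0]), 1.0      # correctly classified: logit = 2 > 0
x_adv = pgd_attack(x, y, w, b)        # perturbed point crosses the boundary
```

Class-wise robustness analyses like the one above then compare how often such attacks succeed per class, rather than averaging over the whole test set.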

Article Social Sciences, Mathematical Methods

Fairness in Criminal Justice Risk Assessments: The State of the Art

Richard Berk et al.

Summary: This article clarifies the trade-offs between accuracy and fairness in criminal justice risk assessments, highlighting at least six kinds of fairness which may be incompatible with each other and with accuracy. The differences in base rates across legally protected groups present a major complication in practice, requiring consideration of challenging trade-offs.

SOCIOLOGICAL METHODS & RESEARCH (2021)

Article Computer Science, Theory & Methods

Adversarial data poisoning attacks against the PC learning algorithm

Emad Alsuwat et al.

INTERNATIONAL JOURNAL OF GENERAL SYSTEMS (2020)

Review Computer Science, Artificial Intelligence

A Survey on Differentially Private Machine Learning [Review Article]

Maoguo Gong et al.

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE (2020)

Proceedings Paper Computer Science, Information Systems

Adversarial Classification Under Differential Privacy

Jairo Giraldo et al.

27TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2020) (2020)

Article Computer Science, Information Systems

Cybersecurity in the Era of Data Science: Examining New Adversarial Models

Bulent Yener et al.

IEEE SECURITY & PRIVACY (2019)

Proceedings Paper Computer Science, Theory & Methods

A comparative study of fairness-enhancing interventions in machine learning

Sorelle A. Friedler et al.

FAT*'19: PROCEEDINGS OF THE 2019 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY (2019)

Proceedings Paper Computer Science, Information Systems

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Liwei Song et al.

PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19) (2019)

Proceedings Paper Computer Science, Information Systems

IARPA Janus Benchmark - C: Face Dataset and Protocol

Brianna Maze et al.

2018 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB) (2018)

Proceedings Paper Computer Science, Software Engineering

Fairness Definitions Explained

Sahil Verma et al.

2018 IEEE/ACM INTERNATIONAL WORKSHOP ON SOFTWARE FAIRNESS (FAIRWARE 2018) (2018)

Proceedings Paper Computer Science, Theory & Methods

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

Siyue Wang et al.

2018 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD) DIGEST OF TECHNICAL PAPERS (2018)

Proceedings Paper Computer Science, Theory & Methods

Differential Privacy Preserving Causal Graph Discovery

Depeng Xu et al.

2017 1ST IEEE SYMPOSIUM ON PRIVACY-AWARE COMPUTING (PAC) (2017)

Proceedings Paper Computer Science, Theory & Methods

On Subnormal Floating Point and Abnormal Timing

Marc Andrysco et al.

2015 IEEE SYMPOSIUM ON SECURITY AND PRIVACY SP 2015 (2015)

Proceedings Paper Computer Science, Theory & Methods

Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds

Raef Bassily et al.

2014 55TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS 2014) (2014)

Article Computer Science, Theory & Methods

The Algorithmic Foundations of Differential Privacy

Cynthia Dwork et al.

FOUNDATIONS AND TRENDS IN THEORETICAL COMPUTER SCIENCE (2013)
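The monograph above introduces the Laplace mechanism, the canonical construction for epsilon-differential privacy; a minimal sketch follows. The counting-query example and parameter values are our own illustration.

```python
# Laplace mechanism (Dwork & Roth): to release f(D) with epsilon-DP,
# add Laplace noise with scale sensitivity/epsilon, where sensitivity
# bounds |f(D) - f(D')| over datasets differing in one record.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value + Lap(sensitivity / epsilon)."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# A counting query ("how many records satisfy a predicate?") has
# sensitivity 1: adding or removing one record changes the count by <= 1.
data = np.array([23, 45, 31, 62, 18])
true_count = int((data > 30).sum())                       # 3
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=1.0)
```

Smaller epsilon gives stronger privacy at the cost of noisier answers, the same budget-versus-utility tension that runs through the tradeoff literature surveyed in this section.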

Article Computer Science, Artificial Intelligence

Robustness and generalization

Huan Xu et al.

MACHINE LEARNING (2012)