Article

Privacy, accuracy, and model fairness trade-offs in federated learning

Journal

COMPUTERS & SECURITY
Volume 122, Article 102907

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2022.102907

Keywords

Federated learning; Differential privacy; Discrimination; Fairness; Privacy preservation; Machine learning

Funding

  1. National Natural Science Foundation of China [61972366]
  2. Cloud Technology Endowed Professorship


This paper introduces a federated learning training model that balances privacy, accuracy, and model fairness using differential privacy (DP). It analyzes the fairness and privacy effects of local DP and global DP in federated learning and proposes a fairness and privacy quantification mechanism. Experiments on three real-world datasets demonstrate the positive effect of DP on fairness.
Abstract

As applications of machine learning become increasingly widespread, the need to ensure model accuracy and fairness while protecting the privacy of user data becomes more pronounced. On this note, this paper introduces a federated learning training model, which allows clients to simultaneously learn their models and update the associated parameters on a centralized server. In our approach, we seek to achieve an acceptable trade-off between privacy, accuracy, and model fairness by using differential privacy (DP), which also helps to minimize privacy risks by protecting the presence of a specific sample in the training data. Machine learning models can, however, exhibit unintended behaviors, such as unfairness, which result in groups with certain sensitive characteristics (e.g., gender) receiving different patterns of outcomes. Hence, we discuss the fairness and privacy effects of local DP and global DP when applied to federated learning by designing a fairness and privacy quantification mechanism. In doing so, we can achieve an acceptable trade-off between accuracy, privacy, and model fairness. We quantify the level of fairness based on the constraints of three definitions of fairness, including demographic parity, equal odds, and equality of opportunity. Finally, findings from our extensive experiments conducted on three real-world datasets with class imbalance demonstrate the positive effect of local and global DP on fairness. Our study also shows that privacy can come at the cost of fairness, as stricter privacy can intensify discrimination. Hence, we posit that careful parameter selection can potentially help achieve a more effective trade-off between utility, bias, and privacy.

© 2022 Elsevier Ltd. All rights reserved.
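The local-versus-global DP distinction that the abstract draws can be made concrete with a short sketch. The snippet below is not the authors' implementation; it assumes the common Gaussian-mechanism setup in which client updates are L2-clipped and perturbed either on each client before transmission (local DP) or once at the server after aggregation (global DP). The noise calibration shown is illustrative, not a tuned privacy accountant.

```python
import numpy as np

def clip(update, max_norm):
    """Bound the L2 norm of an update so its sensitivity is known."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / max(norm, 1e-12))

def local_dp_aggregate(client_updates, max_norm, sigma, rng):
    """Local DP: each client perturbs its own clipped update before it
    ever leaves the device; the server only averages noisy updates."""
    noisy = [clip(u, max_norm) + rng.normal(0.0, sigma * max_norm, u.shape)
             for u in client_updates]
    return np.mean(noisy, axis=0)

def global_dp_aggregate(client_updates, max_norm, sigma, rng):
    """Global (central) DP: a trusted server clips each update, averages,
    and adds a single noise draw to the aggregate."""
    agg = np.mean([clip(u, max_norm) for u in client_updates], axis=0)
    # The mean's sensitivity to any one client is at most max_norm / n.
    return agg + rng.normal(0.0, sigma * max_norm / len(client_updates),
                            agg.shape)

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(5)]
print(local_dp_aggregate(updates, 1.0, 0.5, rng))
print(global_dp_aggregate(updates, 1.0, 0.5, rng))
```

Because n independent client noises survive the average only with standard deviation sigma * max_norm / sqrt(n), while global DP adds noise scaled by max_norm / n, local DP generally costs more accuracy at a comparable nominal noise level, which is the accuracy side of the trade-off the paper studies.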
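The three fairness definitions named in the abstract all reduce to comparing per-group rates of a classifier. The sketch below (function names such as group_rates are hypothetical, not taken from the paper) computes the relevant gaps for binary predictions and a binary sensitive attribute: demographic parity compares positive prediction rates, equality of opportunity compares true positive rates, and equal(ized) odds additionally compares false positive rates.

```python
import numpy as np

def group_rates(y_pred, y_true, group):
    """Per-group rates underlying the three fairness definitions."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        rates[g] = {
            "positive_rate": y_pred[m].mean(),        # demographic parity
            "tpr": y_pred[m & (y_true == 1)].mean(),  # equality of opportunity
            "fpr": y_pred[m & (y_true == 0)].mean(),  # equal odds (with TPR)
        }
    return rates

def fairness_gaps(rates):
    """Absolute between-group gap per criterion; a perfectly fair
    model has gap 0 under the corresponding definition."""
    a, b = list(rates.values())[:2]
    return {k: abs(a[k] - b[k]) for k in a}

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(group_rates(y_pred, y_true, group)))
```

Under such a quantification, the paper's finding that stricter privacy can intensify discrimination would show up as these gaps widening as the DP noise scale grows.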
