Article

Balancing Learning Model Privacy, Fairness, and Accuracy With Early Stopping Criteria

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3129592

Keywords

Training; Privacy; Deep learning; Costs; Analytical models; Stability criteria; Stochastic processes; differential privacy (DP); early stopping criteria; machine learning fairness; stochastic gradient descent

Abstract

As deep learning models mature, one of the most pressing questions we face is: what is the ideal tradeoff between accuracy, fairness, and privacy (AFP)? Unfortunately, both the privacy and the fairness of a model come at the cost of its accuracy. Hence, an efficient and effective means of fine-tuning the balance between this trinity of needs is critical. Motivated by some curious observations in privacy-accuracy tradeoffs with differentially private stochastic gradient descent (DP-SGD), where fair models sometimes result, we conjecture that fairness might be better managed as an indirect byproduct of this process. Hence, we conduct a series of analyses, both theoretical and empirical, on the impacts of implementing DP-SGD in deep neural network models through gradient clipping and noise addition. The results show that, in deep learning, the number of training epochs is central to striking a balance between AFP, because DP-SGD makes the training less stable, providing the possibility of model updates at a low discrimination level without much loss in accuracy. Based on this observation, we design two different early stopping criteria to help analysts choose the optimal epoch at which to stop training a model so as to achieve their ideal tradeoff. Extensive experiments show that our methods can achieve an ideal balance between AFP.
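The DP-SGD mechanism the abstract refers to (clipping each per-example gradient, averaging, then adding Gaussian noise) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the function name, hyperparameter values, and noise calibration are assumptions for demonstration only.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update (hypothetical helper, not the paper's code).

    Each per-example gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are averaged, and Gaussian noise scaled by `noise_multiplier`
    is added before the plain SGD update.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the gradient's norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual noise_multiplier * clip_norm / batch_size scaling.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Both knobs the paper analyzes appear here: `clip_norm` bounds any single example's influence on the update, and `noise_multiplier` controls the injected randomness. The abstract's observation is that this added instability, interacting with the number of training epochs, opens a window where stopping early yields a favorable accuracy-fairness-privacy balance.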
