Article

Achieving fairness with a simple ridge penalty

Journal

STATISTICS AND COMPUTING
Volume 32, Issue 5, Pages: -

Publisher

SPRINGER
DOI: 10.1007/s11222-022-10143-w

Keywords

Linear regression; Logistic regression; Generalised linear models; Fairness; Ridge regression

Funding

  1. SUPSI - University of Applied Sciences and Arts of Southern Switzerland
  2. UBS-IDSIA
  3. EPSRC
  4. MRC Centre for Doctoral Training in Statistical Science, University of Oxford [EP/L016710/1]

Summary

This paper presents a general framework for estimating regression models with a user-defined level of fairness. Fairness is enforced as a model selection step in which a ridge penalty is chosen to control the impact of the sensitive attributes; the model parameters are then estimated conditional on the chosen penalty. The framework is mathematically simple and extends to several classes of models and definitions of fairness. Empirical evaluations show that it achieves better goodness of fit and better predictive accuracy than competing fair models at the same level of fairness.

Abstract
In this paper, we present a general framework for estimating regression models subject to a user-defined level of fairness. We enforce fairness as a model selection step in which we choose the value of a ridge penalty to control the effect of the sensitive attributes. We then estimate the parameters of the model conditional on the chosen penalty value. Our proposal is mathematically simple, with a solution that is partly in closed form, and produces estimates of the regression coefficients that are intuitive to interpret as a function of the level of fairness. Furthermore, it is easily extended to generalised linear models, kernelised regression models and other penalties, and it can accommodate multiple definitions of fairness. We compare our approach with the regression model from Komiyama et al. (Proceedings of the 35th International Conference on Machine Learning (ICML), PMLR 80:2737-2746, 2018), which implements a provably optimal linear regression model, and with the fair models from Zafar et al. (J Mach Learn Res 20:1-42, 2019). We evaluate these approaches empirically on six different data sets, and we find that our proposal provides better goodness of fit and better predictive accuracy for the same level of fairness. In addition, we highlight a source of bias in the original experimental evaluation of Komiyama et al. (2018).
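To make the abstract's idea concrete, here is a minimal sketch in Python, assuming a linear model y = S*alpha + X*beta + noise in which only the sensitive-attribute coefficients alpha are ridge-penalised, and the penalty lambda is chosen by grid search so that the sensitive attributes' share of the explained variance stays below a user-defined bound. The function names, the grid, and the variance-share measure are illustrative assumptions, not the paper's exact estimator or fairness definition.

    import numpy as np

    def fit_fair_ridge(S, X, y, lam):
        """Least squares with a ridge penalty applied only to the
        coefficients of the sensitive attributes S (illustrative sketch)."""
        Z = np.hstack([S, X])
        p_s = S.shape[1]
        penalty = np.zeros(Z.shape[1])
        penalty[:p_s] = lam  # shrink only the sensitive-attribute block
        coef = np.linalg.solve(Z.T @ Z + np.diag(penalty), Z.T @ y)
        return coef[:p_s], coef[p_s:]  # alpha (sensitive), beta (other)

    def sensitive_share(S, X, alpha, beta):
        """Fraction of the fitted values' variance carried by S; a plausible
        stand-in for the paper's fairness measure, not its exact form."""
        fit_s, fit_x = S @ alpha, X @ beta
        total = np.var(fit_s + fit_x)
        return float(np.var(fit_s) / total) if total > 0 else 0.0

    def choose_lambda(S, X, y, fairness_level, grid=None):
        """Smallest lambda on the grid whose fit keeps the sensitive
        attributes' contribution below the user-defined fairness level."""
        grid = np.logspace(-2, 4, 50) if grid is None else grid
        for lam in np.sort(grid):
            alpha, beta = fit_fair_ridge(S, X, y, lam)
            if sensitive_share(S, X, alpha, beta) <= fairness_level:
                return lam, alpha, beta
        return lam, alpha, beta  # fall back to the strongest penalty tried

Note that once lambda is fixed, the fit is a single linear solve, which is consistent with the abstract's remark that the solution is partly in closed form; the model selection step reduces to a one-dimensional search over lambda.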
