Article

Kernel dependence regularizers and Gaussian processes with applications to algorithmic fairness

Journal

PATTERN RECOGNITION
Volume 132, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108922

Keywords

Fairness; Kernel methods; Gaussian processes; Regularization; Hilbert-Schmidt independence criterion

Funding

  1. European Research Council (ERC) under the ERC [647423]
  2. Alan Turing Institute [EP/N510129/1]

Abstract

The current use of machine learning in industrial, societal, and economic activities has raised concerns about the fairness, equity, and ethics of automated decisions. This study presents a regularization approach that trades predictive accuracy for fairness in terms of statistical parity, addressing biases that machine learning models retain from biased training data.
Current adoption of machine learning in industrial, societal and economic activities has raised concerns about the fairness, equity and ethics of automated decisions. Predictive models are often developed using biased datasets and thus retain or even exacerbate biases in their decisions and recommendations. Removing the sensitive covariates, such as gender or race, is insufficient to remedy this issue since the biases may be retained due to other related covariates. We present a regularization approach to this problem that trades off predictive accuracy of the learned models (with respect to biased labels) for fairness in terms of statistical parity, i.e. independence of the decisions from the sensitive covariates. In particular, we consider a general framework of regularized empirical risk minimization over reproducing kernel Hilbert spaces and impose an additional regularizer of dependence between predictors and sensitive covariates using kernel-based measures of dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its normalized version. This approach leads to a closed-form solution in the case of squared loss, i.e. ridge regression. We also provide statistical consistency results for both the risk and the fairness bound of our approach. Moreover, we show that the dependence regularizer has an interpretation as modifying the corresponding Gaussian process (GP) prior. As a consequence, a GP model with a prior that encourages fairness to sensitive variables can be derived, allowing principled hyperparameter selection and study of the relative relevance of covariates under fairness constraints. Experimental results in synthetic examples and in real problems of income and crime prediction illustrate the potential of the approach to improve the fairness of automated decisions.

(c) 2022 Published by Elsevier Ltd.
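To make the closed-form solution mentioned in the abstract concrete, the following is a minimal sketch (not the authors' released code) of an HSIC-regularized kernel ridge regression. It assumes the biased empirical HSIC estimator with a linear kernel placed on the predictions, so the dependence penalty is quadratic in the dual coefficients and the problem remains solvable in closed form; the function names, RBF kernels, synthetic data and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def hsic_regularized_krr(X, y, S, lam=1e-2, mu=1.0, gamma=1.0):
    """
    Kernel ridge regression with an (empirical, biased) HSIC penalty that
    discourages dependence between the fitted values and the sensitive
    covariates S.  With a linear kernel on the predictions f = K alpha,
    the penalty is (mu/n^2) * alpha^T K H Ks H K alpha, so the objective
    stays quadratic and the dual coefficients have the closed form
        alpha = (K + lam*I + (mu/n^2) * H Ks H K)^{-1} y.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)           # kernel on inputs
    Ks = rbf_kernel(S, S, gamma)          # kernel on sensitive covariates
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    A = K + lam * np.eye(n) + (mu / n**2) * (H @ Ks @ H @ K)
    alpha = np.linalg.solve(A, y)
    return alpha, K

# Tiny synthetic check: a target strongly correlated with a binary sensitive variable.
rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, size=(n, 1)).astype(float)            # sensitive covariate
X = np.hstack([rng.normal(size=(n, 2)),                       # neutral features
               s + 0.1 * rng.normal(size=(n, 1))])            # proxy for s
y = X[:, 0] + 2.0 * s[:, 0] + 0.1 * rng.normal(size=n)

for mu in (0.0, 1e4):
    alpha, K = hsic_regularized_krr(X, y, s, lam=1e-2, mu=mu)
    f = K @ alpha
    print(f"mu={mu:g}: corr(predictions, sensitive) = {np.corrcoef(f, s[:, 0])[0, 1]:.3f}")
```

Increasing mu trades predictive fit for weaker dependence between the fitted values and the sensitive covariate, which is the accuracy-fairness trade-off the abstract describes; mu = 0 recovers ordinary kernel ridge regression.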
