Article

Kernel dependence regularizers and Gaussian processes with applications to algorithmic fairness

Journal

PATTERN RECOGNITION
Volume 132

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108922

Keywords

Fairness; Kernel methods; Gaussian processes; Regularization; Hilbert-Schmidt independence criterion

Funding

  1. European Research Council (ERC) [647423]
  2. Alan Turing Institute [EP/N510129/1]

Abstract

Current adoption of machine learning in industrial, societal and economic activities has raised concerns about the fairness, equity and ethics of automated decisions. Predictive models are often developed using biased datasets and thus retain or even exacerbate biases in their decisions and recommendations. Removing the sensitive covariates, such as gender or race, is insufficient to remedy this issue since the biases may be retained due to other related covariates. We present a regularization approach to this problem that trades off predictive accuracy of the learned models (with respect to biased labels) for fairness in terms of statistical parity, i.e. independence of the decisions from the sensitive covariates. In particular, we consider a general framework of regularized empirical risk minimization over reproducing kernel Hilbert spaces and impose an additional regularizer of dependence between predictors and sensitive covariates using kernel-based measures of dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its normalized version. This approach leads to a closed-form solution in the case of squared loss, i.e. ridge regression. We also provide statistical consistency results for both risk and fairness bounds for our approach. Moreover, we show that the dependence regularizer has an interpretation as modifying the corresponding Gaussian process (GP) prior. As a consequence, a GP model with a prior that encourages fairness to sensitive variables can be derived, allowing principled hyperparameter selection and studying of the relative relevance of covariates under fairness constraints. Experimental results in synthetic examples and in real problems of income and crime prediction illustrate the potential of the approach to improve fairness of automated decisions. © 2022 Published by Elsevier Ltd.
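To make the idea concrete, below is a minimal sketch of an HSIC-penalized ridge regression with linear kernels on both the predictions and the sensitive covariates. This is an illustrative simplification, not the paper's exact formulation (which works in a general RKHS and also covers a normalized HSIC): with linear kernels the HSIC penalty on the predictions stays quadratic in the weights, so the regularized problem retains a closed-form solution, mirroring the closed-form ridge case mentioned in the abstract. The function names `hsic` and `fair_ridge` are ours.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC between two kernel matrices: tr(K H L H) / (n-1)^2."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def fair_ridge(X, y, S, lam=1.0, mu=0.0):
    """Ridge regression with a linear-kernel HSIC penalty between the
    predictions X w and the sensitive covariates S.

    Objective (quadratic in w, hence closed form):
        min_w ||y - X w||^2 + lam ||w||^2 + mu (X w)^T H Ls H (X w)
    Setting the gradient to zero gives
        w = (X^T X + lam I + mu X^T H Ls H X)^{-1} X^T y
    """
    n, d = X.shape
    H = np.eye(n) - np.ones((n, n)) / n
    Ls = S @ S.T  # linear kernel on the sensitive covariates
    A = X.T @ X + lam * np.eye(d) + mu * X.T @ H @ Ls @ H @ X
    return np.linalg.solve(A, X.T @ y)
```

Increasing `mu` trades predictive accuracy for reduced dependence between the predictions and the sensitive covariates: at `mu = 0` this is ordinary ridge regression, and as `mu` grows the fitted predictions are pushed toward statistical parity.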

