Article

Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination

Journal

MANAGEMENT SCIENCE
Volume 68, Issue 3, Pages 1959-1981

Publisher

INFORMS
DOI: 10.1287/mnsc.2020.3850

Keywords

disparate impact and algorithmic bias; partial identification; proxy variables; fractional optimization; Bayesian Improved Surname Geocoding

Funding

  1. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [Grant 1939704]

Abstract
The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policy making, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental challenge to assessing disparate impacts in practice: protected class membership is often not observed in the data. This is particularly a problem in lending and healthcare. We consider the use of an auxiliary data set, such as the U.S. census, to construct models that predict the protected class from proxy variables, such as surname and geolocation. We show that even with such data, a variety of common disparity measures are generally unidentifiable, providing a new perspective on the documented biases of popular proxy-based methods. We provide exact characterizations of the tightest possible set of all true disparities that are consistent with the data (and possibly additional assumptions). We further provide optimization-based algorithms for computing and visualizing these sets and statistical tools to assess sampling uncertainty. Together, these enable reliable and robust assessments of disparities, an important tool when disparity assessment can have far-reaching policy implications. We demonstrate this in two case studies with real data: mortgage lending and personalized medicine dosing.
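The proxy step the abstract and keywords refer to is Bayesian Improved Surname Geocoding (BISG), which combines a surname-based race distribution with a geolocation-based one via Bayes' rule under a conditional-independence assumption. The following is a minimal sketch of that combination rule using made-up toy probability tables rather than actual census data; the function name and the numbers are illustrative, not the authors' implementation.

```python
import numpy as np

# BISG-style proxy sketch. Assuming surname and geolocation are
# conditionally independent given race, Bayes' rule gives
#   P(race | surname, geo)  proportional to
#   P(race | surname) * P(race | geo) / P(race).

def bisg_proxy(p_race_given_surname, p_race_given_geo, p_race_marginal):
    """Return a normalized proxy distribution over race categories."""
    unnorm = p_race_given_surname * p_race_given_geo / p_race_marginal
    return unnorm / unnorm.sum()

# Toy example with three race categories (hypothetical numbers).
p_s = np.array([0.70, 0.20, 0.10])  # P(race | surname), from a surname table
p_g = np.array([0.30, 0.50, 0.20])  # P(race | geolocation), from a geo table
p_m = np.array([0.60, 0.25, 0.15])  # marginal P(race)

print(bisg_proxy(p_s, p_g, p_m))
```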
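The partial-identification point can be made concrete with the simplest disparity measure, demographic disparity P(Y=1 | A=1) - P(Y=1 | A=0), where the decision Y is observed only in the main data set and the protected class A only in the auxiliary one, linked by a proxy cell Z. Within each cell, the joint of Y and A is constrained only by Fréchet-Hoeffding bounds, so the disparity is set-identified rather than point-identified. The sketch below computes those bounds for a toy problem; it illustrates the identification issue under simplified assumptions and is not the paper's full fractional-optimization machinery.

```python
import numpy as np

def demographic_disparity_bounds(p_z, p_y1_z, p_a1_z):
    """Bounds on P(Y=1|A=1) - P(Y=1|A=0) when Y and A are never observed
    together, only their conditional distributions within each proxy cell z.
    Inputs: p_z = P(Z=z), p_y1_z = P(Y=1|Z=z), p_a1_z = P(A=1|Z=z)."""
    p_a1 = float(np.dot(p_z, p_a1_z))
    p_a0 = 1.0 - p_a1
    # Frechet-Hoeffding bounds on the per-cell joint q(z) = P(Y=1, A=1 | z).
    q_lo = np.maximum(0.0, p_y1_z + p_a1_z - 1.0)
    q_hi = np.minimum(p_y1_z, p_a1_z)
    # Disparity is linear in q with a positive coefficient:
    #   disparity(q) = sum_z p_z * q(z) * (1/p_a1 + 1/p_a0)
    #                  - sum_z p_z * P(Y=1|z) / p_a0,
    # so the extremes are attained at the per-cell corner points.
    coef = 1.0 / p_a1 + 1.0 / p_a0
    base = -float(np.dot(p_z, p_y1_z)) / p_a0
    return base + coef * float(np.dot(p_z, q_lo)), \
           base + coef * float(np.dot(p_z, q_hi))

# Toy example with two proxy cells (hypothetical numbers).
p_z = np.array([0.5, 0.5])
p_y1_z = np.array([0.6, 0.3])  # P(Y=1 | z), estimable from the main data
p_a1_z = np.array([0.7, 0.2])  # P(A=1 | z), estimable from the auxiliary data
print(demographic_disparity_bounds(p_z, p_y1_z, p_a1_z))
```

Even in this two-cell example the identified interval is wide (roughly [-0.21, 0.80]), which is the sense in which proxy data alone cannot pin down the true disparity without further assumptions.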

Authors

Nathan Kallus, Xiaojie Mao, Angela Zhou
