Article

Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination

Journal

Management Science
Volume 68, Issue 3, Pages 1959-1981

Publisher

INFORMS
DOI: 10.1287/mnsc.2020.3850

Keywords

disparate impact and algorithmic bias; partial identification; proxy variables; fractional optimization; Bayesian Improved Surname Geocoding

Funding

  1. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1939704]

Abstract

The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policy making, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental challenge to assessing disparate impacts in practice: protected class membership is often not observed in the data. This is particularly a problem in lending and healthcare. We consider the use of an auxiliary data set, such as the U.S. census, to construct models that predict the protected class from proxy variables, such as surname and geolocation. We show that even with such data, a variety of common disparity measures are generally unidentifiable, providing a new perspective on the documented biases of popular proxy-based methods. We provide exact characterizations of the tightest possible set of all true disparities that are consistent with the data (and possibly additional assumptions). We further provide optimization-based algorithms for computing and visualizing these sets and statistical tools to assess sampling uncertainty. Together, these enable reliable and robust assessments of disparities: an important tool when disparity assessment can have far-reaching policy implications. We demonstrate this in two case studies with real data: mortgage lending and personalized medicine dosing.
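The proxy approach and the identification problem described in the abstract can be illustrated with a toy sketch. All probability tables, surnames, tract names, and the single-cell bound formula below are illustrative assumptions, not the paper's census data or its exact characterization of the identified set: the Bayes-rule step mirrors the Bayesian Improved Surname Geocoding idea, and the Fréchet-style bounds only show, within one proxy cell, why a protected group's outcome rate is partially identified when class membership is unobserved.

```python
# Toy sketch of a BISG-style proxy model and of partial identification.
# All numbers here are made up; the paper uses U.S. census tables and a
# sharper, optimization-based characterization of the identified set.

# P(race | surname), as from a census surname list (toy values).
P_RACE_GIVEN_SURNAME = {
    "garcia": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "other": 0.02},
    "smith":  {"white": 0.71, "black": 0.23, "hispanic": 0.02, "other": 0.04},
}

# P(tract | race), as from census geography tables (toy values).
P_TRACT_GIVEN_RACE = {
    "tract_A": {"white": 0.002, "black": 0.010, "hispanic": 0.004, "other": 0.003},
}

def bisg(surname: str, tract: str) -> dict:
    """Posterior P(race | surname, tract) via Bayes' rule, assuming
    surname and geolocation are independent given race."""
    prior = P_RACE_GIVEN_SURNAME[surname]
    likelihood = P_TRACT_GIVEN_RACE[tract]
    unnorm = {race: prior[race] * likelihood[race] for race in prior}
    total = sum(unnorm.values())
    return {race: mass / total for race, mass in unnorm.items()}

def group_rate_bounds(n: int, k: int, p: float) -> tuple:
    """Frechet-style bounds on the protected-group success rate inside one
    proxy cell: n units, k observed successes, protected share p, with
    individual class membership unobserved."""
    m = n * p                                # protected count in the cell
    lo = max(0.0, k - n * (1.0 - p)) / m     # pack successes into the other group
    hi = min(float(k), m) / m                # pack successes into the protected group
    return lo, hi

probs = bisg("smith", "tract_A")  # posterior shifts toward groups common in the tract
weak = group_rate_bounds(100, 30, 0.2)    # uninformative proxy -> vacuous bounds (0.0, 1.0)
strong = group_rate_bounds(100, 30, 0.9)  # informative proxy -> tighter bounds
```

The contrast between `weak` and `strong` is the point: the same observed outcomes are consistent with a whole interval of true group rates, and the interval only narrows as the proxy becomes more informative, which is why the paper reports identified sets rather than point estimates.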

