Proceedings Paper

Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

Venue

COMPUTER VISION - ECCV 2018 WORKSHOPS, PT I
Volume 11129, Pages 556-572

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-11009-3_34

Keywords

Dataset bias; Face attribute classification; Ancestral origin dataset

Funding

  1. EPSRC [EP/G036861/1, Seebibyte EP/M013774/1] (UKRI)
  2. MRC [MR/M014568/1] (UKRI)


Neural networks achieve the state of the art in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions: (1) An algorithm that can remove multiple sources of variation from the feature representation of a network. We demonstrate that this algorithm can be used to remove biases from the feature representation, and thereby improve classification accuracies, when training networks on extremely biased datasets. (2) An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. We demonstrate on this dataset, for a number of facial attribute classification tasks, that we are able to remove racial biases from the network feature representation.
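The abstract does not spell out the removal mechanism, but a common ingredient in this line of work is a "confusion" loss: a secondary classifier is trained to predict the bias variable from the embedding, while the feature extractor is penalised whenever that classifier's output deviates from a uniform distribution over the bias classes. As a minimal, illustrative sketch (the `softmax` and `confusion_loss` helpers below are assumptions for illustration, not the authors' exact formulation):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confusion_loss(bias_logits):
    """Cross-entropy between the bias classifier's softmax output and the
    uniform distribution over the K bias classes. It is minimised exactly
    when the classifier is maximally confused, i.e. when the embedding
    carries no information about the bias variable."""
    p = softmax(bias_logits)
    k = p.shape[1]
    return -np.mean(np.sum(np.log(p + 1e-12) / k, axis=1))

# A confident bias prediction incurs a higher penalty than a uniform one,
# so minimising this term pushes the embedding toward bias invariance.
confident = np.array([[8.0, -8.0]])
uniform = np.array([[0.0, 0.0]])
assert confusion_loss(confident) > confusion_loss(uniform)
```

In a full training loop, this term would be added (with a weighting factor) to the primary task loss when updating the feature extractor, while the bias classifier itself is updated with an ordinary cross-entropy loss on the bias labels.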

