Proceedings Paper

InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICPR48806.2021.9412443


Funding

  1. project BIBECA [RTI2018-101248-B-I00 MINECO/FEDER]
  2. project TRESPASS [MSCA-ITN-2019-860813]
  3. project PRIMA [MSCA-ITN-2019-860315]
  4. Accenture
  5. Spanish CAM


Abstract

This work explores the biases in learning processes based on deep neural network architectures. We analyze how bias affects deep learning through a toy example using the MNIST database and a case study in gender detection from face images. We employ two gender detection models based on popular deep neural networks and present a comprehensive analysis of how an unbalanced training dataset affects the features learned by the models. We show how bias impacts the activations of gender detection models based on face images. Finally, we propose InsideBias, a novel method to detect biased models. InsideBias is based on how the models represent the information rather than how they perform, which is the usual practice in other existing methods for bias detection. Our strategy with InsideBias allows biased models to be detected with very few samples (only 15 images in our case study). Our experiments include 72K face images from 24K identities and 3 ethnic groups.
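
The abstract describes InsideBias as comparing how a model internally represents information across groups rather than comparing its output performance. As a rough, hypothetical illustration of that idea only (not the authors' implementation), the sketch below compares mean layer activations of a toy CNN between two small image groups; the model, the random data, and the 0.8 activation-ratio threshold are all assumptions chosen for illustration.

# Minimal sketch of the activation-comparison idea behind InsideBias.
# NOT the authors' implementation: the model, data, and threshold here
# are placeholders chosen for illustration.
import torch
import torch.nn as nn

class SmallGenderNet(nn.Module):
    """Tiny stand-in for the face gender-detection CNNs discussed in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(16 * 4 * 4, 2)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

def mean_layer_activations(model, images):
    """Average absolute activation of each ReLU layer over a batch of images."""
    stats, hooks = [], []
    for layer in model.features:
        if isinstance(layer, nn.ReLU):
            hooks.append(layer.register_forward_hook(
                lambda _m, _i, out: stats.append(out.abs().mean().item())))
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return stats

model = SmallGenderNet().eval()
# Placeholder batches; in practice these would be small sets of face images
# from two demographic groups (only ~15 per group in the paper's case study).
group_a = torch.rand(15, 3, 64, 64)
group_b = torch.rand(15, 3, 64, 64)

acts_a = mean_layer_activations(model, group_a)
acts_b = mean_layer_activations(model, group_b)

# If a layer responds much more strongly to one group than the other,
# flag the model as potentially biased. The 20% tolerance is arbitrary.
for i, (a, b) in enumerate(zip(acts_a, acts_b)):
    ratio = min(a, b) / max(a, b)
    print(f"layer {i}: activation ratio {ratio:.2f}",
          "(possible bias)" if ratio < 0.8 else "")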

