Article

Ethical AI in facial expression analysis: racial bias

Journal

Signal, Image and Video Processing
Volume 17, Issue 2, Pages 399-406

Publisher

Springer London Ltd
DOI: 10.1007/s11760-022-02246-8

Keywords

Facial expression recognition (FER); Deep neural networks; Reaction emotion; LSTM

Abstract

Facial expression recognition using deep neural networks has become very popular due to its strong performance. However, the datasets used to develop and test these methods lack a balanced distribution of races among the sample images, leaving open the possibility that the methods are biased toward certain races. This raises a fairness concern, and the lack of research investigating racial bias only deepens it. Moreover, such bias would degrade real-world performance through poor generalization. For these reasons, in this study, we investigated racial bias in popular state-of-the-art facial expression recognition methods such as Deep Emotion, Self-Cure Network, ResNet50, InceptionV3, and DenseNet121. We compiled a curated dataset with images of different races and cross-checked for bias by training the methods on images of one race and testing them on images of other races. We observed that the methods are inclined toward the races included in the training data. Moreover, if the training dataset is imbalanced, an improvement in performance increases the bias as well. Some methods can compensate for the bias if enough variance is provided in the training set, but this does not eliminate the bias completely. Our findings suggest that unbiased performance can be obtained by adding the missing races to the training data in equal proportion.
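The cross-race evaluation described in the abstract amounts to comparing a model's accuracy per racial group and measuring the disparity between groups. The paper does not publish its evaluation code, so the sketch below is a minimal, hypothetical illustration of that idea: the function names, toy labels, and group tags are assumptions, not the authors' implementation.

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of expression predictions, broken down by group tag."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def bias_gap(acc_by_group):
    """Max-minus-min accuracy across groups: one simple disparity measure."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

# Hypothetical toy data: expression class labels 0-3, two group tags.
y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 0, 0, 2, 2, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = per_group_accuracy(y_true, y_pred, groups)
print(acc)            # {'A': 0.75, 'B': 0.5}
print(bias_gap(acc))  # 0.25
```

A gap near zero across racial groups would indicate the unbiased behavior the abstract associates with equally represented training data; a large gap flags the kind of bias the study reports.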
