Article

Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19

Journal

FRONTIERS IN PHYSIOLOGY
Volume 12

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fphys.2021.778720

Keywords

COVID-19; severity prediction; artificial intelligence bias; gender dependent bias; feature importance

Funding

  1. Korea Medical Device Development Fund grant, funded by the Korean Government (Ministry of Science and ICT; Ministry of Trade, Industry and Energy; Ministry of Health and Welfare; Ministry of Food and Drug Safety) [NRF-2020R1A2C1014829, KMDF_PR_20200901_0095]

This study investigates the model bias that arises when medical AI models are trained on data from only one gender. The findings show that a model trained on gender-specific data loses accuracy when applied to test data from the opposite gender, highlighting the importance of mitigating gender bias in AI models used for prediction in healthcare applications.
Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when a model is trained on data from only one gender and aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical records of coronavirus disease (COVID-19) patients. For 5,601 confirmed COVID-19 patients, we used 37 early-stage medical-record variables, namely basic patient information, physical indices, initial examination findings, clinical findings, comorbidities, and general blood test results. To investigate gender-based AI model bias, we trained and evaluated two separate models: one trained using only the male group and the other using only the female group. When the model trained on the male-group data was applied to the female testing data, the overall performance decreased: sensitivity from 0.93 to 0.86, specificity from 0.92 to 0.86, accuracy from 0.92 to 0.86, balanced accuracy from 0.93 to 0.86, and area under the curve (AUC) from 0.97 to 0.94. Similarly, when the model trained on the female-group data was applied to the male testing data, the overall performance again decreased: sensitivity from 0.97 to 0.90, specificity from 0.96 to 0.91, accuracy from 0.96 to 0.91, balanced accuracy from 0.96 to 0.90, and AUC from 0.97 to 0.95. Furthermore, when each gender-dependent model was evaluated on test data from the same gender used for training, the resulting accuracy was also lower than that of the unbiased model.
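The abstract describes a cross-gender evaluation protocol: train a severity classifier on one gender's records, then compare its metrics on same-gender and opposite-gender test sets. The paper's actual model, features, and preprocessing are not specified here, so the sketch below is only illustrative: it assumes a gradient-boosted classifier and placeholder names (covid19_early_records.csv, gender, severe) to show how such a comparison could be set up with scikit-learn.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

def evaluate(model, X, y):
    """Return sensitivity, specificity, accuracy, balanced accuracy, and AUC."""
    pred = model.predict(X)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": acc,
        "balanced_accuracy": (sens + spec) / 2,
        "auc": roc_auc_score(y, model.predict_proba(X)[:, 1]),
    }

# One row per patient: early-stage feature columns, a binary severity label
# ("severe"), and a "gender" column. Column names are placeholders.
df = pd.read_csv("covid19_early_records.csv")
feature_cols = [c for c in df.columns if c not in ("severe", "gender")]

results = {}
for train_gender, other_gender in [("M", "F"), ("F", "M")]:
    grp = df[df["gender"] == train_gender]
    # Hold out part of the training gender for a same-gender test set.
    X_tr, X_same, y_tr, y_same = train_test_split(
        grp[feature_cols], grp["severe"], test_size=0.2,
        stratify=grp["severe"], random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    # Compare same-gender performance with opposite-gender performance.
    other = df[df["gender"] == other_gender]
    results[(train_gender, train_gender)] = evaluate(model, X_same, y_same)
    results[(train_gender, other_gender)] = evaluate(
        model, other[feature_cols], other["severe"])

for (trained_on, tested_on), metrics in results.items():
    print(f"trained on {trained_on}, tested on {tested_on}: {metrics}")
```

A drop in the opposite-gender rows relative to the same-gender rows would correspond to the kind of gender-dependent bias the study reports; an unbiased baseline would be trained on the pooled data of both genders.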

Authors

