Journal
Patterns
Volume 2, Issue 10
Publisher
Cell Press
DOI: 10.1016/j.patter.2021.100347
Funding
- Mozilla Foundation
- Swiss National Science Foundation [320030_188737]
- Interfaculty Research Cooperation "Decoding Sleep: From Neurons to Health & Mind" of the University of Bern
Artificial intelligence has great potential in clinical decision making, but algorithmic bias is a major challenge that needs to be addressed. If training data does not represent population variability, AI is at risk of reinforcing bias, leading to serious consequences.
Artificial intelligence (AI) has astonishing potential to assist clinical decision making and revolutionize the field of health care. A major open challenge that AI must address before its integration into clinical routine is that of algorithmic bias. Most AI algorithms need large datasets to learn from, but several groups of the human population have a long history of being absent from, or misrepresented in, existing biomedical datasets. If the training data misrepresent population variability, AI is prone to reinforcing bias, which can lead to fatal outcomes, misdiagnoses, and lack of generalization. Here, we describe the challenges in making AI algorithms fairer, and we propose concrete steps for addressing bias using tools from the field of open science.