Article

Addressing fairness in artificial intelligence for medical imaging

Journal

NATURE COMMUNICATIONS
Volume 13, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41467-022-32186-3

Keywords

-

Funding

  1. Fundar foundation
  2. Argentina's National Scientific and Technical Research Council (CONICET)
  3. ARPH.AI project - International Development Research Centre (IDRC) [109584]
  4. Swedish International Development Cooperation Agency (SIDA)
  5. Universidad Nacional del Litoral [CAID-PIC-50220140100084LI, 50620190100145LI]
  6. Agencia Nacional de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación [PICT 2018-3907, PRH 2017-0003]
  7. Santa Fe Agency for Science, Technology and Innovation [IO-138-19]

Abstract

AI systems in medical imaging can exhibit unfair biases, so it is important to clarify what fairness means in this context, identify the potential sources of bias, and implement strategies to mitigate them. An analysis of the current state of the field reveals strengths and areas for improvement, along with the challenges and opportunities that lie ahead.
A plethora of work has shown that AI systems can be systematically and unfairly biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are increasingly being adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of bias, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting gaps, challenges, and opportunities that lie ahead.
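The article itself contains no code, but the kind of group-fairness audit it discusses can be made concrete with a minimal sketch. The Python example below is an illustration only, not material from the paper: the arrays y_true, y_pred, and group are hypothetical, and the check shown (comparing a classifier's true-positive rate, i.e. sensitivity, across patient subgroups) is just one common way such disparities are quantified in this literature.

# Illustrative sketch only; not from the paper. Compares per-subgroup
# sensitivity (true-positive rate) of a binary classifier on made-up data.
import numpy as np

def tpr_by_group(y_true, y_pred, group):
    """Return the true-positive rate (sensitivity) for each subgroup."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)   # positive cases in subgroup g
        rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
    return rates

# Toy data: true diagnosis labels, model predictions, and a protected
# attribute (here sex, recorded as "F"/"M") for ten hypothetical patients.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1, 1, 0])
group  = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

rates = tpr_by_group(y_true, y_pred, group)
gap = max(rates.values()) - min(rates.values())
print(rates)                  # per-group sensitivity
print(f"TPR gap: {gap:.2f}")  # an equalized-odds-style disparity measure

In practice the same comparison can be run over whichever protected attributes are available in the imaging metadata (e.g., sex, age group, skin tone, acquisition site), and a large gap is a signal to investigate data imbalance or to apply one of the mitigation strategies the article surveys.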

Authors

-
