Article

Opening the black box of AI-Medicine

Journal

JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY
Volume 36, Issue 3, Pages 581-584

Publisher

WILEY
DOI: 10.1111/jgh.15384

Keywords

black box; gastroenterology; medicine


Abstract
One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regard as a black box. Machine learning (ML) can assist in reading radiological, endoscopic, and histological images, suggesting diagnoses, predicting disease outcomes, and even recommending therapeutic and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinicians' doubts, patients' lack of confidence in AI-powered technologies also hampers development. While patients may accept that human errors can occur, little tolerance for machine error is anticipated. To implement AI medicine successfully, the interpretability of ML algorithms needs to improve. Opening the black box in AI medicine requires a stepwise approach: incorporating small steps of biological explanation and clinical experience into ML algorithms can help build trust and acceptance. AI software developers will have to demonstrate clearly that when ML technologies are integrated into the clinical decision-making process, they actually help improve clinical outcomes. Enhancing the interpretability of ML algorithms is a crucial step in adopting AI in medicine.
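The abstract argues for improving the interpretability of ML algorithms but does not name a specific technique. One common model-agnostic approach is permutation importance: shuffle one input feature and measure how much the model's predictions change. The sketch below is a minimal illustration under assumed toy inputs; the feature names (`age`, `biomarker`, `lesion_size`), the linear risk score standing in for a black-box model, and all numeric values are hypothetical and not taken from the editorial.

```python
import random

# Hypothetical linear "black box": a weighted risk score over three
# assumed clinical features. Weights are illustrative only.
WEIGHTS = {"age": 0.02, "biomarker": 0.5, "lesion_size": 0.3}

def risk_score(patient):
    """Stand-in for an opaque model: weighted sum of feature values."""
    return sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)

def permutation_importance(patients, feature, trials=100, seed=0):
    """Shuffle one feature across patients and return the mean absolute
    change in the model's output -- larger means the model leans more
    heavily on that feature."""
    rng = random.Random(seed)
    baseline = [risk_score(p) for p in patients]
    total = 0.0
    for _ in range(trials):
        shuffled = [p[feature] for p in patients]
        rng.shuffle(shuffled)
        for p, v, b in zip(patients, shuffled, baseline):
            q = dict(p)
            q[feature] = v
            total += abs(risk_score(q) - b)
    return total / (trials * len(patients))

# Toy cohort of four synthetic patients (values are invented).
patients = [
    {"age": 65, "biomarker": 2.1, "lesion_size": 1.2},
    {"age": 48, "biomarker": 0.7, "lesion_size": 3.5},
    {"age": 72, "biomarker": 1.5, "lesion_size": 0.8},
    {"age": 55, "biomarker": 3.2, "lesion_size": 2.0},
]

for f in WEIGHTS:
    print(f, round(permutation_importance(patients, f), 3))
```

Explanations like this let a clinician see *which* inputs drive a prediction, which is one small, concrete step toward the trust-building the editorial calls for.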

