3.8 Proceedings Paper

Harnessing Biomedical Literature to Calibrate Clinicians' Trust in AI Decision Support Systems

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3544548.3581393

Keywords

Clinical AI; XAI; Biomedical Literature; Qualitative Method

Abstract

Clinical decision support tools (DSTs), powered by Artificial Intelligence (AI), promise to improve clinicians' diagnostic and treatment decision-making. However, no AI model is always correct. DSTs must enable clinicians to validate each AI suggestion, convincing them to accept the correct suggestions while rejecting the erroneous ones. While prior work often tried to do so by explaining the AI's inner workings or performance, we chose a different approach: we investigated how clinicians validated each other's suggestions in practice (often by referencing scientific literature) and designed a new DST that embraces these naturalistic interactions. This design uses GPT-3 to draw on literature evidence that shows the robustness and applicability of AI suggestions (or the lack thereof). A prototyping study with clinicians from three disease areas showed this approach to be promising. Clinicians' interactions with the prototype also revealed new design and research opportunities around (1) harnessing the complementary strengths of literature-based and predictive decision support; (2) mitigating the risk of de-skilling clinicians; and (3) offering low-data decision support with literature.
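
The abstract describes a design in which a large language model surfaces literature evidence for or against a specific AI suggestion, mirroring how clinicians cite publications when validating each other's recommendations. The sketch below is only an illustration of that general idea, not the authors' implementation: the evidence_for_suggestion helper, the prompt wording, the model name, and the example inputs are all assumptions, and it presumes an OpenAI-style Python client with an API key in the environment.

```python
# Illustrative sketch only; not the paper's implementation.
# Assumes the `openai` package (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()


def evidence_for_suggestion(patient_summary: str, ai_suggestion: str) -> str:
    """Ask a language model to summarize published evidence that supports or
    undermines an AI suggestion for a given patient."""
    prompt = (
        "You are assisting a clinician who is validating an AI-generated recommendation.\n"
        f"Patient summary: {patient_summary}\n"
        f"AI suggestion: {ai_suggestion}\n"
        "Summarize the published biomedical evidence that supports this suggestion, "
        "the evidence that argues against it, and whether this patient falls outside "
        "the populations studied."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper used a GPT-3-era model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example inputs for illustration only.
    print(evidence_for_suggestion(
        "68-year-old with stage II colon cancer, post-resection",
        "Recommend adjuvant chemotherapy",
    ))
```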

