Article

Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations

Journal

RADIOLOGY
Volume 307, Issue 5, Pages -

Publisher

RADIOLOGICAL SOC NORTH AMERICA (RSNA)
DOI: 10.1148/radiol.230582

Keywords

-

ChatGPT is a powerful AI language model with potential in medical practice and education, but its performance in radiology is uncertain.
Background: ChatGPT is a powerful artificial intelligence large language model with great potential as a tool in medical practice and education, but its performance in radiology remains unclear.

Purpose: To assess the performance of ChatGPT on radiology board-style examination questions without images and to explore its strengths and limitations.

Materials and Methods: In this exploratory prospective study performed from February 25 to March 3, 2023, 150 multiple-choice questions designed to match the style, content, and difficulty of the Canadian Royal College and American Board of Radiology examinations were grouped by question type (lower-order [recall, understanding] and higher-order [apply, analyze, synthesize] thinking) and topic (physics, clinical). The higher-order thinking questions were further subclassified by type (description of imaging findings, clinical management, application of concepts, calculation and classification, disease associations). ChatGPT performance was evaluated overall, by question type, and by topic. Confidence of language in responses was assessed. Univariable analysis was performed.

Results: ChatGPT answered 69% of questions correctly (104 of 150). The model performed better on questions requiring lower-order thinking (84%, 51 of 61) than on those requiring higher-order thinking (60%, 53 of 89) (P = .002). When compared with lower-order questions, the model performed worse on questions involving description of imaging findings (61%, 28 of 46; P = .04), calculation and classification (25%, two of eight; P = .01), and application of concepts (30%, three of 10; P = .01). ChatGPT performed as well on higher-order clinical management questions (89%, 16 of 18) as on lower-order questions (P = .88). It performed worse on physics questions (40%, six of 15) than on clinical questions (73%, 98 of 135) (P = .02). ChatGPT used confident language consistently, even when incorrect (100%, 46 of 46).

Conclusion: Despite no radiology-specific pretraining, ChatGPT nearly passed a radiology board-style examination without images; it performed well on lower-order thinking questions and clinical management questions but struggled with higher-order thinking questions involving description of imaging findings, calculation and classification, and application of concepts.
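The univariable comparisons in the Results can be checked from the reported counts alone. Below is a minimal sketch, assuming a chi-square test on the 2x2 table of correct/incorrect counts for lower- versus higher-order questions; the abstract does not state which univariable test the authors actually used, so this is illustrative only.

# Sketch only: recomputes the lower- vs higher-order comparison from the
# counts given in the abstract. The authors' exact univariable test is not
# specified there; a chi-square test of a 2x2 contingency table is assumed.
from scipy.stats import chi2_contingency

lower_order  = [51, 61 - 51]   # correct, incorrect (84%, 51 of 61)
higher_order = [53, 89 - 53]   # correct, incorrect (60%, 53 of 89)

chi2, p, dof, expected = chi2_contingency([lower_order, higher_order])
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")
# P comes out on the order of .002-.003, consistent with the reported P = .002
# (the exact value depends on the test and continuity correction chosen).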

