4.5 Article

Performance of ChatGPT on Specialty Certificate Examination in Dermatology multiple-choice questions

Journal

Clinical and Experimental Dermatology
Volume -, Issue -, Pages -

Publisher

Oxford University Press
DOI: 10.1093/ced/llad197

Keywords

-

Categories

-

ChatGPT is a language model trained by OpenAI that can perform language-based tasks and answer multiple-choice questions. In testing, ChatGPT-4 achieved a score of 90% on sample dermatology questions, exceeding the typical pass mark. The use of advanced artificial intelligence in medicine has significant educational and clinical implications.
ChatGPT is a large language model trained on increasingly large datasets by OpenAI to perform language-based tasks. It is capable of answering multiple-choice questions, such as those posed by the Specialty Certificate Examination (SCE) in Dermatology. We asked two iterations of ChatGPT, ChatGPT-3.5 and ChatGPT-4, 84 multiple-choice questions from the SCE in Dermatology sample question bank. ChatGPT-3.5 achieved an overall score of 63%, and ChatGPT-4 scored 90% (a significant improvement in performance; P < 0.001). The typical pass mark for the SCE in Dermatology is 70-72%. ChatGPT-4 is therefore capable of answering clinical questions and achieving a passing grade on these sample questions. There are many possible educational and clinical implications for increasingly advanced artificial intelligence (AI) and its use in medicine, including in the diagnosis of dermatological conditions. Such advances should be embraced provided that patient safety is a core tenet, and the limitations of AI in the nuances of complex clinical cases are recognized.
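The abstract does not report the raw marks or the statistical test behind the P < 0.001 comparison. As a rough illustration only, the sketch below back-calculates plausible counts from the reported percentages (roughly 53/84 and 76/84 correct, an assumption) and compares the two proportions with Fisher's exact test in Python; the authors may have used a different method, for example a paired test such as McNemar's, since both models answered the same 84 questions.

    # Illustrative significance check for the reported scores (not the authors' code).
    # Assumed raw counts are back-calculated from the abstract's percentages:
    # 63% and 90% of 84 questions -> roughly 53/84 and 76/84 correct.
    from scipy import stats

    N_QUESTIONS = 84
    correct_gpt35 = 53   # ~63% of 84 (assumed count)
    correct_gpt4 = 76    # ~90% of 84 (assumed count)

    # 2x2 contingency table: rows = model, columns = (correct, incorrect)
    table = [
        [correct_gpt35, N_QUESTIONS - correct_gpt35],
        [correct_gpt4, N_QUESTIONS - correct_gpt4],
    ]

    # Unpaired Fisher's exact test on the two proportions
    odds_ratio, p_value = stats.fisher_exact(table)
    print(f"ChatGPT-3.5: {correct_gpt35 / N_QUESTIONS:.0%}, "
          f"ChatGPT-4: {correct_gpt4 / N_QUESTIONS:.0%}, "
          f"Fisher's exact p = {p_value:.4g}")

With these assumed counts the two-sided p-value comes out well below 0.001, consistent with the significance level reported in the abstract.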

Authors


Reviews

Primary Rating

4.5 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
