4.6 Article

ChatGPT outscored human candidates in a virtual objective structured clinical examination in obstetrics and gynecology

Journal: American Journal of Obstetrics and Gynecology

Publisher: MOSBY-ELSEVIER
DOI: 10.1016/j.ajog.2023.04.020

Keywords

artificial intelligence; Chat Generative Pre-trained Transformer; objective structured clinical examination; obstetrics and gynecology; postgraduate specialty training; reasoning


This study investigated the ability of ChatGPT to engage with a healthcare assessment setting by completing a mock objective structured clinical examination. ChatGPT outperformed human candidates in several knowledge areas and generated factually accurate, contextually relevant answers.
BACKGROUND: Natural language processing is a form of artificial intelligence that allows human users to interface with a machine without using complex codes. The ability of natural language processing systems such as ChatGPT to successfully engage with healthcare systems requiring fluid reasoning, specialist data interpretation, and empathetic communication in an unfamiliar and evolving environment is poorly studied. This study investigated whether the ChatGPT interface could engage with and complete a mock objective structured clinical examination simulating assessment for membership of the Royal College of Obstetricians and Gynaecologists.

OBJECTIVE: This study aimed to determine whether ChatGPT, without additional training, would achieve a score at least equivalent to that achieved by human candidates who sat virtual objective structured clinical examinations in Singapore.

STUDY DESIGN: This study was conducted in 2 phases. In the first phase, a total of 7 structured discussion questions were selected from 2 historical cohorts (cohorts A and B) of objective structured clinical examination questions. ChatGPT was examined using these questions, and its responses were recorded in a script. Two human candidates (acting as anonymizers) were examined on the same questions using videoconferencing, and their responses were transcribed verbatim into written scripts. The 3 sets of response scripts were mixed, and each set was allocated to 1 of 3 human actors. In the second phase, the actors presented these scripts to examiners in response to the same examination questions. These responses were blind scored by 14 qualified examiners. ChatGPT scores were then unblinded and compared with historical human candidate performance scores.

RESULTS: The average score given to ChatGPT by the 14 examiners was 77.2%. The average historical human score (n=26 candidates) was 73.7%. ChatGPT demonstrated sizable performance improvements over the average human candidate in several subject domains. The median time taken for ChatGPT to complete each station was 2.54 minutes, well within the 10 minutes allowed.

CONCLUSION: ChatGPT generated factually accurate and contextually relevant structured discussion answers to complex and evolving clinical questions based on unfamiliar settings within a very short time. ChatGPT outperformed human candidates in several knowledge areas. Not all examiners were able to discern between human and ChatGPT responses. Our data highlight the emergent ability of natural language processing models to demonstrate fluid reasoning in unfamiliar environments and to compete successfully with human candidates who have undergone extensive specialist training.
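The authors posed each structured discussion question to the public ChatGPT web interface and recorded the response as a script for blinded scoring. As a rough illustration of that phase-1 workflow, the sketch below automates the same idea via the OpenAI API; the model name, question stem, and timing harness are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: pose an OSCE-style structured discussion question to a
# chat model, time the response against the 10-minute station limit, and
# capture the transcript as a "script" for blinded scoring.
# Assumptions: openai>=1.0 installed, OPENAI_API_KEY set in the environment,
# and "gpt-3.5-turbo" as a stand-in for the ChatGPT version examined.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder question stem; the study's actual questions are not public here.
question = "A 32-year-old woman at 34 weeks' gestation presents with ..."

start = time.monotonic()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
elapsed_min = (time.monotonic() - start) / 60

script = response.choices[0].message.content
print(f"Station completed in {elapsed_min:.2f} min (limit: 10 min)")
print(script)  # saved verbatim, mixed with human scripts, then blind scored
```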

