Article

ChatGPT Is Equivalent to First-Year Plastic Surgery Residents: Evaluation of ChatGPT on the Plastic Surgery In-service Examination

Journal

AESTHETIC SURGERY JOURNAL

Publisher

OXFORD UNIV PRESS INC
DOI: 10.1093/asj/sjad130


ChatGPT, an artificial intelligence language model developed by OpenAI in 2022, was evaluated for its performance on the Plastic Surgery In-Service Examination. Results showed that ChatGPT performed at the level of a first-year resident but poorly compared to more advanced residents. Further research is needed to assess its efficacy despite its potential uses in healthcare and medical education.
Background: ChatGPT is an artificial intelligence language model developed and released by OpenAI (San Francisco, CA) in late 2022.

Objectives: The aim of this study was to evaluate the performance of ChatGPT on the Plastic Surgery In-Service Examination and to compare it to residents' performance nationally.

Methods: The Plastic Surgery In-Service Examinations from 2018 to 2022 were used as a question source. For each question, the stem and all multiple-choice options were imported into ChatGPT. The 2022 examination was used to compare the performance of ChatGPT to plastic surgery residents nationally.

Results: In total, 1129 questions were included in the final analysis and ChatGPT answered 630 (55.8%) of these correctly. ChatGPT scored the highest on the 2021 exam (60.1%) and on the comprehensive section (58.7%). There were no significant differences in the proportion of questions answered correctly among exam years or among the different exam sections. ChatGPT answered 57% of questions correctly on the 2022 exam. When compared to the performance of plastic surgery residents in 2022, ChatGPT would rank in the 49th percentile for first-year integrated plastic surgery residents, the 13th percentile for second-year residents, the 5th percentile for third- and fourth-year residents, and the 0th percentile for fifth- and sixth-year residents.

Conclusions: ChatGPT performs at the level of a first-year resident on the Plastic Surgery In-Service Examination. However, it performed poorly when compared with residents in more advanced years of training. Although ChatGPT has many undeniable benefits and potential uses in the fields of healthcare and medical education, additional research is required to assess its efficacy.

