Journal
MEDICAL TEACHER
Volume 42, Issue 1, Pages 46-51
Publisher
TAYLOR & FRANCIS LTD
DOI: 10.1080/0142159X.2019.1652260
Funding
- Faculty of Medicine, University of Ottawa
Background: It is widely held that OSCE checklists are not sensitive to increasing levels of expertise, whereas rating scales are. This claim rests primarily on a study that used two psychiatry stations, and it is unclear to what degree the finding generalizes to other clinical contexts. The purpose of our study was to reexamine the relationship between increasing training and scoring instruments within an OSCE.

Approach: A 9-station OSCE progress test was administered to Internal Medicine residents in post-graduate years (PGY) 1-4. Residents were scored using both checklists and rating scales. Standard scores from three administrations (27 stations) were analyzed.

Findings: Only one station produced a result in which checklist scores did not increase as a function of training level while rating-scale scores did. For 13 stations, scores increased as a function of PGY equally for both checklists and rating scales.

Conclusion: Checklist scores were as sensitive to level of training as rating scales for most stations, suggesting that checklists can capture increasing levels of expertise. The choice of measure should be based on the purpose of the examination, not on a belief that one measure better captures increases in expertise.