Journal
AMERICAN JOURNAL OF SURGERY
Volume 211, Issue 2, Pages 398-404
Publisher
EXCERPTA MEDICA INC-ELSEVIER SCIENCE INC
DOI: 10.1016/j.amjsurg.2015.09.005
Keywords
Surgical skills education; Psychomotor skills; Surgical skills assessment; Crowd sourced data
Funding
- C-SATS
Abstract
BACKGROUND: Objective, unbiased assessment of surgical skills remains a challenge in surgical education. We sought to evaluate the feasibility and reliability of Crowd-Sourced Assessment of Technical Skills.
METHODS: Seven volunteer general surgery interns were given time for training and then testing on laparoscopic peg transfer, precision cutting, and intracorporeal knot-tying. Six faculty experts (FEs) and 203 Amazon.com Mechanical Turk crowd workers (CWs) evaluated 21 deidentified video clips using the validated Global Objective Assessment of Laparoscopic Skills rating instrument.
RESULTS: We received 662 eligible ratings from the 203 CWs within 19 hours and 15 minutes, and 126 ratings from the 6 FEs over 10 days. FE video ratings showed borderline internal consistency (Krippendorff's alpha = .55). FE ratings were highly correlated with CW ratings (Pearson's correlation coefficient = .78, P < .001).
CONCLUSION: We propose the use of Crowd-Sourced Assessment of Technical Skills as a reliable, basic tool to standardize the evaluation of technical skills in general surgery. (C) 2016 Elsevier Inc. All rights reserved.
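The headline result is a Pearson correlation of .78 between faculty-expert (FE) and crowd-worker (CW) ratings. As a minimal sketch of what that comparison involves (not code from the study), the snippet below computes Pearson's correlation coefficient between two raters' mean scores per video. All rating values here are made up for illustration; the paper's actual GOALS scores are not reproduced.

```python
# Illustrative sketch: Pearson's correlation between paired mean ratings,
# e.g. faculty-expert (FE) vs. crowd-worker (CW) scores per video clip.
# The numbers below are hypothetical, not the study's data.
from math import sqrt

def pearson(xs, ys):
    """Pearson's correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean GOALS-style scores for the same five videos
fe_means = [12.0, 15.5, 9.0, 18.2, 11.4]
cw_means = [13.1, 14.8, 10.2, 17.5, 12.0]

r = pearson(fe_means, cw_means)
print(round(r, 3))
```

A coefficient near 1 indicates that crowd workers rank the videos much as the experts do, which is the sense in which the study treats crowd assessment as a proxy for expert review.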