Article

Development and Validation of the Metric-Based Assessment of a Robotic Dissection Task on an Avian Model

Journal

JOURNAL OF SURGICAL RESEARCH
Volume 277, Pages 224-234

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jss.2022.02.056

Keywords

Construct validation; Dissection skills; Proficiency-based metrics; Robotic surgical training

This study developed and validated performance metrics for a robotic dissection task on a chicken model, demonstrating their reliability and validity. The expert group outperformed the novice group on both completion time and errors.
Abstract

Introduction: The introduction of robot-assisted surgical devices requires objective performance metrics to verify performance levels.

Objective: To develop and validate (for face, content, response process, and construct validity) the performance metrics for a robotic dissection task using a chicken model.

Methods: In a procedure characterization, we developed the performance metrics (i.e., procedure steps, errors, and critical errors) for a robotic dissection task using a chicken model. In a modified Delphi panel, 14 experts from four European Union countries agreed on the steps, errors, and critical errors (CEs) of the task. Six experienced surgeons and eight novice urology surgeons then performed the robotic dissection task twice on the chicken model.

Results: In the Delphi meeting, 100% consensus was reached on five procedure steps, 15 errors, and two CEs. Novice surgeons took 20 min to complete the task on trial 1 and 14 min on trial 2, whereas expert surgeons took 8.2 min and 6.5 min, respectively. On average, the Expert Group completed the task 56% faster than the Novice Group and made 46% fewer performance errors. Sensitivity and specificity for procedure errors and time were excellent to good (1.0-0.91) but poor (0.5) for the step metrics. The mean interrater reliability for the assessments by two robotic surgeons was 0.91 (Expert Group = 0.92; Novice Group = 0.90).

Conclusions: We report evidence supporting face, content, and construct validity for a standardized and replicable basic robotic dissection task on the chicken model.

© 2022 Elsevier Inc. All rights reserved.
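
The abstract reports sensitivity and specificity of 1.0-0.91 for the time and error metrics and a mean interrater reliability of 0.91 between two assessors. As a rough illustration of how such figures can be derived, the following Python sketch classifies trials against a score cutoff and measures rater agreement. All data values, the midpoint cutoff rule, and the proportion-agreement measure are invented assumptions for illustration; they are not the authors' data or method.

```python
from statistics import mean

# Invented example scores (completion time in minutes per trial);
# the study's actual per-trial data are not reproduced here.
expert_times = [8.5, 7.9, 6.8, 6.1, 8.0, 6.4]
novice_times = [21.0, 19.5, 14.2, 13.8, 20.3, 15.0, 18.7, 14.5]

def sensitivity_specificity(experts, novices, cutoff):
    """Classify a trial as expert-level when its time falls below the cutoff.

    Sensitivity: fraction of expert trials correctly classified.
    Specificity: fraction of novice trials correctly classified.
    """
    sensitivity = sum(t < cutoff for t in experts) / len(experts)
    specificity = sum(t >= cutoff for t in novices) / len(novices)
    return sensitivity, specificity

# One simple (assumed) cutoff: the midpoint between the two group means.
cutoff = (mean(expert_times) + mean(novice_times)) / 2
sens, spec = sensitivity_specificity(expert_times, novice_times, cutoff)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")

# Interrater reliability as simple proportion agreement: two raters score
# the same performance against the error checklist (True = error observed),
# and IRR is the fraction of checklist items on which they agree.
rater_a = [True, False, True, True, False, False, True, False]
rater_b = [True, False, True, False, False, False, True, False]
irr = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"IRR = {irr:.2f}")
```

In practice, validation studies often report chance-corrected agreement (e.g., Cohen's kappa) or derive cutoffs from ROC analysis rather than a group-mean midpoint; the sketch above only shows the arithmetic behind the headline numbers.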
