4.6 Review

Assessments of Physicians' Electrocardiogram Interpretation Skill: A Systematic Review

Journal

ACADEMIC MEDICINE
Volume 97, Issue 4, Pages 603-615

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.1097/ACM.0000000000004140

Funding

  1. U.S. Department of Defense Medical Simulation and Information Sciences Research Program [W81XWH-16-1-0797]

The study identified features of instruments, test procedures, study design, and validity evidence in published studies of electrocardiogram (ECG) skill assessments. Results showed that ECG interpretation skill assessments consist of idiosyncratic instruments with limited interpretability. Best practices were suggested to improve the validity of assessments.
Purpose
To identify features of instruments, test procedures, study design, and validity evidence in published studies of electrocardiogram (ECG) skill assessments.

Method
The authors conducted a systematic review, searching the MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, CINAHL, ERIC, and Web of Science databases in February 2020 for studies that assessed the ECG interpretation skill of physicians or medical students. Two authors independently screened articles for inclusion and extracted information on test features, study design, risk of bias, and validity evidence.

Results
The authors found 85 eligible studies. Participants included medical students (42 studies), postgraduate physicians (48 studies), and practicing physicians (13 studies). ECG selection criteria were infrequently reported: 25 studies (29%) selected single-diagnosis or straightforward ECGs; 5 (6%) selected complex cases. ECGs were selected by generalists (15 studies [18%]), cardiologists (10 studies [12%]), or unspecified experts (4 studies [5%]). The median number of ECGs per test was 10. The scoring rubric was defined by 2 or more experts in 32 studies (38%), by 1 expert in 5 (6%), and using clinical data in 5 (6%). Scoring was performed by a human rater in 34 studies (40%) and by computer in 7 (8%). Study methods were appraised as having low risk of selection bias in 16 studies (19%), of participant flow bias in 59 (69%), of instrument conduct and scoring bias in 20 (24%), and of applicability problems in 56 (66%). Evidence of test score validity was reported infrequently, namely evidence of content (39 studies [46%]), internal structure (11 [13%]), relations with other variables (10 [12%]), response process (2 [2%]), and consequences (3 [4%]).

Conclusions
ECG interpretation skill assessments consist of idiosyncratic instruments that are too short, composed of items of obscure provenance, with incompletely specified answers, graded by individuals with underreported credentials, yielding scores with limited interpretability. The authors suggest several best practices.
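
The Results pair each study count with a percentage of the 85 eligible studies. As a quick plausibility check, the short Python sketch below (illustrative, not from the paper) recomputes a few of the reported figures, assuming each percentage is simply count / 85 rounded to the nearest whole percent:

    # Minimal sketch (assumption: each reported percentage is
    # count / 85 eligible studies, rounded to the nearest whole percent)
    counts = {
        "single-diagnosis or straightforward ECGs": 25,  # reported 29%
        "complex cases": 5,                              # reported 6%
        "content validity evidence": 39,                 # reported 46%
        "internal structure evidence": 11,               # reported 13%
    }
    TOTAL_STUDIES = 85

    for label, n in counts.items():
        pct = round(100 * n / TOTAL_STUDIES)
        print(f"{label}: {n}/{TOTAL_STUDIES} = {pct}%")

Under this rounding rule the sketch reproduces the quoted values (25/85 = 29%, 5/85 = 6%, 39/85 = 46%, 11/85 = 13%), and the remaining percentages in the abstract are consistent with the same count / 85 calculation.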
