Article

When coders are reliable: The application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES

Journal

PATIENT EDUCATION AND COUNSELING
Volume 82, Issue 3, Pages 341-345

Publisher

ELSEVIER IRELAND LTD
DOI: 10.1016/j.pec.2011.01.004

Keywords

Inter-rater study; Kappa; Intraclass correlation coefficient; Sensitivity and specificity; VR-CoDES

Abstract

Objective: To investigate whether different measures of inter-rater reliability compute similar estimates with the nominal data commonly encountered in communication studies, and to make recommendations on how reliability should be computed and described for communication coding instruments.

Methods: The raw data from an inter-rater study with three coders were analysed with Cohen's kappa, sensitivity and specificity measures, Fleiss's multirater kappa, and an intraclass correlation coefficient (ICC).

Results: Minor differences were found between Cohen's kappa and an ICC model across paired data (largest margin = 0.01). There were negligible differences between the multirater estimates, e.g. the multirater kappa (0.52) and the ICC (0.53). Sensitivity analyses were in general agreement with the multirater estimates.

Conclusion: For nominal data with more than two raters, it is more practical to analyse inter-rater studies with an appropriate ICC model; little difference exists between Cohen's kappa and an ICC.

Practice implications: Alternatives to Cohen's kappa are readily available, but researchers need to be aware of the different ICC definitions, and the ICC model used should be fully described in reports. Investigators are encouraged to supply confidence limits with inter-rater data and to revisit guidance regarding the relative strengths of agreement of reliability coefficients.

© 2011 Elsevier Ireland Ltd. All rights reserved.
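
For illustration, the two kappa-type measures named in the abstract are straightforward to compute with standard Python libraries. The sketch below uses sklearn's cohen_kappa_score for the pairwise estimates and statsmodels' fleiss_kappa for the multirater estimate; the three coder arrays are invented nominal codes standing in for VR-CoDES categories, not data from this study.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical nominal codes from three coders over ten utterances
    # (illustrative stand-ins for VR-CoDES categories).
    coder_a = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
    coder_b = [0, 1, 2, 2, 0, 1, 2, 1, 0, 1]
    coder_c = [0, 1, 1, 2, 0, 0, 2, 2, 0, 1]

    # Pairwise Cohen's kappa, one estimate per coder pair.
    pairs = {"a-b": (coder_a, coder_b),
             "a-c": (coder_a, coder_c),
             "b-c": (coder_b, coder_c)}
    for name, (x, y) in pairs.items():
        print(name, round(cohen_kappa_score(x, y), 2))

    # Fleiss's multirater kappa: aggregate_raters converts the
    # items-by-raters matrix into items-by-category counts.
    ratings = np.column_stack([coder_a, coder_b, coder_c])
    counts, _ = aggregate_raters(ratings)
    print("fleiss", round(fleiss_kappa(counts), 2))

For the ICC, a dedicated routine such as pingouin's intraclass_corr reports the common ICC definitions together with confidence limits, which fits the abstract's recommendation to state the ICC model used and to supply confidence intervals.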
