Article

Sequentially Determined Measures of Interobserver Agreement (Kappa) in Clinical Trials May Vary Independent of Changes in Observer Performance

Journal

Therapeutic Innovation & Regulatory Science

Publisher

SPRINGER HEIDELBERG
DOI: 10.1177/2168479019874059

Keywords

clinical trials; interobserver agreement; Cohen kappa; repeated measures; biased estimator; simulation; central reading


Abstract

Background: Cohen's kappa is a statistic that estimates interobserver agreement. It was originally introduced to aid the development of diagnostic tests, where the interpretative readings of two observers, for example of a mammogram or other imaging study, were compared at a single point in time. Kappa is known to depend on the prevalence of disease, so kappa values from different settings are hard to compare.

Methods: Using simulation, we examine an analogous situation, not previously described, that occurs in clinical trials in which sequential measurements are obtained to evaluate disease progression or clinical improvement over time.

Results: We show that weighted kappa, used for multilevel outcomes, changes over the course of the trial even when the performance of the observers is held constant.

Conclusions: Kappa and closely related measures can therefore be used only with great difficulty, if at all, for quality assurance in clinical trials.
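The effect described in the Results is straightforward to reproduce. The sketch below is not the authors' simulation; the 4-level ordinal scale, the adjacent-category error model with 80% accuracy, and the visit-by-visit prevalences are all illustrative assumptions. Two simulated observers rate the same subjects with an identical, fixed error model at three visits, yet quadratic-weighted kappa differs by visit because the marginal distribution of the outcome shifts as patients improve.

import numpy as np

rng = np.random.default_rng(0)
K = 4  # hypothetical 4-level ordinal outcome, levels 0..3

def rate(truth, acc=0.8):
    """Simulate one observer with constant performance: the true level is
    reported with probability `acc`, otherwise an adjacent level."""
    miss = rng.random(truth.size) > acc
    step = rng.choice([-1, 1], size=truth.size)
    rated = truth.copy()
    rated[miss] = np.clip(truth[miss] + step[miss], 0, K - 1)
    return rated

def weighted_kappa(a, b, k=K):
    """Quadratic-weighted Cohen kappa: 1 - sum(w*O) / sum(w*E), where O is
    the observed joint distribution of ratings, E is the product of its
    marginals, and w_ij = (i - j)^2 / (k - 1)^2 are disagreement weights."""
    obs = np.zeros((k, k))
    np.add.at(obs, (a, b), 1)
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    w = np.subtract.outer(np.arange(k), np.arange(k)) ** 2 / (k - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

n = 20000  # subjects per visit
# The category distribution shifts across visits as patients improve;
# the observers' error model never changes.
visits = {
    "baseline": [0.10, 0.20, 0.30, 0.40],
    "midpoint": [0.25, 0.35, 0.25, 0.15],
    "final":    [0.60, 0.25, 0.10, 0.05],
}
for name, prevalence in visits.items():
    truth = rng.choice(K, size=n, p=prevalence)
    kappa = weighted_kappa(rate(truth), rate(truth))
    print(f"{name:>8}: weighted kappa = {kappa:.3f}")

Under these assumed prevalences, the printed kappas drift downward as the outcome distribution concentrates in one category, even though the rating function never changes. This illustrates the paper's point: a falling kappa mid-trial need not signal deteriorating reader performance.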
