Journal
HEALTH SERVICES AND OUTCOMES RESEARCH METHODOLOGY
Volume 11, Issue 3-4, Pages 145-163
Publisher
SPRINGER
DOI: 10.1007/s10742-011-0077-3
Keywords
Cohen's kappa; Inter-rater reliability; Meta-analysis; Generalizability
Abstract
Cohen's kappa is the most important and most widely accepted measure of inter-rater reliability when the outcome of interest is measured on a nominal scale. Estimates of Cohen's kappa usually vary from one study to another due to differences in study settings, test properties, rater characteristics and subject characteristics. This study proposes a formal statistical framework for meta-analysis of Cohen's kappa to describe the typical inter-rater reliability estimate across multiple studies, to quantify between-study variation and to evaluate the contribution of moderators to heterogeneity. To demonstrate the application of the proposed statistical framework, a meta-analysis of Cohen's kappa is conducted for pressure ulcer classification systems. Implications and directions for future research are discussed.
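The abstract does not spell out the pooling model, but a standard way to combine per-study kappa estimates while quantifying between-study variation is a random-effects meta-analysis. The sketch below uses the DerSimonian-Laird estimator with invented, illustrative study data (the kappa values and standard errors are NOT from the paper); it is one plausible reading of the framework described, not the authors' exact method.

```python
import math

# Hypothetical per-study data: (kappa estimate, standard error).
# These numbers are illustrative only, not taken from the paper.
studies = [(0.62, 0.05), (0.71, 0.08), (0.55, 0.06), (0.80, 0.04)]

def dersimonian_laird(estimates):
    """Random-effects pooling via the DerSimonian-Laird estimator.

    Returns the pooled estimate, its standard error, and tau^2,
    the estimated between-study variance.
    """
    k = len(estimates)
    y = [est for est, _ in estimates]
    w = [1.0 / se**2 for _, se in estimates]      # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q statistic measures observed heterogeneity across studies.
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance, truncated at 0
    # Random-effects weights add tau^2 to each study's sampling variance.
    w_re = [1.0 / (se**2 + tau2) for _, se in estimates]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, se_pooled, tau2

pooled, se, tau2 = dersimonian_laird(studies)
print(f"pooled kappa = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```

Moderator effects (e.g. classification system or rater experience) would extend this to a meta-regression, replacing the single pooled mean with study-level covariates.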