Article

Measures of Agreement and Concordance With Clinical Research Applications

Journal

Statistics in Biopharmaceutical Research
Volume 3, Issue 2, Pages 185-209

Publisher

American Statistical Association
DOI: 10.1198/sbr.2011.10019

Keywords

Interrater bias; Intraclass correlation; Kappa statistics; Pairwise disagreement; Reliability

Funding

  1. Korean government

Abstract

This article reviews measures of interrater agreement, including the complementary roles of tests for interrater bias and estimates of kappa statistics and intraclass correlation coefficients (ICCs), following the developments outlined by Landis and Koch (1977a; 1977b; 1977c). Category-specific measures of reliability, together with pairwise measures of disagreement among categories, are extended to accommodate multistage research designs involving unbalanced data. The covariance structure of these category-specific agreement and pairwise disagreement coefficients is summarized for use in modeling and hypothesis testing. These agreement/disagreement measures of intraclass/interclass correlation are then estimated within specialized software and illustrated for several clinical research applications. Further consideration is also given to measures of agreement for continuous data, namely the concordance correlation coefficient (CCC) developed originally by Lin (1989). An extension to this CCC was published by King and Chinchilli (2001b), yielding a generalized concordance correlation coefficient which is appropriate for both continuous and categorical data. This coefficient is reviewed and its use illustrated with clinical research data. Additional extensions to this CCC methodology for longitudinal studies are also summarized.
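To connect the abstract's quantities to their defining formulas, here is a minimal Python sketch computing Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), and Lin's (1989) concordance correlation coefficient, rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2). This is an illustration only, not the article's software and not the generalized CCC of King and Chinchilli (2001b); the function names and toy data are our own.

    import numpy as np

    def cohen_kappa(r1, r2):
        """Cohen's kappa for two raters: chance-corrected agreement,
        kappa = (p_o - p_e) / (1 - p_e)."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_o = np.mean(r1 == r2)  # observed proportion of agreement
        # chance-expected agreement from the two raters' marginal rates
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)
                  for c in np.union1d(r1, r2))
        return (p_o - p_e) / (1.0 - p_e)

    def lin_ccc(x, y):
        """Lin's (1989) CCC for paired continuous measurements:
        rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        s_xy = np.cov(x, y, bias=True)[0, 1]  # population-style covariance
        return 2.0 * s_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    # Toy data: categorical calls by two raters, then paired continuous readings
    print(cohen_kappa([1, 2, 2, 3, 1, 2], [1, 2, 3, 3, 1, 1]))
    print(lin_ccc([10.1, 12.3, 9.8, 11.5, 10.7], [10.4, 12.0, 10.1, 11.9, 10.5]))

The same kappa is available off the shelf as sklearn.metrics.cohen_kappa_score; the hand-rolled version above is meant only to make the defining formulas visible.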
