Article

Equivalences of weighted kappas for multiple raters

Journal

Statistical Methodology
Volume 9, Issue 3, Pages 407-422

Publisher

Elsevier Science BV
DOI: 10.1016/j.stamet.2011.11.001

Keywords

Inter-rater reliability; Ordinal agreement; g-agreement; Multiple raters; Cohen's kappa; Cohen's weighted kappa; Hubert's kappa; Mielke, Berry and Johnston's weighted kappa

Funding

  1. Netherlands Organisation for Scientific Research [451-11-026]

Abstract

Cohen's unweighted kappa and weighted kappa are popular descriptive statistics for measuring agreement between two raters on a categorical scale. With m >= 3 raters, there are several views in the literature on how to define agreement. We consider a family of weighted kappas for multiple raters using the concept of g-agreement (g = 2, 3, ..., m), which refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category. Given m raters, we may formulate m - 1 weighted kappas in this family, one for each type of g-agreement. We show that the m - 1 weighted kappas coincide if we use the weighting scheme proposed by Mielke et al. (2007) [31]. (C) 2011 Elsevier B.V. All rights reserved.
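As a concrete illustration of the quantities involved, the Python sketch below computes standard Cohen's weighted kappa for two raters (with linear or quadratic agreement weights) and checks the g-agreement criterion for a single object rated by m raters. It is a minimal sketch under those standard textbook definitions, not an implementation of the paper's m - 1 multi-rater weighted kappas or of the Mielke, Berry and Johnston weighting scheme; the function names weighted_kappa and has_g_agreement are illustrative only.

```python
import numpy as np
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, n_categories, weights="linear"):
    """Cohen's weighted kappa for two raters on an ordinal scale.

    ratings_a, ratings_b: integer category labels 0..n_categories-1, one per object.
    weights: "linear" or "quadratic" agreement weights (1 on the diagonal).
    """
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    n = len(a)

    # Joint proportion table p[i, j] and its marginals.
    p = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        p[i, j] += 1.0 / n
    row, col = p.sum(axis=1), p.sum(axis=0)

    # Agreement weights: 1 on the diagonal, decreasing with |i - j|.
    i_idx, j_idx = np.indices((n_categories, n_categories))
    d = np.abs(i_idx - j_idx) / (n_categories - 1)
    w = 1.0 - d if weights == "linear" else 1.0 - d ** 2

    p_obs = np.sum(w * p)                    # observed weighted agreement
    p_exp = np.sum(w * np.outer(row, col))   # chance-expected weighted agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

def has_g_agreement(object_ratings, g):
    """True if at least g of the m raters assigned this object to the same category."""
    return max(Counter(object_ratings).values()) >= g

# Two raters, five objects, four ordinal categories (0..3).
print(weighted_kappa([0, 1, 2, 3, 1], [0, 2, 2, 3, 1], n_categories=4))

# One object rated by m = 4 raters: do at least g = 3 of them agree?
print(has_g_agreement([2, 2, 1, 2], g=3))  # True
```

With 0/1 (identity) agreement weights, the same formula reduces to Cohen's unweighted kappa for two raters.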
