Journal
PHARMACEUTICAL STATISTICS
Volume 14, Issue 1, Pages 74-78
Publisher
WILEY-BLACKWELL
DOI: 10.1002/pst.1659
Keywords
kappa statistic; concordance; agreement; PABAK
Funding
- National Institute for Health Research [RMFI-2013-04-011]
- National Institute for Health Research [NIHR-RMFI-2013-04-011-101] Funding Source: researchfish
Abstract
It is often of interest to measure agreement between several raters when an outcome is nominal or ordinal, and the kappa statistic is commonly used for this purpose. However, the statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Complementary statistics, such as the proportion of concordance, the maximum attainable kappa, and the prevalence and bias adjusted kappa (PABAK), should be considered to indicate how well the kappa statistic represents agreement in the data. Each statistic should be interpreted in the context of the data being analysed. Copyright (c) 2014 John Wiley & Sons, Ltd.
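The measures named in the abstract follow standard formulas for the two-rater, binary-outcome case: observed agreement p_o, Cohen's kappa = (p_o - p_e)/(1 - p_e) with chance agreement p_e from the marginals, maximum attainable kappa from the maximum agreement compatible with the fixed marginals, and PABAK = 2*p_o - 1. The sketch below (not code from the paper; the 2x2 table is an illustrative assumption) computes all four from a contingency table:

```python
def agreement_measures(table):
    """Agreement statistics for two raters on a binary outcome.

    table = [[a, b], [c, d]]: rows are rater 1's ratings, columns
    are rater 2's, so a and d are the concordant cells.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    p_o = (a + d) / n                          # observed agreement (concordance)
    r1_pos, r2_pos = (a + b) / n, (a + c) / n  # marginal "positive" proportions
    # chance agreement expected from the marginals alone
    p_e = r1_pos * r2_pos + (1 - r1_pos) * (1 - r2_pos)
    kappa = (p_o - p_e) / (1 - p_e)            # Cohen's kappa
    # highest agreement achievable given the observed marginal totals
    p_max = min(r1_pos, r2_pos) + min(1 - r1_pos, 1 - r2_pos)
    kappa_max = (p_max - p_e) / (1 - p_e)      # maximum attainable kappa
    pabak = 2 * p_o - 1                        # prevalence and bias adjusted kappa
    return p_o, kappa, kappa_max, pabak

# Hypothetical example: 100 subjects, 85 concordant ratings.
p_o, kappa, kappa_max, pabak = agreement_measures([[40, 9], [6, 45]])
```

Comparing kappa with kappa_max and PABAK in this way shows how much of any shortfall in kappa is attributable to the marginal distributions rather than to genuine disagreement.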