Article

Interrater reliability of measurements of comorbid illness should be reported

Journal

Journal of Clinical Epidemiology
Volume 59, Issue 9, Pages 926-933

Publisher

Elsevier Science Inc.
DOI: 10.1016/j.jclinepi.2006.02.006

Keywords

comorbidity; comorbidity index; reliability; clinical measurement; oncology

Abstract

Objective: Comorbidity indices are commonly used to stratify patients to control for treatment selection bias. The objectives were to review how interrater reliability is reported in clinical research publications that use comorbidity indices, and to report the interrater reliability of four common indices in a particular research setting.

Study Design and Setting: Four trained abstractors reviewed the same 40 charts of patients with squamous cell carcinoma of the head and neck from a regional cancer center. Scores for the Charlson Index, the Index of Co-existent Disease, the Cumulative Illness Rating Scale, and the Kaplan-Feinstein Classification were calculated, and the intraclass correlation coefficient was used to assess interrater reliability.

Results: Details on abstractor training and the results of interrater reliability testing are not commonly reported. In our study setting, the Charlson Index had excellent interrater reliability, and the other three indices had acceptable reliability.

Conclusion: If the quality of a study that uses an index or scale is to be assessed, the reliability of the score assignment process, including interrater reliability, should be reported. © 2006 Elsevier Inc. All rights reserved.
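
The abstract's statistic of choice, the intraclass correlation coefficient, can be computed directly from a charts-by-raters score matrix. Below is a minimal sketch of one common variant, the two-way random-effects ICC for absolute agreement, ICC(2,1); the function name icc2_1, the simulated scores, and the choice of ICC variant are illustrative assumptions, since the paper's exact ICC model, software, and data are not reproduced here.

    # Sketch of an interrater-reliability computation in the spirit of the
    # abstract: ICC(2,1) over a (charts x abstractors) score matrix.
    # All data below are hypothetical, not the paper's actual scores.
    import numpy as np

    def icc2_1(scores: np.ndarray) -> float:
        """ICC(2,1), two-way random effects, absolute agreement.
        scores has shape (n_subjects, k_raters)."""
        n, k = scores.shape
        grand_mean = scores.mean()
        row_means = scores.mean(axis=1)   # one mean per chart (subject)
        col_means = scores.mean(axis=0)   # one mean per abstractor (rater)

        # Two-way ANOVA sums of squares and mean squares
        ss_rows = k * ((row_means - grand_mean) ** 2).sum()
        ss_cols = n * ((col_means - grand_mean) ** 2).sum()
        ss_total = ((scores - grand_mean) ** 2).sum()
        ss_error = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))

        # Shrout & Fleiss ICC(2,1)
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Hypothetical example mirroring the study design: 40 charts, 4 abstractors
    rng = np.random.default_rng(0)
    latent = rng.integers(0, 6, size=(40, 1))              # underlying comorbidity burden
    ratings = latent + rng.integers(-1, 2, size=(40, 4))   # per-rater noise
    print(f"ICC(2,1) = {icc2_1(ratings.astype(float)):.2f}")

ICC(2,1) is used here because it treats both the charts and the raters as random samples, which matches a design where any trained abstractor might have scored any chart; a one-way or consistency-only ICC would answer a different question.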
