4.3 Article

Interobserver Reliability of the Coronary Artery Disease Reporting and Data System in Clinical Practice

Journal

JOURNAL OF THORACIC IMAGING
Volume 36, Issue 2, Pages 95-101

Publisher

LIPPINCOTT WILLIAMS & WILKINS
DOI: 10.1097/RTI.0000000000000503

Keywords

Coronary Artery Disease Reporting and Data System; coronary artery disease; coronary computed tomography angiography; report standardization terminology; reliability


This study evaluated interobserver reproducibility among cardiothoracic radiologists using the Coronary Artery Disease Reporting and Data System (CAD-RADS) to describe atherosclerotic burden on coronary CT angiography. Results showed moderate to good interobserver reproducibility, with higher agreement among fellows than among attending radiologists, and a slight decrease in agreement for clinical management categories.
Purpose: This study aimed to evaluate interobserver reproducibility between cardiothoracic radiologists applying the Coronary Artery Disease Reporting and Data System (CAD-RADS) to describe atherosclerotic burden on coronary computed tomography angiography.

Methods: Forty clinical computed tomography angiography cases were retrospectively and independently evaluated by 3 attending and 2 fellowship-trained cardiothoracic radiologists using the CAD-RADS lexicon. Radiologists were blinded to patient history and underwent initial training using a practice set of 10 subjects. Interobserver reproducibility was assessed using an intraclass correlation (ICC) on the basis of single-observer scores, absolute agreement, and a 2-way random-effects model. Nondiagnostic studies were excluded. ICC was also performed for CAD-RADS scores grouped by management recommendations for absent (0), nonobstructive (1 to 2), and potentially obstructive (3 to 5) CAD.

Results: Interobserver reproducibility was moderate to good (ICC: 0.748, 95% confidence interval [CI]: 0.639-0.842, P<0.0001), with higher agreement among cardiothoracic radiology fellows (ICC: 0.853, 95% CI: 0.730-0.922, P<0.0001) than attending radiologists (ICC: 0.711, 95% CI: 0.568-0.824, P<0.0001). Interobserver reproducibility for clinical management categories was marginally decreased (ICC: 0.692, 95% CI: 0.570-0.802, P<0.0001). The average percent agreement between pairs of radiologists was 84.74%. Percent observer agreement was significantly reduced in the presence (M=62.22%, SD=15.17%) versus the absence (M=80.91%, SD=17.97%) of modifiers, t(37.95)=3.566, P=0.001.

Conclusions: Interobserver reliability and agreement with the CAD-RADS terminology are moderate to good in clinical practice. However, further investigations are needed to characterize the causes of interobserver disagreement that may lead to differences in management recommendations.
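As a minimal illustrative sketch (not the authors' code), the Python snippet below shows how the reliability metrics named in the abstract could be computed: a two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1)) via pingouin, pairwise percent agreement between readers, and a Welch t-test comparing agreement with versus without modifiers. All column names, reader labels, and data values are hypothetical.

    import numpy as np
    import pandas as pd
    import pingouin as pg
    from scipy import stats
    from itertools import combinations

    # Hypothetical long-format CAD-RADS ratings: one row per (case, reader) pair.
    cases = np.repeat(np.arange(1, 11), 3)
    readers = ["A", "B", "C"] * 10
    scores = [2, 2, 3, 0, 0, 0, 4, 3, 4, 1, 1, 2, 5, 5, 4,
              3, 3, 3, 0, 1, 0, 2, 2, 2, 4, 4, 5, 1, 0, 1]
    df = pd.DataFrame({"case_id": cases, "reader": readers, "cadrads": scores})

    # ICC(2,1): two-way random-effects, absolute agreement, single-observer scores,
    # matching the model described in the abstract.
    icc = pg.intraclass_corr(data=df, targets="case_id", raters="reader", ratings="cadrads")
    print(icc.loc[icc["Type"] == "ICC2", ["ICC", "CI95%", "pval"]])

    # Pairwise percent agreement between readers.
    wide = df.pivot(index="case_id", columns="reader", values="cadrads")
    for r1, r2 in combinations(wide.columns, 2):
        pct = (wide[r1] == wide[r2]).mean() * 100
        print(f"{r1} vs {r2}: {pct:.1f}% agreement")

    # Welch t-test (unequal variances, as implied by the non-integer df reported),
    # comparing hypothetical per-case percent agreement with vs. without modifiers.
    with_mod = [60.0, 40.0, 80.0, 60.0]
    without_mod = [100.0, 80.0, 80.0, 100.0]
    t, p = stats.ttest_ind(with_mod, without_mod, equal_var=False)
    print(f"Welch t = {t:.2f}, p = {p:.3f}")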

