Article

Automated ASPECT scoring in acute ischemic stroke: comparison of three software tools

Journal

NEURORADIOLOGY
Volume 62, Issue 10, Pages 1231-1238

Publisher

SPRINGER
DOI: 10.1007/s00234-020-02439-3

Keywords

ASPECTS; Software analysis; Diagnostics; Acute ischemic stroke

Abstract

Purpose: Various software applications offer support in the diagnosis of acute ischemic stroke (AIS), yet it remains unclear whether the performance of these tools is comparable. Our study aimed to evaluate three fully automated software applications for Alberta Stroke Program Early CT (ASPECT) scoring (Syngo.via Frontier ASPECT Score Prototype V2, Brainomix e-ASPECTS®, and RAPID ASPECTS) in AIS patients.

Methods: Retrospectively, 131 patients with large vessel occlusion (LVO) of the middle cerebral artery or the internal carotid artery who underwent endovascular therapy (EVT) were included. Pre-interventional non-enhanced CT (NECT) datasets were assessed in random order by the automated ASPECT software and by three experienced neuroradiologists in consensus. Intraclass correlation coefficient (ICC), Bland-Altman, and receiver operating characteristic (ROC) analyses were applied for statistical analysis.

Results: Median ASPECTS of the expert consensus reading was 8 (7-10). The highest correlation was between the expert read and Brainomix (r = 0.871 (0.818, 0.909), p < 0.001). Correlations between the expert read and Frontier V2 (r = 0.801 (0.719, 0.859), p < 0.001) and between the expert read and RAPID (r = 0.777 (0.568, 0.871), p < 0.001) were also high. Correlation among the software tools was likewise high (Frontier V2 and Brainomix: r = 0.830 (0.760, 0.880), p < 0.001; Frontier V2 and RAPID: r = 0.847 (0.693, 0.913), p < 0.001; Brainomix and RAPID: r = 0.835 (0.512, 0.923), p < 0.001). ROC curve analysis revealed comparable accuracy between the applications and the expert consensus reading (Brainomix: AUC = 0.759 (0.670-0.848), p < 0.001; Frontier V2: AUC = 0.752 (0.660-0.843), p < 0.001; RAPID: AUC = 0.734 (0.634-0.831), p < 0.001).

Conclusion: Overall, there is a convincing, though still improvable, degree of agreement between current ASPECT software tools and expert evaluation with regard to ASPECT assessment in AIS.
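
The abstract reports agreement statistics (ICC, Bland-Altman limits of agreement, ROC AUC) between the expert consensus read and each automated tool. The snippet below is a minimal sketch of how such an analysis could be set up in Python; it is not the authors' code, the paired ASPECTS values are randomly generated placeholders, and the dichotomization threshold (expert ASPECTS <= 7) is an assumption made here purely for illustration.

```python
# Illustrative sketch only: agreement statistics between an expert-consensus
# ASPECTS read and one automated tool, on hypothetical (randomly generated) data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical paired ASPECTS (0-10) for 131 patients: expert consensus vs. software.
expert = rng.integers(5, 11, size=131)
software = np.clip(expert + rng.integers(-2, 3, size=131), 0, 10)

def icc2_1(x, y):
    """Two-way random, single-measure ICC(2,1) via the ANOVA decomposition."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between raters
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = software - expert
bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)

# ROC AUC for detecting extensive early ischemic change, here assumed to mean
# expert ASPECTS <= 7; lower software scores count as higher "positive" evidence.
auc = roc_auc_score(expert <= 7, -software)

print(f"ICC(2,1) = {icc2_1(expert, software):.3f}")
print(f"Bland-Altman bias = {bias:.2f}, LoA = ({bias - half_width:.2f}, {bias + half_width:.2f})")
print(f"ROC AUC = {auc:.3f}")
```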
