4.6 Review

Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology

Journal

MODERN PATHOLOGY
Volume 37, Issue 1, Pages: -

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.modpat.2023.100350

Keywords

artificial intelligence; computational pathology; deep learning; histology/histopathology; reproducibility; reusability

Computational pathology research driven by deep learning faces challenges in reproducibility and reusability. A well-documented codebase, together with model robustness and generalizability, is crucial. So far, reuse of computational pathology algorithms has been limited, and their application in clinical settings is rarer still. This study evaluates 160 peer-reviewed articles and provides criteria for data and code availability and for statistical analysis of results.
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. For that, the codebase should be well-documented and easy to integrate into existing workflows, and models should be robust toward noise and generalizable toward data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half of them (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability. (c) 2023 United States & Canadian Academy of Pathology. Published by Elsevier Inc. All rights reserved.
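For reference, the rounded fractions quoted in the abstract ("one-quarter", "three-quarters", "half", "approximately a third") can be recomputed from the raw counts. The following minimal Python sketch uses the counts reported in the abstract; the dictionary labels and variable names are illustrative only:

# Recompute the proportions reported in the abstract from the raw counts.
counts = {
    "publications reviewed": 160,
    "code publicly available": 41,
    "results analyzed statistically": 30,
    "trained model weights released": 20,
    "independent evaluation cohort": 16,
}

total = counts["publications reviewed"]
with_code = counts["code publicly available"]

# Share of all reviewed publications that released code.
print(f"code available: {with_code}/{total} = {with_code / total:.1%}")

# The remaining shares are relative to the 41 code-releasing studies.
for label in ("results analyzed statistically",
              "trained model weights released",
              "independent evaluation cohort"):
    n = counts[label]
    print(f"{label}: {n}/{with_code} = {n / with_code:.1%}")

This prints 25.6%, 73.2%, 48.8%, and 39.0%, which is how the abstract arrives at "one-quarter", "three-quarters", "half", and "approximately a third", respectively.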

