Journal
SENSORS
Volume 19, Issue 13
Publisher
MDPI
DOI: 10.3390/s19132969
Keywords
explainable AI; deep learning; medical data; lymph node metastases
Funding
- National Council for Scientific and Technological Development (CNPq)
- Foundation for Research Support of the State of Rio de Janeiro (FAPERJ)
Abstract
An application of explainable artificial intelligence to medical data is presented. There is increasing demand in the machine learning literature for explainable models in health-related applications. This work aims to explain how a Convolutional Neural Network (CNN) detects tumor tissue in patches extracted from histology whole-slide images, using the Local Interpretable Model-Agnostic Explanations (LIME) methodology. Two publicly available convolutional neural networks trained on the PatchCamelyon benchmark are analyzed. Three common segmentation algorithms are compared for superpixel generation, and a fourth, simpler, parameter-free segmentation algorithm is proposed. The main characteristics of the explanations are discussed, as well as the key patterns identified in true-positive predictions. The results are compared with medical annotations and the literature, and suggest that the CNN predictions follow at least some aspects of human expert knowledge.
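The LIME procedure summarized in the abstract can be sketched in a few lines: segment the image into superpixels, randomly switch superpixels on and off, query the model on each perturbed image, and fit a distance-weighted linear model whose coefficients rank superpixel importance. The sketch below is a minimal, self-contained illustration of that idea; the square-grid segmentation, the stand-in `model_predict` scoring function, and all kernel parameters are assumptions for demonstration, not the paper's actual CNN, segmentation algorithms, or LIME configuration.

```python
import numpy as np

# Hypothetical stand-in for a trained CNN's tumor-probability output:
# here it simply scores the mean intensity of each grayscale patch.
def model_predict(batch):
    return batch.mean(axis=(1, 2))

def lime_image_sketch(image, n_side=4, n_samples=200, seed=0):
    """Minimal LIME-style explanation over a square-grid segmentation.

    Each grid cell acts as one superpixel; cells are randomly switched
    on/off, the model is queried on the perturbed images, and a weighted
    linear model is fit whose coefficients rank superpixel importance.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    sh, sw = h // n_side, w // n_side
    n_seg = n_side ** 2

    # Binary on/off masks, one row per perturbed sample.
    z = rng.integers(0, 2, size=(n_samples, n_seg))
    z[0] = 1  # keep the unperturbed image as the first sample

    perturbed = np.empty((n_samples, h, w))
    for i in range(n_samples):
        img = image.copy()
        for s in range(n_seg):
            if z[i, s] == 0:
                r, c = divmod(s, n_side)
                img[r*sh:(r+1)*sh, c*sw:(c+1)*sw] = 0.0  # blank cell
        perturbed[i] = img

    y = model_predict(perturbed)

    # Exponential kernel on the fraction of superpixels turned off,
    # so samples close to the original image weigh more.
    dist = 1.0 - z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)

    # Weighted least squares: coefficient per superpixel = importance.
    X = np.hstack([np.ones((n_samples, 1)), z])
    W = np.diag(weights)
    coef = np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]
    return coef[1:]  # drop the intercept

importances = lime_image_sketch(np.ones((32, 32)))
```

With the toy mean-intensity model, each of the 16 cells contributes equally to the score, so all importances come out near 1/16; with a real CNN, the largest coefficients would highlight the superpixels driving the tumor prediction.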