Article

Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images

Journal

MEDICAL IMAGE ANALYSIS
Volume 85, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2023.102743

Keywords

Computational pathology; Unsupervised learning; Deep learning; WSI representation; Transformer

Abstract

Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation, which is cumbersome to obtain. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
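To give a rough intuition for the kind of pipeline the abstract describes (aggregating patch-level CNN features into a single, fixed-size WSI-level representation without supervision), the sketch below shows a generic prototype-based aggregation: patch features pooled across a dataset are clustered into prototypical patterns, and each WSI is then summarized by per-prototype pooling of its own patches. This is a minimal illustrative sketch only; the function names, the use of k-means, and the mean-pooling scheme are assumptions for illustration and not the paper's exact H2T procedure.

```python
# Illustrative sketch: prototype-based aggregation of patch features into a
# WSI-level vector. Not the authors' implementation; all names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans


def fit_prototypes(all_patch_features: np.ndarray, n_prototypes: int = 8) -> np.ndarray:
    """Cluster patch features pooled across many WSIs into prototypical patterns."""
    km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0)
    km.fit(all_patch_features)
    return km.cluster_centers_  # shape: (n_prototypes, feat_dim)


def wsi_representation(patch_features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Summarize one WSI by pooling its patch features against each prototype."""
    # Assign each patch to its nearest prototype.
    dists = np.linalg.norm(patch_features[:, None, :] - prototypes[None, :, :], axis=-1)
    assignment = dists.argmin(axis=1)  # shape: (n_patches,)

    # Average the features of the patches assigned to each prototype;
    # prototypes with no assigned patches contribute a zero vector.
    feat_dim = patch_features.shape[1]
    parts = []
    for k in range(prototypes.shape[0]):
        members = patch_features[assignment == k]
        parts.append(members.mean(axis=0) if len(members) else np.zeros(feat_dim))
    return np.concatenate(parts)  # shape: (n_prototypes * feat_dim,)


# Example with random stand-ins for CNN patch embeddings.
rng = np.random.default_rng(0)
dataset_feats = rng.normal(size=(2000, 512))  # patches pooled from many WSIs
protos = fit_prototypes(dataset_feats, n_prototypes=8)
one_wsi = rng.normal(size=(300, 512))         # patches from a single WSI
print(wsi_representation(one_wsi, protos).shape)  # (4096,)
```

The resulting fixed-size vector can then be fed to conventional downstream models (e.g. a linear classifier), which is one reason such handcrafted aggregation can be substantially cheaper than running a full Transformer over all patches.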

Authors

