Article

MG-Trans: Multi-Scale Graph Transformer With Information Bottleneck for Whole Slide Image Classification

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 42, Issue 12, Pages 3871-3883

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMI.2023.3313252

Keywords

Transformers; Feature extraction; Training; Task analysis; Pathology; Cancer; Redundancy; Whole slide image analysis; multiple instance learning; vision transformer; information bottleneck

Abstract

This study proposes a Multi-scale Graph Transformer (MG-Trans) with an information bottleneck for processing megapixel-sized whole slide images in digital pathology. MG-Trans overcomes the limitations of input redundancy and insufficient spatial-relation modeling through patch anchoring, dynamic structure information learning, and multi-scale information bottleneck modules, and introduces a semantic consistency loss to stabilize model training.
Multiple instance learning (MIL)-based methods have become the mainstream approach for processing megapixel-sized whole slide images (WSIs) with a pyramid structure in digital pathology. Current MIL-based methods usually crop a large number of patches from the WSI at the highest magnification, resulting in substantial redundancy in both the input and feature spaces. Moreover, the spatial relations between patches cannot be sufficiently modeled, which may weaken the model's discriminative ability on fine-grained features. To address these limitations, we propose a Multi-scale Graph Transformer (MG-Trans) with an information bottleneck for whole slide image classification. MG-Trans is composed of three modules: a patch anchoring module (PAM), a dynamic structure information learning module (SILM), and a multi-scale information bottleneck module (MIBM). Specifically, PAM utilizes the class attention map generated from the multi-head self-attention of a vision Transformer to identify and sample the informative patches. SILM explicitly introduces local tissue structure information into the Transformer block to sufficiently model the spatial relations between patches. MIBM effectively fuses the multi-scale patch features by applying the principle of the information bottleneck to generate a robust and compact bag-level representation. In addition, we propose a semantic consistency loss to stabilize the training of the whole model. Extensive studies on three subtyping datasets and seven gene mutation detection datasets demonstrate the superiority of MG-Trans.
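The abstract describes two mechanisms concretely enough to sketch: scoring patches by a class token's attention and keeping only the top-scoring ones (the PAM idea), and compressing fused multi-scale features through a stochastic bottleneck regularized toward a standard Gaussian (the MIBM idea). Below is a minimal PyTorch sketch of those two steps, not the authors' implementation: the class names, feature dimension, top-k value, mean pooling, and the Gaussian reparameterization are illustrative assumptions, and the graph-based SILM and the semantic consistency loss are omitted.

```python
# Hypothetical sketch (not the paper's code): class-attention patch anchoring
# plus a variational information-bottleneck fusion head for two WSI scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAnchoring(nn.Module):
    """Score patches by the class token's attention and keep the top-k."""
    def __init__(self, dim: int, num_heads: int = 8, top_k: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable class token
        self.top_k = top_k

    def forward(self, patches: torch.Tensor):
        # patches: (B, N, dim) bag of patch embeddings from one magnification
        B = patches.size(0)
        x = torch.cat([self.cls.expand(B, -1, -1), patches], dim=1)
        # attn_w: (B, N+1, N+1), averaged over heads
        _, attn_w = self.attn(x, x, x, need_weights=True,
                              average_attn_weights=True)
        cls_scores = attn_w[:, 0, 1:]            # class-token attention per patch
        k = min(self.top_k, patches.size(1))
        idx = cls_scores.topk(k, dim=1).indices  # most informative patches
        anchored = torch.gather(
            patches, 1, idx.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
        return anchored, idx

class IBFusion(nn.Module):
    """Compress concatenated multi-scale features into a stochastic bag code."""
    def __init__(self, dim: int, z_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.mu = nn.Linear(2 * dim, z_dim)
        self.logvar = nn.Linear(2 * dim, z_dim)
        self.head = nn.Linear(z_dim, num_classes)

    def forward(self, feat_low: torch.Tensor, feat_high: torch.Tensor):
        h = torch.cat([feat_low, feat_high], dim=-1)              # (B, 2*dim)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # KL(q(z|x) || N(0, I)): the bottleneck regularizer
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return self.head(z), kl

# Toy usage: 500 patches per scale, mean-pooled after anchoring.
pam, fuse = PatchAnchoring(dim=384), IBFusion(dim=384)
low = torch.randn(2, 500, 384)    # low-magnification patch features
high = torch.randn(2, 500, 384)   # high-magnification patch features
a_low, _ = pam(low)
a_high, _ = pam(high)
logits, kl = fuse(a_low.mean(1), a_high.mean(1))
loss = F.cross_entropy(logits, torch.tensor([0, 1])) + 1e-3 * kl
```

The KL term is what makes this a bottleneck: it penalizes the bag code for carrying more information about the input than the classifier needs, which is one route to the "robust and compact bag-level representation" the abstract describes.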
