Article

Deep multimodal graph-based network for survival prediction from highly multiplexed images and patient variables

Journal

Computers in Biology and Medicine
Volume 154

Publisher

Pergamon-Elsevier Science Ltd
DOI: 10.1016/j.compbiomed.2023.106576

Keywords

Deep learning; Graph neural network (GNN); Imaging mass cytometry (IMC); Multimodal; Survival analysis


The spatial architecture and phenotypic heterogeneity of tumour cells are associated with cancer prognosis and outcomes. Imaging mass cytometry captures high-dimensional maps of disease-relevant biomarkers at single-cell resolution, which can inform patient-specific prognosis. However, existing methods for survival prediction do not utilise spatial phenotype information at the single-cell level, and there is a lack of end-to-end methods that integrate imaging data with clinical information for improved accuracy. We propose a deep multimodal graph-based network that considers spatial phenotype information and clinical variables to enhance survival prediction, and demonstrate its effectiveness on breast cancer datasets.
The spatial architecture of the tumour microenvironment and the phenotypic heterogeneity of tumour cells have been shown to be associated with cancer prognosis and clinical outcomes, including survival. Recent advances in highly multiplexed imaging, including imaging mass cytometry (IMC), capture spatially resolved, high-dimensional maps that quantify dozens of disease-relevant biomarkers at single-cell resolution, with the potential to inform patient-specific prognosis. However, existing automated methods for predicting survival typically do not leverage spatial phenotype information captured at the single-cell level. Furthermore, there is no end-to-end method designed to exploit the rich information in whole IMC images and all marker channels, and to aggregate this information with clinical data in a complementary manner to predict survival with enhanced accuracy. To that end, we present a deep multimodal graph-based network (DMGN) with two modules: (1) a multimodal graph-based module that adaptively considers relationships between spatial phenotype information in all image regions and all clinical variables, and (2) a clinical embedding module that automatically generates embeddings specialised for each clinical variable to enhance multimodal aggregation. We demonstrate that our modules are consistently effective at improving survival prediction performance on two public breast cancer datasets, and that our new approach can outperform state-of-the-art methods in survival prediction.
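To give a concrete picture of how such a multimodal graph-based survival model might be wired together, the sketch below is a minimal, illustrative PyTorch example, not the authors' DMGN implementation. It assumes one cell graph per IMC image (node features are per-cell marker intensities, adjacency derived from spatial proximity), one learned embedding table per categorical clinical variable, simple concatenation for multimodal fusion, and training with a Cox partial-likelihood objective; all class names, layer sizes, and the fusion scheme are assumptions for illustration.

```python
# Illustrative sketch only -- NOT the paper's DMGN code. Assumes a cell graph per
# IMC image, categorical clinical variables, and a Cox partial-likelihood loss.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """One mean-aggregation graph convolution over a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_cells, in_dim); adj: (num_cells, num_cells), with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ x / deg))


class MultimodalSurvivalNet(nn.Module):
    """Fuses a pooled cell-graph embedding with per-variable clinical embeddings."""

    def __init__(self, num_markers, clinical_cardinalities, hidden=64):
        super().__init__()
        self.gnn1 = SimpleGraphConv(num_markers, hidden)
        self.gnn2 = SimpleGraphConv(hidden, hidden)
        # one embedding table per clinical variable (hypothetical design choice)
        self.clinical_embeds = nn.ModuleList(
            nn.Embedding(card, hidden) for card in clinical_cardinalities
        )
        fused_dim = hidden * (1 + len(clinical_cardinalities))
        self.risk_head = nn.Sequential(
            nn.Linear(fused_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, node_feats, adj, clinical_vars):
        # node_feats: (num_cells, num_markers); clinical_vars: list of 0-d long tensors
        h = self.gnn2(self.gnn1(node_feats, adj), adj)
        graph_emb = h.mean(dim=0)  # global mean pooling over cells
        clin_embs = [emb(v) for emb, v in zip(self.clinical_embeds, clinical_vars)]
        fused = torch.cat([graph_emb] + clin_embs, dim=-1)
        return self.risk_head(fused)  # scalar risk score for this patient


def cox_partial_likelihood(risks, times, events):
    """Negative Cox partial likelihood over a batch of per-patient risk scores."""
    order = torch.argsort(times, descending=True)  # longest follow-up first
    risks = risks[order]
    events = events[order].float()
    log_cumsum = torch.logcumsumexp(risks, dim=0)  # log of cumulative risk-set sums
    return -((risks - log_cumsum) * events).sum() / events.sum().clamp(min=1.0)
```

A full implementation would more likely use a dedicated graph library such as PyTorch Geometric, attention-based aggregation across image regions and clinical variables (as the abstract describes), and mini-batching over patients; the sketch only illustrates the overall data flow from a cell graph plus clinical variables to a risk score.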

