Article

One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification

Journal

MEDICAL IMAGE ANALYSIS
Volume 83, Article 102685

Publisher

ELSEVIER
DOI: 10.1016/j.media.2022.102685

Keywords

Computational pathology; Multi-task learning; Deep learning

Abstract

The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to localise various structures in the tissue, facilitating the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly when adapted to an increasing number of tasks. Supervised deep learning models are also data-hungry, relying on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent sources. By ensuring that our tasks are aligned by tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data: over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
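
The abstract describes a single network with a shared feature extractor and task-specific heads, so that one forward pass yields predictions for every task. As a rough illustration only, here is a minimal PyTorch-style sketch of that design; the MultiTaskNet class, layer sizes and head names are hypothetical and do not reflect the authors' actual Cerberus architecture:

```python
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    """Hypothetical shared-encoder, multi-head network (not the real Cerberus)."""

    def __init__(self, num_tissue_classes: int = 5):
        super().__init__()
        # Encoder shared by every task; layer sizes are placeholders.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One lightweight segmentation head per object type.
        self.seg_heads = nn.ModuleDict({
            name: nn.Conv2d(64, 2, kernel_size=1)  # background/foreground logits
            for name in ("nuclei", "glands", "lumina")
        })
        # Patch-level tissue classification head.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_tissue_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)  # features computed once, shared by all heads
        out = {name: head(feats) for name, head in self.seg_heads.items()}
        out["tissue"] = self.cls_head(feats)
        return out


# A single forward pass yields predictions for every task at once.
model = MultiTaskNet()
preds = model(torch.randn(1, 3, 256, 256))
print({k: tuple(v.shape) for k, v in preds.items()})
```

In the same spirit, the transfer learning the abstract mentions (e.g. to signet ring cell detection) would amount to reusing the trained shared encoder and attaching a freshly initialised head for the new task.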
