Article

Multi-Task Pre-Training of Deep Neural Networks for Digital Pathology

Journal

IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
Volume 25, Issue 2, Pages 412-421

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JBHI.2020.2992878

Keywords

Deep learning; multi-task learning; digital pathology; transfer learning

Funding

  1. ULiege
  2. Wallonia
  3. Belspo
  4. IDEES
  5. European Regional Development Fund (ERDF)

Abstract
In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-sized datasets have been released by the community over the years, whereas the domain has no large-scale dataset comparable to ImageNet. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, together with a robust evaluation and selection protocol for assessing our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
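The abstract describes a shared model pre-trained jointly on a pool of classification tasks, with one classification head per task. As a rough illustration of that idea (not the paper's actual architecture, which uses a CNN backbone; the layer sizes and task names below are invented for the toy example), here is a minimal numpy sketch of a shared feature extractor feeding task-specific softmax heads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a shared backbone: one linear layer + ReLU.
# The paper's pool has 22 tasks; we illustrate with 3 (class counts assumed).
D_IN, D_FEAT = 64, 32
W_shared = rng.normal(scale=0.1, size=(D_IN, D_FEAT))

# One classification head per task, all reading the shared representation.
task_classes = {"task_a": 2, "task_b": 5, "task_c": 3}
heads = {t: rng.normal(scale=0.1, size=(D_FEAT, c))
         for t, c in task_classes.items()}

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(x, task):
    """Shared features -> task-specific class probabilities."""
    feats = np.maximum(x @ W_shared, 0.0)  # shared representation
    return softmax(feats @ heads[task])

x = rng.normal(size=(4, D_IN))             # a batch of 4 flattened "images"
probs = forward(x, "task_b")
print(probs.shape)                         # (4, 5): one row per image
```

During multi-task pre-training, each mini-batch would come from one task and update both the shared backbone and that task's head; for transfer, the backbone is kept (as a feature extractor or fine-tuned) and the heads are discarded.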

