Article

Transfer Learning Approach for Classification of Histopathology Whole Slide Images

Journal

SENSORS
Volume 21, Issue 16

Publisher

MDPI
DOI: 10.3390/s21165361

Keywords

deep learning; transfer learning; histopathology

Funding

  1. Ministry of Education, Kingdom of Saudi Arabia through Najran University Institutional Funding Committee [NU/IFC/INT/01/008]


Classifying pathology images is crucial for accurate disease analysis and effective patient treatment, but progress is limited by the lack of large labeled datasets. Using the Kimia Path24 dataset, which was created specifically for histopathology image classification and retrieval, the proposed transfer learning framework yields significant accuracy improvements for both the Inception-V3 and VGG-16 models.
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical field, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases such as heart disease, cancer, and brain tumors. However, further advancement is needed in pathology, where the main hurdle slowing progress is the shortage of large labeled datasets of histopathology images for training models. The Kimia Path24 dataset was created specifically for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches spanning 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two well-known DL models, Inception-V3 and VGG-16. To improve the performance of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for training the same architecture. Experiments show that the proposed approach improves the accuracy of both models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and that of Inception-V3 from 0.74 to 0.79.

