4.7 Article

Deep learning for understanding multilabel imbalanced Chest X-ray datasets

Publisher

Elsevier
DOI: 10.1016/j.future.2023.03.005

Keywords

Convolutional neural networks; Chest X-rays; Explainable AI; Ensemble methodology

Abstract

Over the last few years, convolutional neural networks (CNNs) have dominated the field of computer vision thanks to their ability to extract features and their outstanding performance in classification problems, for example in the automatic analysis of X-rays. Unfortunately, these neural networks are considered black-box algorithms, i.e. it is not possible to see how the algorithm reached its final result. To apply these algorithms in different fields and to test how the methodology works, we need to use eXplainable AI techniques. Most work in the medical field focuses on binary or multiclass classification problems. However, in many real-life situations, such as chest X-rays, radiological signs of different diseases can appear at the same time, giving rise to what are known as multilabel classification problems. A disadvantage of these tasks is class imbalance, i.e. different labels do not have the same number of samples. The main contribution of this paper is a Deep Learning methodology for imbalanced, multilabel chest X-ray datasets. It establishes a baseline for the currently underutilised PadChest dataset and introduces a new eXplainable AI technique based on heatmaps. This technique also includes probabilities and inter-model matching. The results of our system are promising, especially considering the number of labels used. Furthermore, the heatmaps match the expected areas, i.e. they mark the regions an expert would use to make a decision.

(c) 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
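
The abstract names two technical ingredients: a CNN trained for multilabel classification on an imbalanced label distribution, and heatmap-based explanations of its predictions. The sketch below illustrates both ideas in a generic way and is not the paper's exact methodology (which additionally attaches probabilities and matches heatmaps across an ensemble of models); the DenseNet-121 backbone, the number of labels, the class frequencies and the standard Grad-CAM-style heatmap are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's exact method): a multilabel chest X-ray
# classifier with an imbalance-aware loss, plus a generic Grad-CAM-style
# heatmap. Backbone, label count and class frequencies are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_LABELS = 19        # hypothetical number of radiological findings
NUM_SAMPLES = 10_000   # hypothetical dataset size

# Backbone with a multilabel head: one independent logit per finding, so
# several findings can be predicted for the same X-ray (sigmoid, not softmax).
model = models.densenet121()
model.classifier = nn.Linear(model.classifier.in_features, NUM_LABELS)

# Class imbalance: up-weight the positive term of each label by
# pos_weight[i] = negatives_i / positives_i, derived from label frequencies.
pos_counts = torch.randint(50, 2000, (NUM_LABELS,)).float()   # fake per-label counts
criterion = nn.BCEWithLogitsLoss(pos_weight=(NUM_SAMPLES - pos_counts) / pos_counts)

# One training step on a dummy batch (grayscale X-rays replicated to 3 channels).
images = torch.randn(8, 3, 224, 224)
targets = (torch.rand(8, NUM_LABELS) > 0.9).float()           # sparse multilabel targets
loss = criterion(model(images), targets)
loss.backward()

def gradcam_heatmap(net, image, label_index):
    """Grad-CAM-style heatmap for one label of a DenseNet-style network."""
    net.eval()
    feats = net.features(image)                 # last conv feature map, (1, C, h, w)
    feats.retain_grad()
    logits = net.classifier(F.adaptive_avg_pool2d(F.relu(feats), 1).flatten(1))
    logits[0, label_index].backward()           # gradient of this label's logit only
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)        # channel importance
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()         # normalised (H, W) map

heatmap = gradcam_heatmap(model, torch.randn(1, 3, 224, 224), label_index=3)
probs = torch.sigmoid(model(images))            # per-label probabilities for reporting
```

In this sketch each label gets its own sigmoid output and its own positive-class weight, so frequent and rare findings contribute comparably to the loss; the heatmap is then judged, as the abstract describes, by whether it highlights the regions an expert would use to make a decision.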
