Proceedings Paper

MedAL: Accurate and Robust Deep Active Learning for Medical Image Analysis

Publisher

IEEE
DOI: 10.1109/ICMLA.2018.00078

Keywords

-

Funding

  1. European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme
  2. Fundação para a Ciência e a Tecnologia [CMUP-ERI/TIC/0028/2014]

Abstract

Deep learning models have been successfully used in medical image analysis, but they require a large number of labeled images to obtain good performance, and such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose MedAL, a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on three medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.
