3.8 Proceedings Paper

Omni-Supervised Learning: Scaling Up to Large Unlabelled Medical Datasets

Publisher

Springer International Publishing AG
DOI: 10.1007/978-3-030-00928-1_65

Funding

  1. National Institutes of Health (NIH) through National Institute on Alcohol Abuse and Alcoholism (NIAAA) [2 U01 AA014809-14]
  2. Royal Academy of Engineering under the Engineering for Development Research Fellowship Scheme
  3. EPSRC Programme Grant Seebibyte [EP/M013774/1]

Abstract

Two major bottlenecks to improving algorithmic performance in medical image analysis are the typically limited size of datasets and the shortage of expert labels for large datasets. This paper investigates approaches to overcome the latter via omni-supervised learning: a special case of semi-supervised learning. Our approach exploits a small annotated dataset and iteratively improves model performance by scaling up to a large set of unlabelled data to refine the model. By fusing the predictions made on perturbed copies of each unlabelled input, the method generates new training annotations without human intervention. We demonstrate the effectiveness of the proposed framework by localizing multiple structures in a 3D ultrasound (US) dataset of 4044 fetal brain volumes, with expert annotations for only 200 volumes (roughly 5% of the total) used in training. Structure localization error was reduced from 2.07 +/- 1.65 mm to 1.76 +/- 1.35 mm on the hold-out validation set.
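
The prediction-fusion step described in the abstract admits a compact sketch. The Python snippet below is a minimal, hypothetical illustration of the idea, not the authors' implementation: a trained localizer is applied to several perturbed copies of an unlabelled volume, each prediction is mapped back to the original frame, and the results are averaged into a pseudo-annotation. All names (fuse_predictions, dummy_predict, the perturbation pairs) are placeholders, and mean fusion is one plausible rule; the paper's exact perturbations and fusion scheme are not reproduced here.

    import numpy as np

    def fuse_predictions(predict, volume, perturbations):
        """Generate a pseudo-annotation for one unlabelled volume.

        predict:       callable mapping a volume to landmark coordinates of
                       shape (n_landmarks, 3) -- a stand-in for a trained model.
        perturbations: list of (transform, inverse) pairs; `transform` perturbs
                       the input volume, `inverse` maps the predicted
                       coordinates back to the original frame.
        """
        fused = []
        for transform, inverse in perturbations:
            coords = predict(transform(volume))  # predict on the perturbed input
            fused.append(inverse(coords))        # undo the perturbation on the output
        # Mean fusion of the per-perturbation predictions (one plausible choice).
        return np.mean(fused, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 64))

        def dummy_predict(v):
            # Placeholder for a trained localization network.
            return np.array([[32.0, 32.0, 32.0]]) + v.mean()

        identity = (lambda v: v, lambda c: c)
        jitter = (lambda v: v + rng.normal(0.0, 0.01, v.shape), lambda c: c)

        pseudo_label = fuse_predictions(dummy_predict, volume, [identity, jitter])
        print(pseudo_label)  # fused coordinates usable as a new training annotation

In a full omni-supervised loop, the fused outputs would be added to the training pool alongside the expert-annotated volumes and the model retrained, repeating predict-fuse-retrain over the unlabelled set.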
