Article

U-LanD: Uncertainty-Driven Video Landmark Detection

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 41, Issue 4, Pages 793-804

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TMI.2021.3123547

Keywords

Deep learning; echocardiography; landmark detection; sparse training labels; uncertainty estimation; video analysis

Funding

  1. Natural Sciences and Engineering Research Council (NSERC) of Canada
  2. Canadian Institutes of Health Research (CIHR)
  3. Vancouver General Hospital
  4. University of British Columbia Echocardiography Laboratories
  5. CIHR-NSERC Grant

Abstract

This paper presents U-LanD, a framework for automatic detection of landmarks on key frames of a video by leveraging the uncertainty of landmark prediction. We tackle a particularly challenging problem, where training labels are noisy and highly sparse. U-LanD builds upon a pivotal observation: a deep Bayesian landmark detector trained solely on key video frames has significantly lower predictive uncertainty on those frames than on the other frames of a video. We use this observation as an unsupervised signal to automatically recognize key frames, on which we then detect landmarks. As a test-bed for our framework, we use ultrasound imaging videos of the heart, where sparse and noisy clinical labels are available for only a single frame in each video. Using data from 4,493 patients, we demonstrate that U-LanD outperforms its state-of-the-art non-Bayesian counterpart by a notable absolute margin of 42% in R² score, with almost no overhead added to the model size.
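The core idea of the abstract — scoring each frame by the spread of a Bayesian detector's stochastic predictions and keeping the lowest-uncertainty frame — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-frame landmark predictions from T stochastic forward passes (e.g. MC-dropout samples) are already collected in an array, and uses the variance across passes as the uncertainty signal; the function name and array layout are hypothetical.

```python
import numpy as np

def select_key_frame(mc_predictions):
    """Pick the key frame as the one with lowest predictive uncertainty.

    mc_predictions: array of shape (T, F, K, 2) holding T stochastic
    forward passes over F video frames, each predicting K landmarks
    as (x, y) coordinates.

    Returns (key_frame_index, mean landmark coordinates on that frame).
    """
    # Predictive uncertainty per frame: variance across the T passes,
    # averaged over landmarks and coordinate axes.
    var = mc_predictions.var(axis=0)            # (F, K, 2)
    frame_uncertainty = var.mean(axis=(1, 2))   # (F,)

    # The detector was trained only on key frames, so it should be
    # most consistent (least uncertain) there.
    key = int(frame_uncertainty.argmin())

    # Final landmark estimate: average the stochastic passes on that frame.
    landmarks = mc_predictions[:, key].mean(axis=0)  # (K, 2)
    return key, landmarks
```

In practice the stochastic passes would come from running a dropout-enabled network several times per frame; here any (T, F, K, 2) array works, which keeps the frame-selection logic easy to test in isolation.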

