Article

A deep learning framework for quality assessment and restoration in video endoscopy

Journal

MEDICAL IMAGE ANALYSIS
Volume 68, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2020.101900

Keywords

Video endoscopy; Multi-class artifact detection; Multi-class artifact segmentation; Convolution neural networks; Frame restoration

Funding

  1. National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC)
  2. Health Data Research UK
  3. Ludwig Institute for Cancer Research
  4. EPSRC [EP/M013774/1]

Abstract

Endoscopy is a common medical imaging technique, but artifacts severely impact the interpretation and analysis of its videos. This paper proposes a fully automatic framework that detects and classifies multiple artifacts, segments irregularly shaped artifact instances, and repairs corrupted frames with restoration models, significantly improving on previous methods. The framework achieves high detection accuracy and speed while retaining more high-quality video frames.
Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and the automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, robust and reliable identification of such artifacts and the automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods deal only with the detection and restoration of selected artifacts. However, endoscopy videos typically contain numerous artifacts, which motivates a comprehensive solution. In this paper, a fully automatic framework is proposed that can: 1) detect and classify six different artifacts, 2) segment artifact instances that have indefinable shapes, 3) provide a quality score for each frame, and 4) restore partially corrupted frames. To detect and classify different artifacts, the proposed framework exploits a fast, multi-scale, single-stage convolutional neural network detector. In addition, an encoder-decoder model is used for pixel-wise segmentation of irregularly shaped artifacts. A quality score is introduced to assess video frame quality and to predict image restoration success. Generative adversarial networks with carefully chosen regularization and training strategies for the discriminator-generator networks are finally used to restore corrupted frames. The detector yields the highest mean average precision (mAP) of 45.7 and 34.7 for 25% and 50% IoU thresholds, respectively, and the lowest computational time of 88 ms, allowing for near real-time processing. The restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos, an average of 68.7% of video frames successfully passed the quality score (>= 0.9) after applying the proposed restoration framework, thereby retaining 25% more frames than the raw videos. The importance of artifact detection and restoration for improving the robustness of image analysis methods is also demonstrated in this work. (C) 2020 The Authors. Published by Elsevier B.V.
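
For orientation, the sketch below illustrates two of the standard quantities the abstract refers to: the intersection-over-union (IoU) overlap underlying the 25% and 50% mAP thresholds, and the per-frame quality gate at the reported 0.9 cut-off. This is a minimal sketch, not the authors' implementation; the box format and the helper names `iou` and `keep_frame` are assumptions introduced here for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def keep_frame(quality_score, threshold=0.9):
    """Frames scoring at or above the threshold are retained; the rest are
    candidates for restoration (hypothetical gate, mirroring the >= 0.9 criterion
    reported in the abstract)."""
    return quality_score >= threshold


# Example: this detection overlaps its ground-truth box enough to count as a
# true positive at the 25% IoU threshold, and the frame passes the quality gate.
assert iou((0, 0, 10, 10), (2, 2, 12, 12)) >= 0.25
assert keep_frame(0.93)
```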
