Article

Generalising from conventional pipelines using deep learning in high-throughput screening workflows

Journal

SCIENTIFIC REPORTS
Volume 12, Issue 1, Pages: -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41598-022-15623-7

Keywords

-

Funding

  1. Luxembourg National Research Fund (FNR) Grant PARK-QC DTU [PRIDE17/12244779/PARK-QC]
  2. Fondation du Pélican de Mie et Pierre Hippert-Faber, under the aegis of the Fondation de Luxembourg

Summary

This study addresses the challenges of data acquisition and image analysis in complex disease research. By combining traditional computer vision methods with deep learning, the research team trained a deep learning network on automatically generated labels and improved segmentation quality. A user-friendly graphical interface allows researchers to evaluate and correct the predictions. The study also demonstrates the feasibility of training a deep learning solution on a large dataset of noisy labels.

Abstract

The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground-truth labels for model training hampers day-to-day application in experimental laboratories. Alternatively, traditional computer-vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network using weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond noisy labels, providing a 25% increase in mean intersection over union, while simultaneously reducing development and inference times. Our solution was embedded into an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model from a small, manually curated dataset annotated by several experts. Our work suggests that humans perform better in context interpretation, such as error assessment, while computers outperform in pixel-by-pixel fine segmentation. These pipelines are illustrated with a case study on image segmentation of autophagy events. This work aims for a better translation of new technologies to real-world settings in microscopy-image analysis.
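Two ideas from the abstract, weak labels produced by a conventional computer-vision pipeline and segmentation quality scored as mean intersection over union (IoU), can be illustrated compactly. The sketch below is a minimal example assuming NumPy and scikit-image; the function names (weak_label, mean_iou) and parameter choices are hypothetical and do not correspond to the authors' implementation or code.

```python
# Minimal sketch (not the authors' pipeline): generate noisy "weak" segmentation
# labels with a conventional computer-vision recipe and score a prediction
# against a reference mask with mean IoU.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects


def weak_label(image: np.ndarray, min_size: int = 50) -> np.ndarray:
    """Noisy foreground mask: smooth, Otsu-threshold, drop small speckles."""
    smoothed = gaussian(image.astype(float), sigma=1.0)
    mask = smoothed > threshold_otsu(smoothed)
    return remove_small_objects(mask, min_size=min_size)


def mean_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean IoU over the background and foreground classes of binary masks."""
    ious = []
    for cls in (False, True):
        p, t = (pred == cls), (target == cls)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))


# Usage example on a synthetic image with one bright "cell".
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (128, 128))
img[40:80, 40:80] += 0.6
reference = np.zeros((128, 128), dtype=bool)
reference[40:80, 40:80] = True
print(f"mean IoU of weak label vs. reference: {mean_iou(weak_label(img), reference):.3f}")
```

In the setting described by the abstract, masks of this kind would serve as inexpensive but imperfect training targets for a deep segmentation network, which is then expected to generalise beyond the label noise.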
