Article

ELMGAN: A GAN-based efficient lightweight multi-scale-feature-fusion multi-task model

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 252, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109434

Keywords

Convolutional neural network; Generative adversarial networks; Cell segmentation; Cell counting

Funding

  1. Medical Research Council Confidence in Concept Award, UK [MC_PC_17171]
  2. Royal Society International Exchanges Cost Share Award, UK [RP202G0230]
  3. British Heart Foundation Accelerator Award, UK [AA/18/3/34220]
  4. Hope Foundation for Cancer Research, UK [RM60G0680]
  5. Global Challenges Research Fund (GCRF), UK [P202PF11]
  6. Sino-UK Industrial Fund, UK [RP202G0289]
  7. Data Science Enhancement Fund, UK [P202RE237]
  8. LIAS Pioneering Partnerships award, UK [P202ED10]

Abstract
Cell segmentation and counting are important and time-consuming experimental steps in traditional biomedical research. Many current counting methods are point-based methods, which require exact cell locations; however, few cell datasets provide detailed object coordinates. Most existing cell datasets contain only the total number of cells and a global segmentation annotation. To make effective use of existing datasets, we divide the cell counting task into cell number prediction and cell segmentation. We propose a GAN-based efficient lightweight multi-scale-feature-fusion multi-task model (ELMGAN). To coordinate the learning of these two tasks, we propose a Norm-Combined Hybrid loss function (NH loss) and train our networks with a generative adversarial scheme. We propose a new Fold Beyond-nearest Upsampling method (FBU) in our lightweight and fast multi-scale-feature-fusion multi-task generator (LFMMG), which is twice as fast as traditional interpolation-based upsampling. We use multi-scale feature fusion to improve the quality of segmentation images. LFMMG reduces the number of parameters by nearly 50% compared with U-Net and achieves better performance on cell segmentation. Compared with the traditional GAN model, our method improves image processing speed by nearly ten times. In addition, we propose a Coordinated Multitasking Training Discriminator (CMTD) to refine the accuracy of feature details. Our method achieves non-point-based counting that no longer requires annotating the exact position of each cell during training, and it achieves excellent results in both cell counting and segmentation. (c) 2022 Elsevier B.V. All rights reserved.
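The abstract does not specify the internals of the FBU operator, only that it avoids the cost of interpolation-based upsampling. As a rough, hypothetical illustration of interpolation-free upsampling (the general class of operation FBU reportedly accelerates), a nearest-style 2x upsampling can be written with pure repeat/reshape operations, with no arithmetic between neighboring pixels:

```python
import numpy as np

def fold_upsample_2x(x):
    """Nearest-style 2x spatial upsampling via pure repetition.

    x: feature map of shape (H, W, C).
    Returns an array of shape (2H, 2W, C) in which each input pixel
    is duplicated into a 2x2 block -- no interpolation arithmetic.

    NOTE: this is an illustrative sketch, not the paper's exact
    Fold Beyond-nearest Upsampling (FBU) operator, whose definition
    is given only in the full text.
    """
    # Duplicate rows, then columns; each value is copied, never blended.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# Example: a 2x2 single-channel map becomes 4x4.
x = np.arange(4, dtype=float).reshape(2, 2, 1)
y = fold_upsample_2x(x)
print(y.shape)  # (4, 4, 1)
```

Because such an operator only moves memory rather than computing weighted averages, it is plausible that a fold-style upsampler runs faster than bilinear interpolation, consistent with the speedup the abstract claims.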

