Article

ELMGAN: A GAN-based efficient lightweight multi-scale-feature-fusion multi-task model

Journal

Knowledge-Based Systems
Volume 252

Publisher

Elsevier
DOI: 10.1016/j.knosys.2022.109434

Keywords

Convolutional neural network; Generative adversarial networks; Cell segmentation; Cell counting

Funding

  1. Medical Research Council Confidence in Concept Award, UK [MC_PC_17171]
  2. Royal Society International Exchanges Cost Share Award, UK [RP202G0230]
  3. British Heart Foundation Accelerator Award, UK [AA/18/3/34220]
  4. Hope Foundation for Cancer Research, UK [RM60G0680]
  5. Global Challenges Research Fund (GCRF), UK [P202PF11]
  6. Sino-UK Industrial Fund, UK [RP202G0289]
  7. Data Science Enhancement Fund, UK [P202RE237]
  8. LIAS Pioneering Partnerships award, UK [P202ED10]

Abstract

Cell segmentation and counting are important but time-consuming steps in traditional biomedical research. Many current counting methods are point-based and require exact cell locations, yet few cell datasets provide detailed object coordinates; most existing datasets contain only the total number of cells and a global segmentation annotation. To make effective use of existing datasets, we divide the cell counting task into cell-number prediction and cell segmentation, and we propose a GAN-based efficient lightweight multi-scale-feature-fusion multi-task model (ELMGAN). To coordinate the learning of these two tasks, we propose a Norm-Combined Hybrid loss function (NH loss) and train our networks with a generative adversarial approach. We also propose a new Fold Beyond-nearest Upsampling (FBU) method for our lightweight and fast multi-scale-feature-fusion multi-task generator (LFMMG); FBU is twice as fast as traditional interpolation upsampling. Multi-scale feature fusion improves the quality of the segmentation images. LFMMG reduces the number of parameters by nearly 50% compared with U-Net while achieving better segmentation performance, and compared with a traditional GAN model, our method improves image-processing speed by nearly ten times. In addition, we propose a Coordinated Multitasking Training Discriminator (CMTD) to refine the accuracy of feature details. Our method achieves non-point-based counting that no longer requires annotating the exact position of each cell during training, and it achieves excellent results in both cell counting and segmentation. (c) 2022 Elsevier B.V. All rights reserved.
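The abstract names the NH loss but does not give its formulation, so the following is only a minimal PyTorch sketch of how a hybrid multi-task loss of this general kind is commonly assembled: a pixel-wise segmentation term, a count-regression term, and an adversarial term combined with scalar weights. The class name, the choice of component losses, and the weights are all hypothetical; the paper's actual NH loss may differ.

```python
import torch
import torch.nn as nn

class HybridMultiTaskLoss(nn.Module):
    """Hypothetical sketch of a norm-combined hybrid multi-task loss
    (NOT the paper's NH loss): weighted sum of a segmentation term,
    a count-regression term, and the generator's adversarial term."""

    def __init__(self, w_seg: float = 1.0, w_count: float = 1.0, w_adv: float = 0.1):
        super().__init__()
        self.seg_loss = nn.BCEWithLogitsLoss()   # pixel-wise mask supervision
        self.count_loss = nn.SmoothL1Loss()      # robust norm on the cell count
        self.adv_loss = nn.BCEWithLogitsLoss()   # GAN term for the generator
        self.w_seg, self.w_count, self.w_adv = w_seg, w_count, w_adv

    def forward(self, seg_logits, seg_target, count_pred, count_target, d_fake_logits):
        l_seg = self.seg_loss(seg_logits, seg_target)
        l_count = self.count_loss(count_pred, count_target)
        # the generator wants the discriminator to label its outputs as real (1)
        l_adv = self.adv_loss(d_fake_logits, torch.ones_like(d_fake_logits))
        return self.w_seg * l_seg + self.w_count * l_count + self.w_adv * l_adv
```

Coordinating the two task heads through one weighted objective, plus the shared adversarial signal, is the standard way such multi-task GAN generators are trained; the paper's specific weighting scheme is not described here.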
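Likewise, Fold Beyond-nearest Upsampling (FBU) is only named in the abstract. Assuming "fold" refers to a channel-to-space rearrangement (as in PyTorch's pixel_shuffle), the sketch below shows how such a layer upsamples without any interpolation arithmetic, which is one plausible route to the reported speedup over interpolation-based upsampling. This is an illustrative assumption, not the paper's implementation, and the module name is invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FoldUpsample(nn.Module):
    """Hypothetical fold-style 2x upsampling: a 1x1 conv expands channels
    by scale**2, then pixel_shuffle rearranges those channels into a
    scale-times-larger spatial grid (no interpolation performed)."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * scale * scale, kernel_size=1)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.pixel_shuffle(self.expand(x), self.scale)

# usage: (1, 64, 32, 32) feature map -> (1, 64, 64, 64)
x = torch.randn(1, 64, 32, 32)
print(FoldUpsample(64)(x).shape)  # torch.Size([1, 64, 64, 64])
```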

