Article

Semi-supervised learning with GAN for automatic defect detection from images

Journal

AUTOMATION IN CONSTRUCTION
Volume 128

Publisher

ELSEVIER
DOI: 10.1016/j.autcon.2021.103764

Keywords

Generative adversarial network; Semi-supervised learning; Fully convolutional network; Defect detection

Funding

  1. Ministry of Education Tier 1 Grants, Singapore [04MNP000279C120, 04MNP002126C120]
  2. Nanyang Technological University, Singapore [04INS000423C120]

Abstract

This research introduces a semi-supervised generative adversarial network (SSGAN) with two sub-networks for automatic defect detection, utilizing an attention mechanism and dual loss functions to enhance segmentation performance and reduce data labeling effort.
Towards automatic defect detection from images, this research develops a semi-supervised generative adversarial network (SSGAN) with two sub-networks for more precise segmentation results at the pixel level. One is the segmentation network, built on a dual attention mechanism, for defect segmentation from labeled and unlabeled images. Specifically, the attention mechanism extracts rich, global representations of pixels in both the spatial and channel dimensions for better feature representation. The other is the fully convolutional discriminator (FCD) network, which employs two loss functions (the adversarial loss and the cross-entropy loss) to generate confidence density maps of unlabeled images in a semi-supervised learning manner. In contrast to most existing methods, which rely heavily on labeled or weakly-labeled images, the developed SSGAN model can leverage unlabeled images to enhance segmentation performance and alleviate the data labeling task. The effectiveness of the proposed SSGAN model is demonstrated on a public dataset with four classes of steel defects. In comparison with other state-of-the-art methods, our developed model using 1/8 and 1/4 labeled data achieves promising mean Intersection over Union (mIoU) scores of 79.0% and 81.8%, respectively. Moreover, the proposed SSGAN is robust and flexible in segmentation under various scenarios.
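The abstract's combined training objective (supervised cross-entropy plus an adversarial term, with the discriminator's confidence map gating pseudo-labels for unlabeled images) can be sketched numerically. This is a minimal NumPy illustration, not the paper's implementation: the function name `ssgan_loss`, the weights `lam_adv` and `lam_semi`, and the confidence threshold `tau` are all hypothetical placeholders.

```python
import numpy as np

def binary_ce(pred, target, eps=1e-7):
    """Element-wise binary cross-entropy, averaged over all pixels."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def ssgan_loss(seg_probs_labeled, labels, disc_map_labeled,
               seg_probs_unlabeled, disc_map_unlabeled,
               lam_adv=0.01, lam_semi=0.1, tau=0.2):
    """Sketch of a semi-supervised GAN segmentation loss.

    All weights and the threshold are illustrative, not values from the paper.
    """
    # Supervised cross-entropy on labeled images.
    l_ce = binary_ce(seg_probs_labeled, labels)
    # Adversarial term: push the discriminator's confidence map toward 1
    # (i.e., predictions should be indistinguishable from ground truth).
    l_adv = binary_ce(disc_map_labeled, np.ones_like(disc_map_labeled))
    # Semi-supervised term: pixels where the discriminator's confidence map
    # exceeds tau are trusted, and the segmenter's own hard predictions
    # serve as pseudo-labels for the unlabeled images.
    mask = disc_map_unlabeled > tau
    pseudo = (seg_probs_unlabeled > 0.5).astype(float)
    l_semi = binary_ce(seg_probs_unlabeled[mask], pseudo[mask]) if mask.any() else 0.0
    return l_ce + lam_adv * l_adv + lam_semi * l_semi
```

In this reading, the discriminator doubles as a confidence estimator: only regions it already judges plausible contribute pseudo-label supervision, which is what lets unlabeled images improve the segmenter without full annotation.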
