Article

Automated vision-based inspection of drilled CFRP composites using multi-light imaging and deep learning

Journal

Publisher

Elsevier
DOI: 10.1016/j.cirpj.2021.07.015

Keywords

Automatic inspection; Drilling damage; CFRP; Image processing; Deep learning; Convolutional neural networks

Funding

  1. Canada's Mitacs Entrepreneurship Accelerate program [IT20382]
  2. University of Manitoba Graduate Fellowship (UMGF)


The study proposes a novel and fully autonomous system for detecting and segmenting damages and cracks around drilled holes in CFRPs. The system includes an automated multi-light imaging end-effector, image processing steps, and a deep Fully Convolutional Network (FCN) with the U-Net architecture for pixel-wise semantic segmentation.
Inspection of drilled holes in aerospace Carbon Fiber Reinforced Polymers (CFRPs) is crucial to avoid failure of attachments and disintegration of aircraft structures. Vision-based systems can provide an efficient tool for rapid inspection of holes directly on production lines. However, automatic detection of damages in digital images is a challenging task due to the dark, textured, and semi-specular surfaces of CFRPs. This paper proposes a novel and fully autonomous system for accurate detection and segmentation of damages and cracks around drilled holes in CFRP composites. The proposed system comprises three modules: (1) An automated multi-light imaging end-effector is designed to sequentially illuminate the inspected hole from four different directions and capture images for each case. The four images are then fused to suppress the background and enhance the visibility of damages. (2) A series of image processing steps are proposed to automatically segment the hole profile, damage area, and crack lines in the fused images. The segmented images serve as labeled outputs (masks) in the following deep learning model. (3) A deep Fully Convolutional Network (FCN) with the U-Net architecture is designed and trained for pixel-wise semantic segmentation of hole images. Once trained, the U-Net model can reliably detect the hole profile, damages, and cracks directly from 'raw' multi-light images without any image processing required. Experimental tests show that the U-Net model can provide real-time and fully automated evaluation of the delamination factor with a maximum error of 5.4%. (C) 2021 CIRP.
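The abstract describes fusing four directionally illuminated images to suppress the semi-specular CFRP background, and evaluating a delamination factor from the segmented hole and damage regions. The paper's exact fusion rule and delamination-factor definition are not reproduced on this page, so the following is only a minimal NumPy sketch: it assumes a per-pixel minimum as one plausible fusion (glare moves with the light source while real damage stays dark under every direction) and the conventional one-dimensional delamination factor F_d = D_max / D_0.

```python
import numpy as np

def fuse_multilight(images):
    """Fuse images of the same hole taken under different light directions.

    Sketch only: a per-pixel minimum across the stack suppresses
    direction-dependent specular highlights, since glare appears at a
    different location for each light direction, while damage pixels
    remain dark in all four images.
    """
    stack = np.stack(images, axis=0)   # shape: (n_lights, H, W)
    return stack.min(axis=0)           # shape: (H, W)

def delamination_factor(d_max, d_0):
    """Conventional delamination factor F_d = D_max / D_0, where D_max is
    the maximum diameter of the delaminated zone and D_0 the nominal hole
    diameter. (Assumed definition; the paper's metric may differ.)"""
    return d_max / d_0
```

For example, fusing four synthetic frames whose glare regions do not overlap leaves only the common dark damage pixels, and a 12 mm damage zone around a 10 mm hole gives F_d = 1.2.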

Authors


