Article

Automated vision-based inspection of drilled CFRP composites using multi-light imaging and deep learning

Journal

CIRP Journal of Manufacturing Science and Technology

Publisher

Elsevier
DOI: 10.1016/j.cirpj.2021.07.015

Keywords

Automatic inspection; Drilling damage; CFRP; Image processing; Deep learning; Convolutional neural networks

Funding

  1. Canada's Mitacs Entrepreneurship Accelerate program [IT20382]
  2. University of Manitoba Graduate Fellowship (UMGF)

Abstract

The study proposes a novel and fully autonomous system for detecting and segmenting damages and cracks around drilled holes in CFRPs. The system includes an automated multi-light imaging end-effector, image processing steps, and a deep Fully Convolutional Network (FCN) with the U-Net architecture for pixel-wise semantic segmentation.
Inspection of drilled holes in aerospace Carbon Fiber Reinforced Polymers (CFRPs) is crucial to avoid failure of attachments and disintegration of aircraft structures. Vision-based systems can provide an efficient tool for rapid inspection of holes directly on production lines. However, automatic detection of damages in digital images is a challenging task due to the dark, textured, and semi-specular surfaces of CFRPs. This paper proposes a novel and fully autonomous system for accurate detection and segmentation of damages and cracks around drilled holes in CFRP composites. The proposed system comprises three modules: (1) An automated multi-light imaging end-effector is designed to sequentially illuminate the inspected hole from four different directions and capture images for each case. The four images are then fused to suppress the background and enhance the visibility of damages. (2) A series of image processing steps are proposed to automatically segment the hole profile, damage area, and crack lines in the fused images. The segmented images serve as labeled outputs (masks) in the following deep learning model. (3) A deep Fully Convolutional Network (FCN) with the U-Net architecture is designed and trained for pixel-wise semantic segmentation of hole images. Once trained, the U-Net model can reliably detect the hole profile, damages, and cracks directly from 'raw' multi-light images without any image processing required. Experimental tests show that the U-Net model can provide real-time and fully automated evaluation of the delamination factor with a maximum error of 5.4%. (C) 2021 CIRP.
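The fusion and evaluation steps can be illustrated with a minimal sketch. The paper does not publish its exact fusion rule or delamination metric here, so the following makes two labeled assumptions: a per-pixel minimum is used to composite the four directionally lit images (suppressing direction-dependent specular glare), and the conventional delamination factor F_d = D_max / D_0 (maximum delaminated diameter over nominal hole diameter) is the evaluated quantity.

```python
import numpy as np

def fuse_multilight(images):
    """Fuse same-shape grayscale images captured under different light directions.

    Assumption: per-pixel minimum compositing. Specular highlights on the
    semi-specular CFRP background appear only under some light directions,
    so keeping the minimum suppresses them while matte damage remains visible.
    """
    stack = np.stack(images, axis=0)        # shape: (n_lights, H, W)
    return stack.min(axis=0)                # shape: (H, W)

def delamination_factor(d_max, d_nominal):
    """Conventional delamination factor F_d = D_max / D_0 (dimensionless)."""
    return d_max / d_nominal

# Hypothetical example: four synthetic 64x64 "captures" of the same hole.
rng = np.random.default_rng(0)
captures = [rng.random((64, 64)) for _ in range(4)]
fused = fuse_multilight(captures)
print(fused.shape)                           # (64, 64)
print(delamination_factor(6.6, 6.0))         # ~1.1
```

The fused image never exceeds any single capture at any pixel, which is what makes the minimum rule a background suppressor rather than an enhancer; the U-Net in the paper then learns segmentation directly from the raw multi-light captures, bypassing this hand-crafted step at inference time.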

