Article

A crack-segmentation algorithm fusing transformers and convolutional neural networks for complex detection scenarios

Journal

AUTOMATION IN CONSTRUCTION
Volume 152

Publisher

ELSEVIER
DOI: 10.1016/j.autcon.2023.104894

Keywords

Deep learning; Crack segmentation; Transformer; Complex detection scenarios; Model generalization capability


The performance of crack segmentation is influenced by complex scenes, including irregularly shaped cracks, complex image backgrounds, and limitations in acquiring global contextual information. To alleviate the influence of these factors, a dual-encoder network fusing transformers and convolutional neural networks (DTrC-Net) is proposed in this study. The structure of the DTrC-Net was designed to capture both the local features and global contextual information of crack images. To enhance feature fusion between the adjacent and codec layers, a feature fusion module and a residual path module were also added to the network. Through a series of comparative experiments, DTrC-Net was found to generate better predictions than other state-of-the-art segmentation networks, with the highest precision (75.60%), recall (78.86%), F1-score (76.44%), and intersection over union (64.30%) on the Crack3238 dataset. Moreover, a fast processing speed of 78 frames per second was achieved using the DTrC-Net with an image size of 256 x 256 pixels. Overall, it was found that the proposed DTrC-Net outperformed other advanced networks in terms of accuracy in crack segmentation and demonstrated superior generalizability in complex scenes.
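The abstract reports precision, recall, F1-score, and intersection over union (IoU) on the Crack3238 dataset. As a minimal sketch of how these metrics are conventionally computed for binary crack masks (standard definitions, not code from the paper; the function name and toy masks are illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Return (precision, recall, f1, iou) for two binary masks of equal shape.

    Standard pixel-wise definitions: TP = pixels predicted crack that are
    crack, FP = predicted crack but background, FN = missed crack pixels.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou

# Toy 4x4 example: the prediction recovers 2 of 3 crack pixels, no false alarms.
pred  = np.array([[0, 1, 1, 0]] + [[0, 0, 0, 0]] * 3)
truth = np.array([[0, 1, 1, 1]] + [[0, 0, 0, 0]] * 3)
p, r, f, i = segmentation_metrics(pred, truth)
# precision = 1.0, recall = 2/3, F1 = 0.8, IoU = 2/3
```

In practice these metrics are accumulated over a whole test set, and IoU is the strictest of the four since its denominator counts both false positives and false negatives.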

