Article

Pixel-Level Recognition of Pavement Distresses Based on U-Net

Journal

Publisher

HINDAWI LTD
DOI: 10.1155/2021/5586615

Keywords

-


This study has successfully developed an automatic pixel-level image recognition model to reduce manual labor in collecting road maintenance data. By combining different neural networks and training approaches, the model's reliability is validated through testing, showing good performance in identifying pavement distresses.
This study develops and tests an automatic pixel-level image recognition model to reduce the manual labor required to collect road maintenance data. First, images of six kinds of pavement distress, namely transverse cracks, longitudinal cracks, alligator cracks, block cracks, potholes, and patches, are collected from four asphalt highways in three Chinese provinces to build a pixel-level labeled dataset of 10,097 images. Second, U-Net, one of the most advanced deep neural networks for image segmentation, is combined with a ResNet as the basic classification network to recognize distressed areas in the images; data augmentation, batch normalization, momentum, transfer learning, and discriminative learning rates are used to train the model. Third, the trained models are validated on the test dataset, and the experiments show the following: when the types of pavement distress are not distinguished, the pixel accuracy (PA) of the recognition models using ResNet-34 and ResNet-50 as basic classification networks is 97.336% and 95.772%, respectively, on the validation set; when the types are distinguished, the PA values of the two models drop to 66.103% and 44.953%, respectively. For the ResNet-34 model, the category pixel accuracy (CPA) and intersection over union (IoU) for areas with no distress are 99.276% and 99.059%, respectively. Among the distress types, CPA and IoU are highest for patches, at 82.774% and 73.778%, and lowest for alligator cracks, at 14.077% and 12.581%, respectively.
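The abstract evaluates the model with pixel accuracy (PA), category pixel accuracy (CPA), and intersection over union (IoU). A minimal NumPy sketch of how these segmentation metrics are conventionally computed; the function names and the toy 3x3 masks are illustrative, not taken from the paper:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """PA: fraction of all pixels whose predicted class matches the label."""
    return float(np.mean(pred == target))

def class_metrics(pred, target, cls):
    """CPA and IoU for one class.

    CPA here is the per-class recall: true positives over all labeled
    pixels of the class. IoU is true positives over the union of
    predicted and labeled pixels of the class.
    """
    pred_c = pred == cls
    tgt_c = target == cls
    tp = np.logical_and(pred_c, tgt_c).sum()
    cpa = tp / tgt_c.sum()
    iou = tp / np.logical_or(pred_c, tgt_c).sum()
    return float(cpa), float(iou)

# Toy masks: 0 = no distress, 1 = distressed pixel (e.g., a crack)
target = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [0, 0, 1]])
pred = np.array([[0, 0, 1],
                 [0, 0, 1],
                 [0, 1, 1]])

pa = pixel_accuracy(pred, target)          # 7 of 9 pixels correct
cpa, iou = class_metrics(pred, target, 1)  # tp=3, labeled=4, union=5
```

With these toy masks, PA = 7/9, CPA = 0.75, and IoU = 0.6, which illustrates why IoU is the stricter of the two per-class scores: it also penalizes false positives.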

Authors


Reviews

Primary Rating

4.1
Not enough ratings

