Journal
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Volume 21, Issue 1, Pages 273-284
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TITS.2019.2891167
Keywords
CrackNet; CrackNet-V; deep learning; surface cracks
Funding
- National Natural Science Foundation of China [U1534203]
A few recent developments have demonstrated that deep-learning-based solutions can outperform traditional algorithms for automated pavement crack detection. In this paper, an efficient deep network called CrackNet-V is proposed for automated pixel-level crack detection on 3D asphalt pavement images. Compared with the original CrackNet, CrackNet-V has a deeper architecture but fewer parameters, resulting in improved accuracy and computational efficiency. Inspired by CrackNet, CrackNet-V keeps the spatial size invariant through all layers so that supervised learning can be conducted at the pixel level. Following the VGG network, CrackNet-V uses $3\times 3$ filters for the first six convolutional layers and stacks several $3\times 3$ convolutional layers together for deep abstraction, resulting in a reduced number of parameters and efficient feature extraction. CrackNet-V has 64113 parameters and consists of ten layers, including one pre-process layer, eight convolutional layers, and one output layer. A new activation function, leaky rectified tanh, is proposed in this paper for higher accuracy in detecting shallow cracks. The training of CrackNet-V was completed after 3000 iterations, which took only one day on a GeForce GTX 1080Ti device. According to the experimental results on 500 testing images, CrackNet-V achieves high performance, with a Precision of 84.31%, a Recall of 90.12%, and an F-1 score of 87.12%. It is shown that CrackNet-V yields better overall performance than CrackNet, particularly in detecting fine cracks. The efficiency of CrackNet-V further reveals the advantages of deep learning techniques for automated pixel-level pavement crack detection.
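Two architectural points in the abstract can be illustrated with a short sketch: the proposed leaky rectified tanh activation, and why stride-1, padding-1 $3\times 3$ convolutions keep the spatial size invariant so outputs align with pixel-level labels. The exact definition of leaky rectified tanh is not given in the abstract; the version below is a hypothetical reconstruction that, by analogy with leaky ReLU, passes positive inputs through tanh and scales negative responses by a small leak coefficient. The leak value 0.1 is likewise an assumption, not a figure from the paper.

```python
import numpy as np

def leaky_rectified_tanh(x, leak=0.1):
    # Hypothetical sketch: tanh response for non-negative inputs,
    # a small "leaky" fraction of the tanh response for negative inputs.
    # Both the form and the leak=0.1 value are assumptions for illustration.
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.tanh(x), leak * np.tanh(x))

def conv_output_size(size, kernel=3, stride=1, padding=1):
    # Standard convolution output-size formula. With a 3x3 kernel,
    # stride 1, and padding 1, an H x W map stays H x W, which is what
    # lets every layer's output be compared to pixel-level ground truth.
    return (size + 2 * padding - kernel) // stride + 1

print(conv_output_size(512))   # a 512-wide input stays 512 wide
print(leaky_rectified_tanh([-2.0, 0.0, 2.0]))
```

The size invariance is what distinguishes this design from typical encoder-decoder segmentation networks: no downsampling means no upsampling stage is needed to recover pixel-level resolution.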