Journal
NEUROCOMPUTING
Volume 338, Pages 139-153
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2019.01.036
Keywords
Convolutional neural network; Crack detection; Semantic segmentation; Hierarchical convolutional features; Guided filtering; Crack detection dataset
Funding
- National Natural Science Foundation of China [41571436]
- National Key Research and Development Program of China [2017YFB1302400]
- Hubei Province Science and Technology Support Program, China [2015BAA027]
Abstract
Automatic crack detection from images of various scenes is a useful and challenging task in practice. In this paper, we propose a deep hierarchical convolutional neural network (CNN), called DeepCrack, to predict pixel-wise crack segmentation in an end-to-end manner. DeepCrack builds on the extended Fully Convolutional Network (FCN) and Deeply-Supervised Nets (DSN). During training, the model learns and aggregates multi-scale and multi-level features from the low-level to the high-level convolutional layers, unlike standard approaches that use only the last convolutional layer. DSN provides integrated direct supervision for the features of each convolutional stage. We apply both guided filtering and Conditional Random Fields (CRFs) to refine the final prediction results. A benchmark dataset consisting of 537 images with manual annotation maps is built to verify the effectiveness of the proposed method. Our method achieves state-of-the-art performance on the proposed dataset (mean I/U of 85.9, best F-score of 86.5, and 0.1 s per image). (c) 2019 Elsevier B.V. All rights reserved.
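The abstract reports results as mean I/U (intersection over union) and F-score over binary crack/background masks. A minimal NumPy sketch of these two metrics is shown below; the function name and the exact averaging protocol (per-class IoU averaged over crack and background, pixel-level F-score on the crack class) are assumptions for illustration and may differ from the evaluation protocol used in the paper.

```python
import numpy as np

def binary_seg_metrics(pred, gt):
    """Mean IoU and F-score for binary (crack / background) masks.

    `pred` and `gt` are boolean arrays of the same shape, True = crack.
    NOTE: a generic formulation, not necessarily the paper's exact protocol.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)

    # Per-class IoU, averaged over the crack and background classes.
    ious = []
    for cls in (True, False):
        p, g = pred == cls, gt == cls
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union else 1.0)
    mean_iou = sum(ious) / len(ious)

    # Pixel-level precision/recall/F-score on the crack class.
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return mean_iou, f_score
```

For example, a prediction that covers the ground-truth crack plus one extra pixel yields perfect recall but reduced precision, which the F-score reflects.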