Article

DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection

Journal

IEEE Transactions on Image Processing
Volume 28, Issue 3, Pages 1498-1512

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2018.2878966

Keywords

Line detection; edge detection; contour grouping; crack detection; convolutional neural network

Funding

  1. National Natural Science Foundation of China [61872277, 61301277, 91546106]
  2. National Key Research and Development Program of China [2016YFB0502203]
  3. Hubei Provincial Natural Science Foundation [2018CFB482]

Abstract

Cracks are typical line structures that are of interest in many computer-vision applications. In practice, many cracks, e.g., pavement cracks, show poor continuity and low contrast, which poses great challenges to image-based crack detection using low-level features. In this paper, we propose DeepCrack, an end-to-end trainable deep convolutional neural network for automatic crack detection that learns high-level features for crack representation. In this method, multi-scale deep convolutional features learned at hierarchical convolutional stages are fused to capture the line structures: larger-scale feature maps provide more detailed representations, while smaller-scale feature maps provide more holistic ones. We build the DeepCrack network on the encoder-decoder architecture of SegNet and fuse, in pairs, the convolutional features generated at the same scale in the encoder and decoder networks. We train DeepCrack on one crack dataset and evaluate it on three others. The experimental results demonstrate that DeepCrack achieves an average F-measure above 0.87 on the three challenging datasets and outperforms the current state-of-the-art methods.
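To make the fusion scheme concrete, the sketch below illustrates the idea described in the abstract in PyTorch: a SegNet-style encoder-decoder in which the encoder and decoder feature maps at each scale are concatenated in pairs, reduced by 1x1 convolutions to single-channel side outputs, upsampled to the input resolution, and merged into one crack probability map. This is a minimal sketch written from the abstract alone, not the authors' released implementation; the class name DeepCrackSketch, the VGG-like channel configuration, and the 1x1 side/fusion convolutions are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch, n_convs=2):
        # VGG/SegNet-style stack of 3x3 conv + BN + ReLU layers.
        layers = []
        for i in range(n_convs):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)

    class DeepCrackSketch(nn.Module):
        # Hypothetical reconstruction: encoder-decoder with pairwise
        # same-scale feature fusion and multi-scale side outputs.
        def __init__(self, chs=(64, 128, 256, 512, 512)):
            super().__init__()
            self.chs = chs
            self.encoders = nn.ModuleList()
            in_ch = 3
            for ch in chs:
                self.encoders.append(conv_block(in_ch, ch))
                in_ch = ch
            # Decoder block at scale j consumes the concatenated (encoder,
            # decoder) pair and reduces channels for the next, larger scale.
            self.decoders = nn.ModuleList(
                conv_block(2 * chs[j], chs[max(j - 1, 0)])
                for j in reversed(range(len(chs))))
            # 1x1 convs turning each fused pair into a 1-channel side output.
            self.sides = nn.ModuleList(
                nn.Conv2d(2 * chs[j], 1, 1) for j in reversed(range(len(chs))))
            self.fuse = nn.Conv2d(len(chs), 1, 1)  # merge all side outputs

        def forward(self, x):  # H and W must be divisible by 2 ** len(chs)
            h, w = x.shape[2:]
            feats, sizes, indices = [], [], []
            for enc in self.encoders:
                x = enc(x)
                feats.append(x)
                sizes.append(x.shape[2:])
                x, idx = F.max_pool2d(x, 2, 2, return_indices=True)
                indices.append(idx)
            sides = []
            for i, (dec, side) in enumerate(zip(self.decoders, self.sides)):
                j = len(self.chs) - 1 - i
                # SegNet-style unpooling with the stored max-pooling indices.
                x = F.max_unpool2d(x, indices[j], 2, 2, output_size=sizes[j])
                # Pairwise fusion of same-scale encoder and decoder features.
                fused = torch.cat([feats[j], x], dim=1)
                sides.append(F.interpolate(side(fused), size=(h, w),
                                           mode='bilinear', align_corners=False))
                x = dec(fused)
            # Fuse the multi-scale predictions into the final crack map.
            return torch.sigmoid(self.fuse(torch.cat(sides, dim=1)))

For a quick shape check, DeepCrackSketch()(torch.randn(1, 3, 256, 256)) yields a (1, 1, 256, 256) probability map. In a training setup, each side output and the fused map would plausibly be supervised against the ground-truth crack map with a per-pixel binary cross-entropy loss, though the exact loss formulation is not specified in the abstract.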

Authors

Qin Zou, Zheng Zhang, Qingquan Li, Xianbiao Qi, Qian Wang, and Song Wang
