4.7 Article

A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images

Publisher

ELSEVIER
DOI: 10.1016/j.jag.2020.102086

Keywords

VHR image; Object-based image classification; Remote sensing classification; Convolutional neural network; Deep learning

Funding

  1. Major State Research Development Program of China [2017YFB0504103]
  2. National Natural Science Foundation of China [41722109, 61825103, 91738302]
  3. Hubei Provincial Natural Science Foundation of China [2018CFA053]
  4. Wuhan Yellow Crane Talents (Science) Program

Abstract

Classification of very high resolution imagery (VHRI) is challenging due to the difficulty of mining complex spatial and spectral patterns from rich image details. Various object-based convolutional neural networks (OCNN) have been proposed for VHRI classification to overcome the drawbacks of redundant pixel-wise CNNs, owing to their low computational cost and fine contour preservation. However, the classification performance of OCNN is still limited by geometric distortions, insufficient feature representation, and a lack of contextual guidance. In this paper, an innovative multi-level context-guided classification method with OCNN (MLCG-OCNN) is proposed. A feature-fusing OCNN, which combines an object contour-preserving mask strategy with a supplementary object deformation coefficient, is developed for accurate object discrimination by simultaneously learning high-level features from independent spectral patterns, geometric characteristics, and object-level contextual information. Pixel-level contextual guidance is then used to further improve the per-object classification results. The MLCG-OCNN method is intentionally tested on two validated small image datasets with limited training samples, to assess its performance in land cover classification applications where a trade-off between the time spent on sample training and the overall accuracy must be found, as is common in practice. Compared with traditional benchmark methods, including the patch-based per-pixel CNN (PBPP), the patch-based per-object CNN (PBPO), the pixel-wise CNN with object segmentation refinement (PO), the semantic segmentation network U-Net (U-NET), and DeepLabV3+ (DLV3+), the MLCG-OCNN method achieves remarkable classification performance (> 80 %). Compared with the state-of-the-art architecture DeepLabV3+, the MLCG-OCNN method also demonstrates high computational efficiency for VHRI classification (4-5 times faster).
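
The abstract gives no implementation details; the minimal NumPy sketch below only illustrates the general idea of an object contour-preserving mask with a supplementary deformation coefficient, as summarized above. The function name `object_patch_with_mask`, the patch size, and the deformation formula are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the object contour-preserving mask
# idea summarized in the abstract: for each segmented object, a fixed-size patch
# is cropped around the object, pixels outside the object are zeroed so the CNN
# sees the object's true contour, and a simple "deformation coefficient" is
# attached as an auxiliary geometric feature. All names and the deformation
# formula below are illustrative assumptions only.

import numpy as np


def object_patch_with_mask(image, seg_labels, object_id, patch_size=64):
    """Crop a patch centred on one segmented object and mask out other pixels."""
    ys, xs = np.nonzero(seg_labels == object_id)
    cy, cx = int(ys.mean()), int(xs.mean())                  # object centroid
    h, w = seg_labels.shape
    half = patch_size // 2
    top = int(np.clip(cy - half, 0, h - patch_size))
    left = int(np.clip(cx - half, 0, w - patch_size))
    patch = image[top:top + patch_size, left:left + patch_size].astype(np.float32).copy()
    mask = seg_labels[top:top + patch_size, left:left + patch_size] == object_id
    patch[~mask] = 0.0                                        # contour-preserving mask
    # Hypothetical deformation coefficient: bounding-box area divided by the
    # object's pixel area, a crude proxy for how strongly the object deviates
    # from a compact, axis-aligned shape.
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    deformation = bbox_area / float(len(ys))
    return patch, mask.astype(np.float32), deformation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256, 3))                           # synthetic VHR tile
    labels = np.zeros((256, 256), dtype=int)
    labels[100:140, 80:180] = 7                               # one synthetic object
    patch, mask, coeff = object_patch_with_mask(img, labels, object_id=7)
    print(patch.shape, mask.shape, round(coeff, 3))           # (64, 64, 3) (64, 64) 1.0
```

The masked patch and the deformation scalar would then feed the per-object CNN, with pixel-level contextual guidance applied afterwards to refine the per-object labels, following the pipeline the abstract describes.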

