Article

Training Deep Convolutional Neural Networks for Land-Cover Classification of High-Resolution Imagery

Journal

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
Volume 14, Issue 4, Pages 549-553

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LGRS.2017.2657778

Keywords

Deep convolutional neural network (DCNN); deep learning; high-resolution remote sensing imagery; land-cover classification; transfer learning (TL)

Abstract

Deep convolutional neural networks (DCNNs) have recently emerged as a dominant paradigm for machine learning in a variety of domains. However, acquiring a suitably large data set for training a DCNN is often a significant challenge. This is a major issue in the remote sensing domain, where we have extremely large collections of satellite and aerial imagery but lack the rich label information that is often readily available for other image modalities. In this letter, we investigate the use of DCNNs for land-cover classification in high-resolution remote sensing imagery. To overcome the lack of massive labeled remote sensing image data sets, we employ two techniques in conjunction with DCNNs: transfer learning (TL) with fine-tuning and data augmentation tailored specifically for remote sensing imagery. TL allows one to bootstrap a DCNN while preserving the deep visual feature extractor learned over an image corpus from a different image domain. Data augmentation exploits various aspects of remote sensing imagery to dramatically expand small training data sets and improve DCNN robustness for remote sensing image data. Applying these techniques to the well-known UC Merced data set, we achieve land-cover classification accuracies of 97.8 +/- 2.3%, 97.6 +/- 2.6%, and 98.5 +/- 1.4% with CaffeNet, GoogLeNet, and ResNet, respectively.
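The augmentation idea the abstract describes rests on a property specific to overhead imagery: unlike ground-level photographs, a satellite or aerial patch has no canonical "up", so rotated and mirrored copies are all equally valid training samples. A minimal sketch of this idea in NumPy (an illustration of the general technique, not the authors' exact pipeline; the function name is hypothetical):

```python
import numpy as np

def dihedral_augment(patch):
    """Return the 8 rotation/flip variants of a square image patch.

    Overhead imagery has no canonical orientation, so the four
    90-degree rotations and their horizontal mirrors are all
    plausible views of the same land cover -- an 8x expansion of
    the labeled training set from each original patch.
    """
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirror image of each rotation
    return variants

# Example: augment a dummy 4x4 single-band patch
patch = np.arange(16).reshape(4, 4)
augmented = dihedral_augment(patch)
print(len(augmented))  # 8 variants
```

For an asymmetric patch these eight variants are all distinct, which is why this style of augmentation is a cheap way to stretch a small labeled remote sensing data set before fine-tuning a pretrained network.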
