Article

Spatial Information Considered Network for Scene Classification

Journal

IEEE Geoscience and Remote Sensing Letters
Volume 18, Issue 6, Pages 984-988

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/LGRS.2020.2992929

Keywords

Feature extraction; Visualization; Convolution; Remote sensing; Machine learning; Image resolution; Recurrent neural networks; Convolutional neural networks (CNNs); large-size data set; recurrent neural network (RNN); scene classification; scene spatial relationship; spatial information considered network (SIC-Net)

Funding

  1. National Key Research and Development Projects [2018YFB0504500]
  2. National Natural Science Foundation of China [41771458, 41301453]
  3. Young Elite Scientists Sponsorship Program by Hunan Province of China [2018RS3012]
  4. Hunan Science and Technology Department Innovation Platform Open Fund Project [18K005]

This study introduces a spatial-information-considered model that combines a CNN with a recurrent neural network to learn more discriminative features, thereby improving the accuracy of remote sensing image scene classification. Experimental results show that the proposed method outperforms three other state-of-the-art methods on the CSU-RSISC10 data set.
Remote sensing image (RSI) scene classification (RSISC) is a fundamental problem in understanding high-resolution RSIs. Recently, deep learning methods, especially convolutional neural networks (CNNs), together with large data sets, have greatly advanced RSISC. However, deep learning methods rely heavily on the visual features extracted from patches cropped from the original RSIs, so intraclass diversity and interclass similarity remain two major challenges. To address these problems, in this letter we propose a spatial-information-considered model to learn more discriminative features. By combining a CNN with a recurrent neural network, the proposed method exploits both local and long-range spatial relationships to enhance the representational ability of the learned features. Because the initial visual features of a single patch are transformed into higher-level features that incorporate spatial information, the proposed method achieves more accurate scene classification. In addition, we present an RSISC data set, named CSU-RSISC10, organized in a new way that preserves the spatial relationships between scenes. Experiments demonstrate that the proposed method outperforms three other state-of-the-art methods in scene classification on the CSU-RSISC10 data set.
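
The core idea described in the abstract, per-patch CNN features refined by an RNN so that each patch representation absorbs spatial context from its neighbors before classification, can be sketched as follows. This is a minimal, hypothetical illustration under stated assumptions, not the authors' SIC-Net: the module names, layer sizes, the choice of a bidirectional GRU, and the strip-of-patches input format are all assumptions made for the example.

```python
# Hypothetical sketch of a CNN + RNN scene classifier (not the authors' SIC-Net).
# A small CNN turns each patch into a feature vector; a bidirectional GRU passes
# context along a sequence of neighboring patches; a linear head predicts a
# scene class per patch from the context-aware features.
import torch
import torch.nn as nn


class CnnRnnSceneClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, feat_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        # CNN backbone: maps one RGB patch to a feat_dim feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Bidirectional GRU over the patch sequence injects long-range spatial context.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Classifier applied to each context-aware patch feature.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, seq_len, 3, H, W) -- a strip of spatially adjacent patches.
        b, s, c, h, w = patches.shape
        feats = self.cnn(patches.view(b * s, c, h, w)).view(b, s, -1)
        context, _ = self.rnn(feats)      # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(context)   # (batch, seq_len, num_classes)


if __name__ == "__main__":
    model = CnnRnnSceneClassifier(num_classes=10)
    dummy = torch.randn(2, 5, 3, 64, 64)  # 2 strips of 5 patches each
    print(model(dummy).shape)             # torch.Size([2, 5, 10])
```

The design point the sketch illustrates is that the RNN output for each patch depends on the whole strip, so the final classification uses more than the visual features of a single cropped patch.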
