Article

Pixel-Level Remote Sensing Image Recognition Based on Bidirectional Word Vectors

Journal

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Volume 58, Issue 2, Pages 1281-1293

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TGRS.2019.2945591

Keywords

Feature extraction; Remote sensing; Recurrent neural networks; Image recognition; Semantics; Deep learning; Image color analysis; Attention mechanism; bidirectional independent recurrent neural network (BiIndRNN); bidirectional word vector; graph convolutional networks (GCNs); parallel joint algorithm; sliced recurrent neural network (SRNN)

Funding

  1. Xinjiang Uygur Autonomous Region Natural Science Fund Project [2016D01C050]
  2. Xinjiang Autonomous Region Science and Technology Talents Training Project [QN2016YX0051]

Abstract

In traditional remote sensing image recognition, handcrafted features (e.g., color and texture features) cannot fully describe complex images, and the relationships between image pixels are poorly captured. A single model, or a conventional sequential joint model, also tends to lose deep features during feature mining. This article proposes a new feature extraction method that uses word embedding techniques from natural language processing to generate bidirectional real-valued dense vectors that reflect the contextual relationships between pixels. A bidirectional independent recurrent neural network (BiIndRNN) is combined with a convolutional neural network (CNN) to improve the sliced recurrent neural network (SRNN) model, which is then placed in parallel with graph convolutional networks (GCNs) under an attention mechanism to fully exploit the deep features of images and capture the semantic information of the context. The resulting model is named the improved SRNN and attention-treated GCN-based parallel (SAGP) model. Experiments on Populus euphratica forest imagery demonstrate that the proposed method outperforms traditional methods in recognition accuracy, and validation on a public data set confirms this result.
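The sketch below is not the authors' implementation; it is a minimal illustration of the parallel structure the abstract describes: a sequence branch over per-pixel word vectors (a CNN followed by a bidirectional recurrent layer, with nn.GRU used here as a stand-in for the BiIndRNN-improved SRNN) fused with a plain GCN branch through a learned attention gate. All layer sizes, the adjacency normalization, and the class names (SimpleGCNLayer, SAGPSketch) are illustrative assumptions.

```python
# Minimal SAGP-style sketch (assumptions throughout, not the published model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W); A_hat is pre-normalized."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return F.relu(a_hat @ self.linear(h))

class SAGPSketch(nn.Module):
    """Parallel fusion of a recurrent (context) branch and a GCN branch."""
    def __init__(self, embed_dim, hidden_dim, num_classes):
        super().__init__()
        # Sequence branch: 1-D conv over the pixel sequence, then a
        # bidirectional GRU standing in for the BiIndRNN-improved SRNN.
        self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)
        self.birnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Graph branch: one GCN layer over the pixel-adjacency graph.
        self.gcn = SimpleGCNLayer(embed_dim, 2 * hidden_dim)
        # Attention gate that weights the two branches per pixel.
        self.att = nn.Linear(4 * hidden_dim, 2)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x, a_hat):
        # x: (batch, seq_len, embed_dim) bidirectional pixel word vectors
        # a_hat: (seq_len, seq_len) normalized adjacency between pixels
        seq = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        seq, _ = self.birnn(seq)                     # (B, L, 2H)
        graph = self.gcn(x, a_hat)                   # (B, L, 2H)
        weights = F.softmax(self.att(torch.cat([seq, graph], dim=-1)), dim=-1)
        fused = weights[..., :1] * seq + weights[..., 1:] * graph
        return self.classifier(fused)                # per-pixel class scores

# Toy usage: 16 pixels per sample, 32-dim embeddings, 4 classes.
x = torch.randn(2, 16, 32)
a_hat = torch.eye(16)          # placeholder normalized adjacency
logits = SAGPSketch(32, 64, 4)(x, a_hat)
print(logits.shape)            # torch.Size([2, 16, 4])
```

The per-pixel softmax gate is one simple way to realize "parallel construction under an attention mechanism"; the paper's actual attention treatment of the GCN branch may differ.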

