Article

Convolutional Neural Networks for Multimodal Remote Sensing Data Classification

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TGRS.2021.3124913

Keywords

Feature extraction; Laser radar; Synthetic aperture radar; Task analysis; Convolutional neural networks; Hyperspectral imaging; Network architecture; Classification; convolutional neural networks (CNNs); cross-channel; hyperspectral (HS); light detection and ranging (LiDAR); multimodal; reconstruction; remote sensing (RS); synthetic aperture radar (SAR)

Funding

  1. National Natural Science Foundation of China [62101045]
  2. China Postdoctoral Science Foundation [2021M690385]
  3. MIAI@Grenoble Alpes [ANR-19-P3IA-0003]
  4. AXA Research Fund

Abstract

This paper proposes a new framework for multimodal remote sensing data classification that uses deep learning with a cross-channel reconstruction module to learn compact fusion representations of different data sources. Extensive experiments on two multimodal RS datasets demonstrate the effectiveness and superiority of the proposed method.

In recent years, considerable research effort has been devoted to improving the classification performance of single-modal remote sensing (RS) data. However, with the ever-growing availability of RS data acquired from satellite or airborne platforms, the simultaneous processing and analysis of multimodal RS data pose a new challenge to researchers in the RS community. To this end, we propose a new deep-learning-based framework for multimodal RS data classification, in which convolutional neural networks (CNNs) serve as the backbone, equipped with an advanced cross-channel reconstruction module; we call the resulting network CCR-Net. As the name suggests, CCR-Net learns more compact fusion representations of different RS data sources by means of a cross-modal reconstruction strategy, which allows the modalities to exchange information more effectively. Extensive experiments on two multimodal RS datasets, one combining hyperspectral (HS) and light detection and ranging (LiDAR) data (the Houston2013 dataset) and one combining HS and synthetic aperture radar (SAR) data (the Berlin dataset), demonstrate the effectiveness and superiority of the proposed CCR-Net in comparison with several state-of-the-art multimodal RS data classification methods. For the sake of reproducibility, the code will be openly and freely available at https://github.com/danfenghong/IEEE_TGRS_CCR-Net.
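
The abstract outlines the architecture at a high level: two CNN branches, one per modality, whose features are coupled through a cross-channel reconstruction module and then fused for classification. The PyTorch sketch below illustrates that general idea only; the layer sizes, module names (ModalityEncoder, CCRNetSketch), and loss formulation are illustrative assumptions, not the authors' design. The official implementation is at the GitHub link above.

```python
# Minimal sketch of a two-branch CNN with a cross-channel reconstruction
# loss, in the spirit of the CCR-Net described above. All layer sizes and
# names here are assumptions for illustration, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Small CNN backbone mapping one modality (e.g., HS or LiDAR/SAR
    patches) to a feature map of a shared channel width."""
    def __init__(self, in_channels: int, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, feat, kernel_size=3, padding=1),
            nn.BatchNorm2d(feat),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class CCRNetSketch(nn.Module):
    """Two encoders, a cross-channel reconstruction constraint, and a
    classification head on the fused features."""
    def __init__(self, ch_a: int, ch_b: int, num_classes: int, feat: int = 64):
        super().__init__()
        self.enc_a = ModalityEncoder(ch_a, feat)
        self.enc_b = ModalityEncoder(ch_b, feat)
        # 1x1 convs that try to reconstruct each branch's features from
        # the other branch, encouraging cross-modal information exchange.
        self.a_from_b = nn.Conv2d(feat, feat, kernel_size=1)
        self.b_from_a = nn.Conv2d(feat, feat, kernel_size=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(2 * feat, num_classes),
        )

    def forward(self, x_a, x_b):
        # x_a and x_b are co-registered patches of the two modalities
        # and must share the same spatial size.
        fa, fb = self.enc_a(x_a), self.enc_b(x_b)
        rec_loss = F.mse_loss(self.a_from_b(fb), fa) + \
                   F.mse_loss(self.b_from_a(fa), fb)
        logits = self.head(torch.cat([fa, fb], dim=1))  # fused representation
        return logits, rec_loss
```

During training, the reconstruction term would be added to the usual classification objective, e.g. loss = F.cross_entropy(logits, labels) + lam * rec_loss, where lam is an assumed trade-off hyperparameter; for a scene like Houston2013, ch_a would be the number of HS bands and ch_b the single LiDAR elevation channel.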

Authors

Xin Wu; Danfeng Hong; Jocelyn Chanussot
