4.7 Review

Deep learning in multimodal remote sensing data fusion: A comprehensive review

Publisher

ELSEVIER
DOI: 10.1016/j.jag.2022.102926

Keywords

Artificial intelligence; Data fusion; Deep learning; Multimodal; Remote sensing

Funding

  1. National Key Research and Development Program of China [2021YFB3900502]
  2. National Natural Science Foundation of China [42030111]
  3. MIAI@Grenoble Alpes [ANR-19-P3IA-0003]
  4. AXA Research Fund

This survey provides a systematic overview of deep learning-based multimodal remote sensing data fusion. It first introduces essential knowledge in the field and then conducts a literature survey to analyze its trends. It reviews prevalent sub-fields organized by the data modalities being fused, and finally collects and summarizes valuable resources and highlights remaining challenges.
With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is readily available nowadays, which affords researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years, yet traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyze and interpret strongly heterogeneous data. This limitation has therefore created an intense demand for an alternative tool with powerful processing capability. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to multimodal RS data fusion, yielding great improvements over traditional methods. This survey aims to present a systematic overview of DL-based multimodal RS data fusion. More specifically, some essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyze the trends of the field. Some prevalent sub-fields in multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize valuable resources to support the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.
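
As a concrete illustration of the feature-level fusion paradigm that many of the reviewed DL methods build on, the sketch below shows a minimal two-branch network for a hypothetical SAR-optical classification task. The framework (PyTorch), class name, layer sizes, band counts, and number of classes are assumptions made here for illustration only; they are not taken from the paper or from any specific method it surveys.

    import torch
    import torch.nn as nn

    class TwoBranchFusionNet(nn.Module):
        """Illustrative two-branch feature-level fusion network for a
        hypothetical SAR-optical classification task; sizes are arbitrary."""

        def __init__(self, sar_channels=1, opt_channels=4, num_classes=10):
            super().__init__()
            # Modality-specific encoders process each input separately.
            self.sar_encoder = nn.Sequential(
                nn.Conv2d(sar_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.opt_encoder = nn.Sequential(
                nn.Conv2d(opt_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Fusion head: concatenate the per-modality features and classify jointly.
            self.classifier = nn.Sequential(
                nn.Linear(64 + 64, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, sar, optical):
            fused = torch.cat([self.sar_encoder(sar), self.opt_encoder(optical)], dim=1)
            return self.classifier(fused)

    # Random tensors stand in for co-registered image patches.
    model = TwoBranchFusionNet()
    sar_patch = torch.randn(2, 1, 64, 64)      # single-band SAR
    optical_patch = torch.randn(2, 4, 64, 64)  # e.g., 4-band multispectral
    logits = model(sar_patch, optical_patch)   # shape: (2, 10)

Concatenation is only the simplest possible fusion operator; the methods reviewed in the survey fuse at different levels (pixel, feature, or decision) and with more elaborate mechanisms, but the two-branch structure above captures the common starting point.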

