4.7 Article

Self-supervision assisted multimodal remote sensing image classification with coupled self-looping convolution networks

Journal

NEURAL NETWORKS
Volume 164, Issue -, Pages 1-20

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.04.019

Keywords

Cross-modal self-supervision; Hyperspectral images; Image classification; Convolutional neural networks; Coupled self-looping networks

Abstract

Recently, the remote sensing community has seen a surge in the use of multimodal data for different tasks such as land cover classification, change detection, and more. However, handling multimodal data requires synergistically using the information from different sources. Currently, deep learning (DL) techniques are widely used in multimodal data fusion owing to their superior feature extraction capabilities, but they have their share of challenges. Firstly, DL models are mostly constructed in a forward-only fashion, limiting their feature extraction capability. Secondly, multimodal learning is generally addressed in a supervised setting, which leads to a high labelled-data requirement. Thirdly, the models generally handle each modality separately, thus preventing any cross-modal interaction. Hence, we propose a novel self-supervision-oriented method of multimodal remote sensing data fusion. For effective cross-modal learning, our model solves a self-supervised auxiliary task that reconstructs the input features of one modality from the extracted features of another modality, yielding more representative pre-fusion features. To counter the forward-only architecture, our model is composed of convolutions in both the backward and forward directions, creating self-looping connections that lead to a self-correcting framework. To facilitate cross-modal communication, we incorporate coupling across the modality-specific extractors using shared parameters. We evaluate our approach on three remote sensing datasets, namely Houston 2013 and Houston 2018, which are HSI-LiDAR datasets, and TU Berlin, which is an HSI-SAR dataset, where we achieve respective accuracies of 93.08%, 84.59%, and 73.21%, beating the state of the art by at least 3.02%, 2.23%, and 2.84%. (c) 2023 Elsevier Ltd. All rights reserved.
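
The record contains only the abstract, so the paper's actual architecture is not reproduced here. The sketch below is a minimal, hypothetical PyTorch interpretation of the three ideas the abstract names: a self-looping block pairing a forward convolution with a backward convolution, two modality-specific extractors coupled through a shared block, and a cross-modal auxiliary loss that reconstructs one modality's input from the other modality's features. All class names, layer widths, patch and band sizes, and the single refinement loop are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): coupled two-stream CNN with
# self-looping refinement and a cross-modal reconstruction auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfLoopingBlock(nn.Module):
    """Forward conv plus a 'backward' conv mapping features back to the input
    space; the discrepancy is fed back once as a self-correcting loop."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.fwd = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bwd = nn.Conv2d(out_ch, in_ch, 3, padding=1)  # backward direction

    def forward(self, x):
        y = F.relu(self.fwd(x))
        x_hat = self.bwd(y)                     # project features back to input space
        return F.relu(self.fwd(x + (x - x_hat)))  # refine using the reconstruction error

class CoupledFusionNet(nn.Module):
    """Modality-specific extractors coupled through a shared block, with
    decoders for the cross-modal self-supervised reconstruction task."""
    def __init__(self, hsi_bands=144, lidar_bands=1, n_classes=15, width=64):
        super().__init__()
        self.hsi_stem = SelfLoopingBlock(hsi_bands, width)
        self.lidar_stem = SelfLoopingBlock(lidar_bands, width)
        self.shared = SelfLoopingBlock(width, width)   # coupling via shared parameters
        # cross-modal decoders: HSI features -> LiDAR input, and vice versa
        self.to_lidar = nn.Conv2d(width, lidar_bands, 1)
        self.to_hsi = nn.Conv2d(width, hsi_bands, 1)
        self.classifier = nn.Linear(2 * width, n_classes)

    def forward(self, hsi, lidar):
        f_h = self.shared(self.hsi_stem(hsi))
        f_l = self.shared(self.lidar_stem(lidar))
        rec_lidar = self.to_lidar(f_h)          # self-supervised auxiliary targets
        rec_hsi = self.to_hsi(f_l)
        fused = torch.cat([f_h.mean(dim=(2, 3)), f_l.mean(dim=(2, 3))], dim=1)
        return self.classifier(fused), rec_hsi, rec_lidar

# Usage: joint loss = classification + cross-modal reconstruction.
model = CoupledFusionNet()
hsi = torch.randn(4, 144, 11, 11)    # assumed HSI patches (144 bands, 11x11 window)
lidar = torch.randn(4, 1, 11, 11)    # assumed single-band LiDAR DSM patches
labels = torch.randint(0, 15, (4,))
logits, rec_hsi, rec_lidar = model(hsi, lidar)
loss = (F.cross_entropy(logits, labels)
        + F.mse_loss(rec_hsi, hsi) + F.mse_loss(rec_lidar, lidar))
loss.backward()
```

In the paper itself, the reconstruction targets, the number of self-looping iterations, and where the parameter sharing is placed may differ; the sketch only illustrates how the three components can be trained jointly under a single objective.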
