Article

GCFnet: Global Collaborative Fusion Network for Multispectral and Panchromatic Image Classification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2022.3215020

Keywords

Feature extraction; Remote sensing; Collaboration; Training; Spatial resolution; Image classification; Decoding; Classification; deep learning (DL); feature fusion; global collaborative fusion; remote sensing

Funding

  1. National Key Research and Development Program of China [2018YFB0505000]
  2. National Natural Science Foundation of China [42071324, 42001387]
  3. Shanghai Rising-Star Program [21QA1409100]

Abstract
Among various multimodal remote sensing data, the pairing of multispectral (MS) and panchromatic (PAN) images is widely used in remote sensing applications. This article proposes a novel global collaborative fusion network (GCFnet) for the joint classification of MS and PAN images. In particular, a global patch-free classification scheme based on an encoder-decoder deep learning (DL) network is developed to exploit context dependencies in the image. The proposed GCFnet is built on a novel collaborative fusion architecture comprising three parts: 1) two shallow-to-deep feature fusion branches for the individual MS and PAN images; 2) a multiscale cross-modal feature fusion branch for the two images, where an adaptive loss-weighted fusion strategy computes the total loss over the two individual branches and the cross-modal branch; and 3) a probability-weighted decision fusion strategy that fuses the classification results of the three branches to further improve classification performance. Experimental results on three real datasets covering complex urban scenarios confirm that the proposed GCFnet achieves higher accuracy and robustness than existing methods. By exploiting both sampled and unsampled positions during feature extraction, GCFnet achieves strong performance even in small-sample-size cases.
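The two fusion strategies summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the uncertainty-style parameterization of the adaptive loss weights and the use of per-pixel maximum class probability as the branch confidence are assumptions made for the sketch.

```python
import numpy as np

def softmax(logits, axis=1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_weighted_loss(branch_losses, log_sigmas):
    """Adaptive loss weighting over the three branches (MS, PAN, cross-modal).

    Hypothetical uncertainty-style weighting: each branch loss is scaled by
    exp(-s) with a +s regularizer, where s is a learnable per-branch scalar.
    """
    branch_losses = np.asarray(branch_losses, dtype=float)
    log_sigmas = np.asarray(log_sigmas, dtype=float)
    return float((np.exp(-log_sigmas) * branch_losses + log_sigmas).sum())

def probability_weighted_decision_fusion(logits_list):
    """Probability-weighted fusion of per-branch classification maps.

    Each branch's class probabilities (shape: batch, classes, H, W) are
    weighted by a per-pixel confidence (here: the maximum class probability,
    an assumed choice), then renormalized to valid probabilities.
    """
    fused, total_w = 0.0, 0.0
    for logits in logits_list:
        p = softmax(logits, axis=1)           # (B, C, H, W) probabilities
        w = p.max(axis=1, keepdims=True)      # (B, 1, H, W) confidence
        fused = fused + w * p
        total_w = total_w + w
    return fused / total_w
```

Because the confidence weights are renormalized, the fused output remains a valid per-pixel probability distribution over classes, so the final label map is simply its argmax along the class axis.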

