Article

Remote Sensing Image Change Detection With Transformers

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2021.3095166

Keywords

Semantics; Context modeling; Feature extraction; Computational modeling; Task analysis; Buildings; Radio frequency; Attention mechanism; change detection (CD); convolutional neural networks (CNNs); high-resolution (HR) optical remote sensing (RS) image; transformers

Funding

  1. National Key Research and Development Program of China [2019YFC1510905]
  2. National Natural Science Foundation of China [61671037]
  3. Beijing Natural Science Foundation [4192034]

Abstract

This study introduces a bitemporal image transformer (BIT) for efficient and effective change detection by modeling contexts in the spatial-temporal domain. The BIT model demonstrates superior performance and efficiency on three CD datasets, significantly outperforming the purely convolutional baseline model with lower computational costs.
Modern change detection (CD) has achieved remarkable success through the powerful discriminative ability of deep convolutions. However, high-resolution remote sensing CD remains challenging due to the complexity of objects in the scene: objects with the same semantic concept may show distinct spectral characteristics at different times and spatial locations. Most recent CD pipelines built on pure convolutions still struggle to relate long-range concepts in space-time. Nonlocal self-attention approaches show promising performance by modeling dense relationships among pixels, yet are computationally inefficient. Here, we propose a bitemporal image transformer (BIT) to efficiently and effectively model contexts within the spatial-temporal domain. Our intuition is that the high-level concepts of the change of interest can be represented by a few visual words, that is, semantic tokens. To this end, we express the bitemporal images as a few tokens and use a transformer encoder to model contexts in the compact token-based space-time. The learned context-rich tokens are then fed back to the pixel space to refine the original features via a transformer decoder. We incorporate BIT into a deep feature differencing-based CD framework. Extensive experiments on three CD datasets demonstrate the effectiveness and efficiency of the proposed method. Notably, our BIT-based model significantly outperforms the purely convolutional baseline with only one-third of its computational cost and model parameters. Based on a naive backbone (ResNet18) without sophisticated structures (e.g., a feature pyramid network (FPN) or UNet), our model surpasses several state-of-the-art CD methods, outperforming four recent attention-based methods in both efficiency and accuracy. Our code is available at https://github.com/justchenhao/BIT_CD.
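The tokenize → encode → decode → differencing pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the attention-pooling tokenizer, the single-layer attention blocks, and all weights, dimensions, and function names here are simplified stand-ins for the learned modules in the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tokenize(feat, W_a):
    # feat: (HW, C) pixel features; W_a: (C, L) token-attention weights.
    # Spatial attention pools HW pixel features into L compact semantic tokens.
    attn = softmax(feat @ W_a, axis=0)   # (HW, L): per-token weights over pixels
    return attn.T @ feat                 # (L, C): weighted sums of pixel features

def attention(Q, K, V):
    # Scaled dot-product attention (single head, no learned projections).
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

# Toy dimensions: HW pixels, C channels, L tokens per temporal image.
HW, C, L = 64, 32, 4
rng = np.random.default_rng(0)
feat_t1 = rng.standard_normal((HW, C))   # backbone features at time 1
feat_t2 = rng.standard_normal((HW, C))   # backbone features at time 2
W_a = rng.standard_normal((C, L))

# 1) Tokenize each temporal image and concatenate into one bitemporal token set.
tokens = np.concatenate([tokenize(feat_t1, W_a),
                         tokenize(feat_t2, W_a)])           # (2L, C)

# 2) "Encoder": self-attention over the compact token set models
#    space-time context far more cheaply than dense pixel-wise attention.
tokens = tokens + attention(tokens, tokens, tokens)

# 3) "Decoder": each pixel queries the context-rich tokens to refine its feature.
refined_t1 = feat_t1 + attention(feat_t1, tokens, tokens)
refined_t2 = feat_t2 + attention(feat_t2, tokens, tokens)

# 4) Feature differencing, as in the deep feature differencing-based framework;
#    the difference map would feed a small change classifier head.
diff = np.abs(refined_t1 - refined_t2)                      # (HW, C)
```

The key efficiency point is step 2: self-attention runs over 2L tokens instead of HW pixels, so its cost is independent of image resolution.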

