Proceedings Paper

On The Exploration of Vision Transformers in Remote Sensing Building Extraction

Publisher

IEEE
DOI: 10.1109/ISM55400.2022.00046

Keywords

Remote Sensing; Transformers; building; extraction; segmentation

Funding

  1. Horizon 2020, the European Union's Programme for Research and Innovation [870373-SnapEarth]


Abstract

This study compares different Transformer-based semantic segmentation architectures to evaluate their predictive performance and computational efficiency in extracting building footprints from remote sensing imagery. Four new architectures are introduced and compared with existing baselines.
Extracting building information from satellite and other remote sensing (RS) data has become a valuable tool for a variety of applications, such as damage detection, infrastructure construction, land-use management, and building energy consumption estimation. Recently, deep learning methods have made considerable progress in extracting building footprints from RS imagery, but many challenges persist. Convolutional Neural Networks (CNNs) have been the standard approach to segmenting buildings, but they cannot accurately capture the global connectivity of representations. To overcome this limitation, researchers proposed Vision Transformers, which achieved state-of-the-art accuracy in computer vision tasks [1]. Several Transformer-based architectures for building extraction from RS imagery have been proposed recently; however, their differing experimental setups make it difficult to compare them and draw meaningful conclusions. Considering this, the current manuscript presents an analytical comparison of diverse Transformer-based semantic segmentation architectures, evaluating their predictive performance and computational efficiency on three RS building footprint extraction datasets. Moreover, this work introduces four new architectures, which are extensively compared with literature baselines.
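The global-connectivity point above can be illustrated with a minimal, framework-free sketch (NumPy only; the patch size, embedding dimension, and random projection weights here are illustrative assumptions, not the paper's architectures): an image is split into patch tokens, and a single scaled dot-product self-attention layer lets every patch attend to every other patch regardless of spatial distance, unlike a convolution with a fixed local kernel.

```python
import numpy as np

def image_to_patches(img, patch=4):
    """Split an HxWxC image into non-overlapping, flattened patch tokens."""
    H, W, C = img.shape
    rows, cols = H // patch, W // patch
    p = img[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch, C)
    # Reorder to (row, col, patch_h, patch_w, C), then flatten each patch.
    return p.transpose(0, 2, 1, 3, 4).reshape(rows * cols, patch * patch * C)

def self_attention(x, d_k, seed=0):
    """Single-head scaled dot-product self-attention over patch tokens.

    The (n, n) attention matrix connects every token to every other
    token in one step -- the global receptive field that motivates
    Vision Transformers for segmentation.
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)  # illustrative random weights
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

img = np.random.default_rng(1).random((32, 32, 3))   # toy 32x32 RGB tile
tokens = image_to_patches(img, patch=4)              # (64, 48): 8x8 patches of 4*4*3 values
out, attn = self_attention(tokens, d_k=16)           # out: (64, 16), attn: (64, 64)
```

Here `attn[i, j]` weights how much patch `i` draws information from patch `j`, so even two buildings at opposite corners of the tile interact in a single layer; a CNN would need many stacked layers for its receptive field to span the same distance.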

