Article

Cross-city matters: A multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks

Journal

REMOTE SENSING OF ENVIRONMENT
Volume 299

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.rse.2023.113856

Keywords

Cross-city; Deep learning; Dice loss; Domain adaptation; High-resolution network; Land cover; Multimodal benchmark datasets; Remote sensing; Segmentation

Abstract

This study addresses the performance bottleneck that AI models face when moving beyond single-city environments by constructing a new multimodal remote sensing benchmark dataset and proposing a high-resolution domain adaptation network called HighDAN. HighDAN improves the generalization ability of AI models across multi-city environments and reduces image-representation differences between cities through adversarial learning. Extensive experiments demonstrate the superiority of HighDAN in terms of segmentation performance and generalization ability.
Artificial intelligence (AI) approaches have achieved remarkable success in single-modality-dominated remote sensing (RS) applications, especially those focused on individual urban environments (e.g., single cities or regions). Yet these AI models tend to hit a performance bottleneck in case studies across cities or regions, owing to the lack of diverse RS information and of cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for the cross-city semantic segmentation task (the C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Going beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability across multi-city environments. HighDAN is capable not only of retaining the spatial topological structure of the studied urban scene well, via a parallel high-to-low resolution fusion fashion, but also of closing the gap arising from the enormous differences in RS image representations between cities by means of adversarial learning. In addition, a Dice loss is used in HighDAN to alleviate the class imbalance caused by factors that vary across cities. Extensive experiments conducted on the C2Seg dataset show the superiority of HighDAN in terms of segmentation performance and generalization ability compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (involving the proposed HighDAN) will be made publicly available at https://github.com/danfenghong/RSE_Cross-city.
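
To make the two techniques named in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of (i) a multi-class Dice loss of the kind used to counter cross-city class imbalance and (ii) a small domain discriminator illustrating GAN-style adversarial alignment of features between a source city and a target city. Function names, tensor shapes, and the training-step structure are illustrative assumptions, not the authors' released implementation; the official code is in the linked GitHub repository.

```python
# Illustrative sketch only: generic Dice loss + adversarial domain alignment,
# assumed shapes (B, C, H, W); not the HighDAN reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def dice_loss(logits, target, eps=1e-6):
    """Soft multi-class Dice loss.

    logits: (B, C, H, W) raw class scores; target: (B, H, W) integer labels.
    """
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()


class DomainDiscriminator(nn.Module):
    """Predicts whether a feature map comes from the source or the target city."""

    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, features):
        return self.net(features)  # (B, 1, H, W) per-location domain logits


def adversarial_losses(disc, feat_src, feat_tgt):
    """One GAN-style step: the discriminator separates source/target features,
    while the segmentation network tries to make target features look like source."""
    bce = nn.BCEWithLogitsLoss()
    d_src = disc(feat_src.detach())  # detach: discriminator update only
    d_tgt = disc(feat_tgt.detach())
    loss_disc = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    # Segmentation-network side: fool the discriminator on target-city features.
    d_tgt_for_seg = disc(feat_tgt)
    loss_adv = bce(d_tgt_for_seg, torch.ones_like(d_tgt_for_seg))
    return loss_disc, loss_adv
```

In a typical training loop of this kind, the segmentation network would minimize a cross-entropy plus Dice term on labeled source-city data together with the adversarial term on unlabeled target-city data, while the discriminator is updated separately with its own loss.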
