Article

Synthesizing Optical and SAR Imagery From Land Cover Maps and Auxiliary Raster Data

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2021.3068532

Keywords

Generators; Semantics; Remote sensing; Image synthesis; Radar polarimetry; Image segmentation; Training; Deep learning; Generative adversarial network (GAN); Synthetic aperture radar (SAR)

Funding

  1. Japan Society for the Promotion of Science (JSPS), KAKENHI Grants-in-Aid for Scientific Research [18K18067, 20K19834]

Abstract

We synthesize both optical RGB and synthetic aperture radar (SAR) remote sensing images from land cover maps and auxiliary raster data using generative adversarial networks (GANs). In remote sensing, many types of data, such as digital elevation models (DEMs) or precipitation maps, are often not reflected in land cover maps but still influence image content or structure. Including such data in the synthesis process increases the quality of the generated images and exerts more control over their characteristics. Spatially adaptive normalization layers fuse both inputs and are applied to a full generator architecture, consisting of an encoder and a decoder, to take full advantage of the information content in the auxiliary raster data. Our method successfully synthesizes medium (10 m) and high (1 m) resolution images when trained with the corresponding data set. We show the advantage of fusing land cover maps and auxiliary information using mean intersection over union (mIoU), pixel accuracy, and Frechet inception distance (FID), computed with pretrained U-Net segmentation models. Handpicked images exemplify how fusing information avoids ambiguities in the synthesized images. By slightly editing the input, our method can be used to synthesize realistic changes, e.g., raising the water level. The source code is available at https://github.com/gbaier/rs_img_synth, and we published the newly created high-resolution data set at https://ieee-dataport.org/open-access/geonrw.
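The spatially adaptive normalization the abstract describes can be sketched roughly as follows: features are first normalized without learned affine parameters, then rescaled and shifted per pixel by modulation maps predicted from the fused semantic/auxiliary input. This is a minimal NumPy illustration in the spirit of SPADE-style layers; the function and weight names (`spade_norm`, `w_gamma`, `w_beta`) are hypothetical, and the actual model predicts the modulation with small convolutions rather than the 1x1 projection used here.

```python
import numpy as np

def spade_norm(features, segmap, w_gamma, w_beta, eps=1e-5):
    """Spatially adaptive normalization sketch.

    features: (C, H, W) activation map to normalize.
    segmap:   (K, H, W) one-hot land cover map stacked with auxiliary
              raster channels (e.g., a DEM band), resized to the feature grid.
    w_gamma, w_beta: (C, K) weights of a 1x1 projection that predicts the
              per-pixel scale and shift from the fused input (hypothetical;
              stands in for the small conv network of a real SPADE layer).
    """
    # Parameter-free normalization over the spatial dimensions, per channel.
    mu = features.mean(axis=(1, 2), keepdims=True)
    var = features.var(axis=(1, 2), keepdims=True)
    normed = (features - mu) / np.sqrt(var + eps)

    # Per-pixel modulation predicted from the semantic/auxiliary input,
    # so land cover class and auxiliary data steer the synthesized content.
    gamma = np.einsum("ck,khw->chw", w_gamma, segmap)
    beta = np.einsum("ck,khw->chw", w_beta, segmap)
    return normed * (1.0 + gamma) + beta

# Usage: 4 feature channels, a 3-channel fused input on an 8x8 grid.
rng = np.random.default_rng(0)
feat = rng.random((4, 8, 8))
seg = np.zeros((3, 8, 8))
seg[0] = 1.0  # one land cover class everywhere
out = spade_norm(feat, seg, rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
```

Because the scale and shift vary per pixel with the input map, editing the map (e.g., enlarging a water region) directly changes the modulation and hence the synthesized image in that region.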

