Article

Evaluating generative adversarial networks based image-level domain transfer for multi-source remote sensing image segmentation and object detection

Journal

INTERNATIONAL JOURNAL OF REMOTE SENSING
Volume 41, Issue 19, Pages 7327-7351

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/01431161.2020.1757782

Keywords

-

The appearance and quality of remote sensing images are affected by atmospheric conditions, sensor quality, and radiometric calibration. This heavily challenges the generalization ability of deep learning and other machine learning models: the performance of a model pretrained on a source remote sensing data set can drop significantly when it is applied to a different target data set. Generative adversarial networks (GANs) can realize style or appearance transfer between source and target data sets, which may boost the performance of a deep learning model by generating new target images that resemble source samples. In this study, we comprehensively evaluate the performance of GAN-based image-level transfer methods on convolutional neural network (CNN) based image processing models that are trained on one data set and tested on another. First, we designed a framework for the evaluation process. The framework consists of two main parts: GAN-based image-level domain adaptation, which transfers a target image to a new image whose probability distribution resembles the source image space, and CNN-based image processing tasks, which are used to test the effects of the GAN-based domain adaptation. Second, the domain adaptation is implemented with two mainstream GAN methods for style transfer, CycleGAN and AgGAN. The image processing covers two major tasks, segmentation and object detection, designed around the widely applied U-Net and Faster R-CNN, respectively. Finally, three experiments, each associated with its own data set, are designed to cover different application cases: a change detection case, where temporal data are collected from the same scene; a two-city case, where images are collected from different regions; and a two-sensor case, where images are obtained from aerial and satellite platforms, respectively.
Results revealed that GAN-based image transfer can significantly boost the performance of the segmentation model in the change detection case, although it did not surpass conventional methods; in the other two cases, the GAN-based methods obtained worse results. In object detection, almost all the methods failed to boost the performance of Faster R-CNN, and the GAN-based methods performed the worst.
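The abstract notes that GAN-based transfer did not surpass "conventional methods" for image-level adaptation. The paper's abstract does not name these methods, but histogram matching is a common conventional baseline for this task: it remaps the intensities of a target image so their cumulative distribution matches that of a source image, band by band. A minimal sketch, assuming single-band 8-bit imagery and NumPy (all names here are illustrative, not from the paper):

```python
import numpy as np

def histogram_match(target, source, n_levels=256):
    """Remap target intensities so their CDF matches the source CDF.

    A classic image-level domain-adaptation baseline, applied per band.
    """
    t_hist, _ = np.histogram(target.ravel(), bins=n_levels, range=(0, n_levels))
    s_hist, _ = np.histogram(source.ravel(), bins=n_levels, range=(0, n_levels))
    t_cdf = np.cumsum(t_hist) / target.size
    s_cdf = np.cumsum(s_hist) / source.size
    # For each target grey level, pick the source level with the nearest CDF value.
    mapping = np.searchsorted(s_cdf, t_cdf).clip(0, n_levels - 1)
    return mapping[target].astype(np.uint8)

# Illustration: a dark "target" image adapted toward a bright "source" distribution.
rng = np.random.default_rng(0)
target = rng.integers(0, 100, size=(64, 64))    # dark target-domain image
source = rng.integers(100, 256, size=(64, 64))  # bright source-domain image
adapted = histogram_match(target, source)
```

After matching, the adapted image's intensity distribution tracks the source image's, which is the same goal the GAN-based transfer pursues with a learned generator instead of a per-band lookup table.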
