Article

Evaluating generative adversarial networks based image-level domain transfer for multi-source remote sensing image segmentation and object detection

Journal

INTERNATIONAL JOURNAL OF REMOTE SENSING
Volume 41, Issue 19, Pages 7327-7351

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/01431161.2020.1757782

Abstract

The appearance and quality of remote sensing images are affected by atmospheric conditions, sensor quality, and radiometric calibration. This heavily challenges the generalization ability of deep learning and other machine learning models: the performance of a model pretrained on a source remote sensing dataset can decrease significantly when it is applied to a different target dataset. Generative adversarial networks (GANs) can realize style or appearance transfer between source and target datasets, which may boost the performance of a deep learning model by generating new target images that resemble source samples. In this study, we comprehensively evaluate the performance of GAN-based image-level transfer methods on convolutional neural network (CNN) based image processing models that are trained on one dataset and tested on another. First, we designed a framework for the evaluation process. The framework consists of two main parts: GAN-based image-level domain adaptation, which transfers a target image into a new image whose probability distribution is similar to that of the source image space, and CNN-based image processing tasks, which are used to test the effects of the GAN-based domain adaptation. Second, the domain adaptation is implemented with two mainstream GAN methods for style transfer, CycleGAN and AgGAN. The image processing covers two major tasks, segmentation and object detection, designed around the widely applied U-Net and Faster R-CNN, respectively. Finally, three experiments, associated with three datasets, are designed to cover different application cases: a change detection case, where temporal data are collected from the same scene; a two-city case, where images are collected from different regions; and a two-sensor case, where images are obtained from aerial and satellite platforms, respectively.
The results revealed that GAN-based image transfer can significantly boost the performance of the segmentation model in the change detection case, although it did not surpass conventional methods; in the other two cases, the GAN-based methods obtained worse results. In object detection, almost all of the methods failed to boost the performance of Faster R-CNN, and the GAN-based methods performed the worst.
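As a concrete illustration of the evaluation protocol described above, the sketch below scores a source-trained model on target images that are first pushed through an image-level transfer step. The histogram-matching transfer stands in for the "conventional" baselines the study compares against; all function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def histogram_match(target, source):
    """Remap target grey levels so their histogram follows the source image's
    (a conventional, non-GAN radiometric-normalization baseline)."""
    t_vals, t_idx, t_counts = np.unique(
        target.ravel(), return_inverse=True, return_counts=True)
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / target.size   # target quantiles
    s_cdf = np.cumsum(s_counts) / source.size   # source quantiles
    matched = np.interp(t_cdf, s_cdf, s_vals)   # quantile-to-value lookup
    return matched[t_idx].reshape(target.shape)

def evaluate_transfer(model, transfer, images, labels, metric):
    """Mean score of a source-trained `model` on target `images` that are
    first passed through an image-level `transfer` (a GAN generator,
    histogram matching, or the identity for the no-adaptation baseline)."""
    scores = [metric(model(transfer(img)), lab)
              for img, lab in zip(images, labels)]
    return float(np.mean(scores))
```

Swapping `transfer` for a trained CycleGAN generator would reproduce the GAN-based arm of such a comparison, while passing the identity function gives the no-adaptation baseline.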
