Article

Geographical Knowledge-Driven Representation Learning for Remote Sensing Images

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TGRS.2021.3115569

Keywords

Remote sensing; Task analysis; Satellites; Sensors; Semantics; Annotations; Training; Cloud/snow detection; object detection; remote sensing images; representation learning; scene classification; semantic segmentation

Funding

  1. National Key Research and Development Program of China [2019YFC1510905]
  2. National Natural Science Foundation of China [62125102]
  3. Beijing Natural Science Foundation [4192034]


The paper introduces a geographical knowledge-driven representation learning method for remote sensing images that improves network performance and reduces the need for annotated data. Using global land cover products and geographical location as supervision, an efficient pretraining framework is proposed to eliminate supervision noise.
The proliferation of remote sensing satellites has produced a massive volume of remote sensing images. However, due to constraints on human and material resources, the vast majority of these images remain unlabeled and therefore cannot be exploited by currently available deep learning methods. To make full use of these unlabeled images, we propose a Geographical Knowledge-driven Representation (GeoKR) learning method for remote sensing images that improves network performance and reduces the demand for annotated data. The global land cover products and the geographical location associated with each remote sensing image are regarded as geographical knowledge that provides supervision for representation learning and network pretraining. An efficient pretraining framework is proposed to eliminate the supervision noise caused by differences in imaging time and resolution between remote sensing images and geographical knowledge. A large-scale pretraining dataset, Levir-KR, is constructed to support network pretraining; it contains 1,431,950 remote sensing images from the Gaofen series of satellites at various resolutions. Experimental results demonstrate that the proposed method outperforms ImageNet pretraining and self-supervised representation learning methods, and significantly reduces the burden of data annotation in downstream tasks such as scene classification, semantic segmentation, object detection, and cloud/snow detection. This shows that the proposed method can serve as a novel paradigm for pretraining neural networks. Code will be available at https://github.com/flyakon/Geographical-Knowledge-driven-Representaion-Learning.
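To make the supervision idea concrete, below is a minimal, illustrative PyTorch sketch of pretraining a backbone against coarse land cover labels. The backbone choice, class count, label resolution, and plain cross-entropy loss are assumptions made for illustration only; the paper's actual noise-handling mechanism and geographical-location supervision are not reproduced here.

# Minimal sketch of land-cover-supervised pretraining (illustrative only;
# architecture, class count, and loss are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torchvision.models as models

NUM_LANDCOVER_CLASSES = 10  # hypothetical number of land cover categories

class GeoPretrainNet(nn.Module):
    """Backbone plus a lightweight head that predicts a coarse land cover map,
    so a (noisy) global land cover product can act as supervision."""
    def __init__(self, num_classes=NUM_LANDCOVER_CLASSES):
        super().__init__()
        resnet = models.resnet50(weights=None)                 # stand-in backbone
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # conv features
        self.head = nn.Conv2d(2048, num_classes, kernel_size=1)       # per-cell logits

    def forward(self, x):
        feats = self.backbone(x)   # (B, 2048, H/32, W/32)
        return self.head(feats)    # coarse land cover logits

model = GeoPretrainNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()  # a noise-tolerant loss could be substituted here

# Dummy batch: images and land cover labels resampled to the feature grid (8x8).
images = torch.randn(4, 3, 256, 256)
labels = torch.randint(0, NUM_LANDCOVER_CLASSES, (4, 8, 8))

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

After pretraining in this way, only the backbone weights would be kept and fine-tuned on the downstream task (scene classification, segmentation, detection, or cloud/snow detection), which is where the reduction in annotation burden is expected to show.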
