Article

Cross-Scale Feature Fusion for Object Detection in Optical Remote Sensing Images

Journal

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
Volume 18, Issue 3, Pages 431-435

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LGRS.2020.2975541

Keywords

Object detection; Feature extraction; Remote sensing; Task analysis; Semantics; Optical imaging; Optical sensors; Convolutional neural networks (CNNs); cross-scale feature fusion (CSFF); remote sensing images

Funding

  1. Science, Technology and Innovation Commission of Shenzhen Municipality [JCYJ20180306171131643]
  2. Seed Foundation of Innovation and Creation for Graduate Students in Northwestern Polytechnical University (NWPU) [ZZ2019026]
  3. National Natural Science Foundation of China [61772425, 61773315, 61790552]
  4. Aerospace Science Foundation of China [2017ZC53032]
  5. Fundamental Research Funds for the Central Universities [3102019AX09]

Abstract

To date, many groundbreaking object detection frameworks have been developed for natural scene images, and these algorithms perform well on open natural-scene data sets. However, applying them directly to remote sensing images is far less effective: existing deep-learning-based detectors still struggle because remote sensing images typically contain many targets with large variations in object size as well as high interclass similarity. To address these challenges in optical remote sensing images, we propose an end-to-end cross-scale feature fusion (CSFF) framework that effectively improves object detection accuracy. Specifically, we first use a feature pyramid network (FPN) to obtain multilevel feature maps and then insert a squeeze-and-excitation (SE) block into the top layer to model the relationships between feature channels. Next, we use the CSFF module to obtain powerful and discriminative multilevel feature representations. Finally, we implement our method within the Faster region-based CNN (R-CNN) framework. In experiments on the publicly available large-scale DIOR data set, our method obtains an improvement of 3.0% in mAP over Faster R-CNN with FPN.
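The pipeline the abstract describes (FPN feature maps, an SE block on the top pyramid level, cross-scale fusion, then a Faster R-CNN head) can be sketched as below. This is a minimal illustration assuming PyTorch; the names SEBlock and cross_scale_fuse are hypothetical, and since the abstract does not specify the CSFF module's internals, the resize-and-sum fusion shown here is a generic stand-in rather than the authors' actual design.

```python
# Hypothetical sketch: SE block plus a generic cross-scale fusion step.
# The paper's exact CSFF design is not given in the abstract, so the
# fusion below (resize-and-sum across pyramid levels) is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = F.adaptive_avg_pool2d(x, 1).view(b, c)  # squeeze: global pooling
        w = self.fc(w).view(b, c, 1, 1)             # excitation: channel weights
        return x * w                                # reweight feature channels


def cross_scale_fuse(features: list[torch.Tensor]) -> list[torch.Tensor]:
    """Illustrative cross-scale fusion: every pyramid level receives the
    (resized) sum of all levels. A stand-in for the paper's CSFF module."""
    fused = []
    for i, f in enumerate(features):
        acc = f.clone()
        for j, g in enumerate(features):
            if j != i:
                acc = acc + F.interpolate(g, size=f.shape[-2:], mode="nearest")
        fused.append(acc)
    return fused


# Usage on dummy FPN outputs (four levels, 256 channels each):
if __name__ == "__main__":
    feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8)]
    feats[-1] = SEBlock(256)(feats[-1])  # SE block on the top pyramid level
    fused = cross_scale_fuse(feats)      # fused multilevel representations
    print([f.shape for f in fused])
```

In this sketch the SE block reweights the top-level map before fusion, mirroring the placement the abstract describes; a full implementation would feed the fused maps into the Faster R-CNN region proposal network and detection head.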
