Article

CF2PN: A Cross-Scale Feature Fusion Pyramid Network Based Remote Sensing Target Detection

Journal

REMOTE SENSING
Volume 13, Issue 5

Publisher

MDPI
DOI: 10.3390/rs13050847

Keywords

multi-scale feature fusion pyramid; remote sensing images; single-stage target detection; M2Det; focal loss

Funding

  1. Henan Province Science and Technology Breakthrough Project [212102210102]

Abstract
In the wake of developments in remote sensing, target detection in remote sensing images has attracted increasing interest. Unfortunately, unlike natural image processing, remote sensing image processing involves large variations in object size, which poses a great challenge to researchers. Although traditional multi-scale detection networks have been successful in handling such large variations, they still have certain limitations: (1) Traditional multi-scale detection methods attend to the scale of features but ignore the correlation between feature levels. Each feature map is produced by a single layer of the backbone network, so the extracted features are not comprehensive enough. For example, the SSD network uses the features extracted from the backbone network at different scales directly for detection, resulting in the loss of a large amount of contextual information. (2) These methods pair detection tasks with inherent backbone classification networks; RetinaNet, for instance, is simply a combination of the ResNet-101 classification network and an FPN, yet object classification and detection are different tasks. To address these issues, a cross-scale feature fusion pyramid network (CF2PN) is proposed. First, a cross-scale fusion module (CSFM) is introduced to extract sufficiently comprehensive semantic information from features for multi-scale fusion. Moreover, a feature pyramid built from thinned U-shaped modules (TUMs) performs multi-level fusion of the features. Finally, focal loss is used in the prediction stage to handle the large number of negative samples generated during the feature fusion process. The proposed network architecture is verified on the DIOR and RSOD datasets. The experimental results show that the proposed method improves performance by 2-12% on the DIOR and RSOD datasets compared with current SOTA target detection methods.
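The focal loss mentioned in the abstract down-weights the many easy background anchors so that hard examples dominate training. The sketch below is a minimal illustration of the standard binary focal loss from the RetinaNet paper (with its default alpha=0.25, gamma=2.0), not CF2PN's exact implementation; the function name `focal_loss` is our own.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive (object) class.
    y: ground-truth label, 1 (object) or 0 (background).
    alpha, gamma: the defaults proposed in the RetinaNet paper.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # examples, so the flood of easy negatives contributes little.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy negative (background scored 0.1) contributes far less loss
# than a hard negative (background confidently scored 0.9 as object).
easy = focal_loss(0.1, 0)
hard = focal_loss(0.9, 0)
```

With gamma=0 this reduces to alpha-weighted cross-entropy; increasing gamma pushes the loss of confident predictions toward zero, which is why the paper can tolerate the large negative-sample count produced during feature fusion.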
