Article

A Global-Local Self-Adaptive Network for Drone-View Object Detection

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 1556-1569

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2020.3045636

Keywords

Detectors; Object detection; Training; Training data; Proposals; Feature extraction; Convolution; Drone-view object detection; tiny-scale object detection; object detection in crowded regions; coarse-to-fine adaptive detector

Funding

  1. National Key Research and Development Program of China [2018YFB1700603]
  2. National Natural Science Foundation of China [61672077, 61532002]
  3. Beijing Natural Science Foundation-Haidian Primitive Innovation Joint Fund [L182016]
  4. National Science Foundation of USA [IIS-0949467, IIS-1047715, IIS-1715985, IIS-1049448]

Abstract

Directly benefiting from deep learning methods, object detection has witnessed a great performance boost in recent years. However, drone-view object detection remains challenging for two main reasons: (1) tiny-scale objects, which are blurrier than ground-view objects, offer less information for accurate and robust detection; (2) unevenly distributed objects make detection inefficient, especially in regions occupied by crowded objects. Confronting these challenges, we propose an end-to-end global-local self-adaptive network (GLSAN) in this paper. The key components of our GLSAN are a global-local detection network (GLDN), a simple yet efficient self-adaptive region selecting algorithm (SARSA), and a local super-resolution network (LSRN). We integrate a global-local fusion strategy into a progressive scale-varying network to perform more precise detection, where the local fine detector adaptively refines the bounding boxes produced by the global coarse detector by cropping the original images for higher-resolution detection. SARSA dynamically crops crowded regions in the input images; it is unsupervised and can be easily plugged into the network. Additionally, we train the LSRN to enlarge the cropped images, providing more detailed information for finer-scale feature extraction and helping the detector distinguish foreground from background more easily. SARSA and the LSRN also serve as data augmentation during network training, which makes the detector more robust. Extensive experiments and comprehensive evaluations on the VisDrone2019-DET benchmark dataset and the UAVDT dataset demonstrate the effectiveness and adaptivity of our method. For an industrial application, our network is also applied to the DroneBolts dataset with demonstrated advantages. Our source code is available at https://github.com/dengsutao/glsan.
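The coarse-to-fine pipeline the abstract describes (global detection, crowded-region cropping, enlargement, local re-detection, fusion) can be sketched in simplified form. This is an illustrative outline only, not the paper's implementation: the `crowded_region` bounding-rectangle heuristic, the fixed integer `scale`, and the containment-based fusion rule are our assumptions standing in for SARSA's clustering, the learned LSRN, and the paper's actual merging strategy.

```python
# Boxes are (x1, y1, x2, y2) tuples in original-image coordinates.

def crowded_region(boxes, pad=8):
    """Padded bounding rectangle around coarse detections — a toy stand-in
    for SARSA's self-adaptive selection of a crowded region to crop."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1) - pad, min(ys1) - pad, max(xs2) + pad, max(ys2) + pad)

def upscale_box(box, crop, scale):
    """Map a box detected inside a `scale`x-enlarged crop back to original
    image coordinates (inverse of the super-resolution enlargement)."""
    cx, cy = crop[0], crop[1]
    x1, y1, x2, y2 = box
    return (cx + x1 / scale, cy + y1 / scale, cx + x2 / scale, cy + y2 / scale)

def fuse(global_boxes, local_boxes, crop, scale):
    """Global-local fusion: keep coarse boxes outside the crop and replace
    those inside it with the refined local detections."""
    def inside(b):
        return (b[0] >= crop[0] and b[1] >= crop[1]
                and b[2] <= crop[2] and b[3] <= crop[3])
    kept = [b for b in global_boxes if not inside(b)]
    refined = [upscale_box(b, crop, scale) for b in local_boxes]
    return kept + refined
```

In the actual GLSAN, the global and local detectors share a detection backbone and the crop is re-detected after LSRN enlargement; the functions above only show how the coordinate bookkeeping of such a crop-enlarge-redetect-fuse loop fits together.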

