Proceedings Paper

Vehicle Detection from Multi-modal Aerial Imagery using YOLOv3 with Mid-level Fusion

Publisher

SPIE-INT SOC OPTICAL ENGINEERING
DOI: 10.1117/12.2558115

Keywords

Aerial imagery; fusion; multi-modal sensing; vehicle detection; YOLOv3

Funding

  1. National Geospatial-Intelligence Agency [HM04761912014]
  2. U.S. Department of Defense (DOD) [HM04761912014]


Target detection is an important problem in remote sensing, with crucial applications in law enforcement, military and security surveillance, search-and-rescue operations, and air traffic control, among others. Owing to the recently increased availability of computational resources, deep-learning-based methods have demonstrated state-of-the-art performance in target detection from unimodal aerial imagery. In addition, owing to the availability of remote-sensing data from various imaging modalities, such as RGB, infrared, hyperspectral, multispectral, synthetic aperture radar, and lidar, researchers have focused on leveraging the complementary information offered by these modalities. Over the past few years, deep-learning methods have demonstrated enhanced performance using multi-modal data. In this work, we propose a method for vehicle detection from multi-modal aerial imagery by means of a modified YOLOv3 deep neural network that conducts mid-level fusion. To the best of our knowledge, the proposed mid-level fusion architecture is the first of its kind to be used for vehicle detection from multi-modal aerial imagery with a hierarchical object detection network. Our experimental studies corroborate the advantages of the proposed method.
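To illustrate the mid-level fusion idea described in the abstract, the following is a minimal NumPy sketch, not the authors' actual architecture: each modality is passed through its own backbone stage (here a hypothetical stand-in function), and the resulting feature maps are concatenated along the channel axis before any shared detection layers. This contrasts with early fusion (stacking raw pixels) and late fusion (merging per-modality detections).

```python
import numpy as np

def backbone_features(image, channels=64):
    # Stand-in for a modality-specific backbone stage (hypothetical;
    # in the paper the features would come from YOLOv3 conv layers).
    # Produces a feature map downsampled 8x with `channels` channels.
    h, w = image.shape[:2]
    return np.zeros((channels, h // 8, w // 8))

def mid_level_fusion(rgb_feat, ir_feat):
    # Mid-level (feature-level) fusion: concatenate the per-modality
    # feature maps along the channel axis, so all subsequent shared
    # layers see information from both modalities.
    return np.concatenate([rgb_feat, ir_feat], axis=0)

rgb = np.zeros((256, 256, 3))   # RGB frame
ir = np.zeros((256, 256, 1))    # co-registered infrared frame
fused = mid_level_fusion(backbone_features(rgb), backbone_features(ir))
print(fused.shape)  # (128, 32, 32)
```

In a real network the concatenation is typically followed by a 1x1 convolution to restore the expected channel count before the detection head.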

