Journal
BIG DATA II: LEARNING, ANALYTICS, AND APPLICATIONS
Volume 11395
Publisher
SPIE-INT SOC OPTICAL ENGINEERING
DOI: 10.1117/12.2558115
Keywords
Aerial imagery; fusion; multi-modal sensing; vehicle detection; YOLOv3
Funding
- National Geospatial-Intelligence Agency [HM04761912014]
- U.S. Department of Defense (DOD) [HM04761912014]
Abstract
Target detection is an important problem in remote sensing, with crucial applications in law enforcement, military and security surveillance, search-and-rescue operations, and air traffic control, among others. Owing to the recently increased availability of computational resources, deep-learning-based methods have demonstrated state-of-the-art performance in target detection from unimodal aerial imagery. In addition, owing to the availability of remote-sensing data from various imaging modalities, such as RGB, infrared, hyper-spectral, multi-spectral, synthetic aperture radar, and lidar, researchers have focused on leveraging the complementary information offered by these modalities. Over the past few years, deep-learning methods have demonstrated enhanced performance using multi-modal data. In this work, we propose a method for vehicle detection from multi-modal aerial imagery, by means of a modified YOLOv3 deep neural network that conducts mid-level fusion. To the best of our knowledge, the proposed mid-level fusion architecture is the first of its kind to be used for vehicle detection from multi-modal aerial imagery using a hierarchical object detection network. Our experimental studies corroborate the advantages of the proposed method.
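To illustrate the idea of mid-level (feature-level) fusion described above, the following is a minimal, self-contained sketch: two modality branches (e.g. RGB and infrared) each produce feature maps, which are concatenated along the channel axis before a shared detection head. The function names, channel counts, and toy backbone here are illustrative assumptions, not the authors' actual YOLOv3 modification; feature maps are represented as plain nested lists `[channel][row][col]` to keep the example dependency-free.

```python
def extract_features(image, n_channels=4):
    """Stand-in for a per-modality backbone: produce n_channels
    feature maps with the same spatial size as the input image.
    (A real backbone would be a convolutional network.)"""
    h, w = len(image), len(image[0])
    return [[[float(image[y][x]) * (c + 1) for x in range(w)]
             for y in range(h)]
            for c in range(n_channels)]

def mid_level_fusion(feats_a, feats_b):
    """Channel-wise concatenation of two branches' feature maps.
    Spatial dimensions must match; the fused tensor would then be
    passed to a shared detection head."""
    assert len(feats_a[0]) == len(feats_b[0])        # same height
    assert len(feats_a[0][0]) == len(feats_b[0][0])  # same width
    return feats_a + feats_b

# Toy 2x2 single-band stand-ins for the two modalities.
rgb = [[1, 2], [3, 4]]
ir = [[5, 6], [7, 8]]

fused = mid_level_fusion(extract_features(rgb), extract_features(ir))
print(len(fused))      # 8 channels: 4 from each branch
print(fused[0][0][0])  # 1.0 (first RGB-branch channel)
print(fused[4][0][0])  # 5.0 (first IR-branch channel)
```

The design point this sketch captures is that mid-level fusion merges learned features rather than raw pixels (early fusion) or per-modality detections (late fusion), letting the shared head exploit cross-modal correlations.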