Proceedings Paper

Vehicle Detection from Multi-modal Aerial Imagery using YOLOv3 with Mid-level Fusion

Publisher

SPIE-INT SOC OPTICAL ENGINEERING
DOI: 10.1117/12.2558115

Keywords

Aerial imagery; fusion; multi-modal sensing; vehicle detection; YOLOv3

Funding

  1. National Geospatial-Intelligence Agency [HM04761912014]
  2. U.S. Department of Defense (DOD) [HM04761912014]

Abstract

Target detection is an important problem in remote sensing, with crucial applications in law enforcement, military and security surveillance, search-and-rescue operations, and air traffic control, among others. Owing to the recently increased availability of computational resources, deep-learning-based methods have demonstrated state-of-the-art performance in target detection from unimodal aerial imagery. In addition, with remote-sensing data now available from various imaging modalities, such as RGB, infrared, hyperspectral, multispectral, synthetic aperture radar, and lidar, researchers have focused on leveraging the complementary information offered by these modalities. Over the past few years, deep-learning methods have demonstrated enhanced performance using multi-modal data. In this work, we propose a method for vehicle detection from multi-modal aerial imagery by means of a modified YOLOv3 deep neural network that performs mid-level fusion. To the best of our knowledge, the proposed mid-level fusion architecture is the first of its kind to be used for vehicle detection from multi-modal aerial imagery using a hierarchical object detection network. Our experimental studies corroborate the advantages of the proposed method.
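The core idea of mid-level fusion, as opposed to early (pixel-level) or late (decision-level) fusion, is that each modality is processed by its own early layers and the resulting feature maps are merged partway through the network. The sketch below illustrates this with plain NumPy; the toy "backbone" (a random 1x1 projection plus 2x2 average pooling) and the channel-wise concatenation are simplifying assumptions for illustration, not the paper's actual YOLOv3 layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(image, out_channels=8):
    """Toy stand-in for a modality-specific early branch (hypothetical):
    a random 1x1 channel projection followed by 2x2 average pooling."""
    h, w, c = image.shape
    proj = rng.standard_normal((c, out_channels))
    feat = (image.reshape(-1, c) @ proj).reshape(h, w, out_channels)
    # 2x2 average pooling halves the spatial resolution
    return feat.reshape(h // 2, 2, w // 2, 2, out_channels).mean(axis=(1, 3))

def mid_level_fusion(rgb, ir):
    """Each modality passes through its own early layers; the mid-level
    feature maps are then concatenated along the channel axis, and the
    fused tensor would feed the shared detection layers downstream."""
    f_rgb = backbone(rgb)
    f_ir = backbone(ir)
    return np.concatenate([f_rgb, f_ir], axis=-1)

rgb = rng.random((32, 32, 3))  # RGB frame
ir = rng.random((32, 32, 1))   # co-registered infrared frame
fused = mid_level_fusion(rgb, ir)
print(fused.shape)  # (16, 16, 16)
```

The concatenation lets the shared layers learn cross-modal feature combinations while each branch still adapts to its modality's statistics, which is the usual motivation for fusing at the feature level rather than at the input or output.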
