Article

Light-YOLOv4: An Edge-Device Oriented Target Detection Method for Remote Sensing Images

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSTARS.2021.3120009

Keywords

Object detection; Image edge detection; Training; Remote sensing; Detectors; Feature extraction; Quantization (signal); Edge device; model compression; NVIDIA Jetson TX2; remote sensing; target detection; YOLOv4

Funding

  1. National Natural Science Foundation of China [62001480]


This article proposes a lightweight detector named Light-YOLOv4, obtained from YOLOv4 through model compression, that significantly reduces model size and improves detection speed while maintaining detection accuracy. Experiments on edge devices show that, compared with YOLOv4, Light-YOLOv4 reduces model size, parameter count, and FLOPs by 98.63%, 98.66%, and 91.30%, respectively, achieves a 4.2x speedup in detection, and incurs only a slight decrease in detection accuracy.
Most deep-learning-based target detection methods have high computational complexity and memory consumption, which makes them difficult to deploy on edge devices with limited computing resources and memory. To tackle this problem, this article proposes to learn a lightweight detector named Light-YOLOv4, which is obtained from YOLOv4 through model compression. To this end, first, we perform sparsity training by applying L1 regularization to the channel scaling factors, so that the less important channels and layers can be identified. Second, channel pruning and layer pruning are applied to remove the less important parts, which significantly reduces the network's width and depth. Third, the pruned model is retrained with a knowledge distillation method to improve the detection accuracy. Fourth, the model is quantized from FP32 to FP16, which further accelerates inference with almost no loss of detection accuracy. Finally, to evaluate its performance on edge devices, Light-YOLOv4 is deployed on an NVIDIA Jetson TX2. Experiments on the SAR ship detection dataset (SSDD) demonstrate that the model size, parameter count, and FLOPs of Light-YOLOv4 are reduced by 98.63%, 98.66%, and 91.30%, respectively, compared with YOLOv4, and the detection speed is increased by a factor of 4.2, while the detection accuracy decreases only slightly; for example, the mAP drops by only 0.013. In addition, experiments on the Gaofen Airplane dataset further confirm the feasibility of Light-YOLOv4. Moreover, compared with other state-of-the-art methods, such as SSD and FPN, Light-YOLOv4 is more suitable for edge devices.
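The sparsity-training and channel-selection steps described above follow the familiar network-slimming recipe, in which the batch-normalization scale factors (gamma) serve as the channel scaling factors. The PyTorch sketch below is an illustration under that assumption, not the authors' released code; the penalty weight and prune ratio are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def add_l1_sparsity_grad(model: nn.Module, penalty: float = 1e-4) -> None:
    """Add the sub-gradient of an L1 penalty on every BatchNorm scale factor
    (gamma). Call after loss.backward() and before optimizer.step() so that
    unimportant channels are pushed toward zero during sparsity training."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight.grad is not None:
            m.weight.grad.add_(penalty * torch.sign(m.weight.data))

def bn_prune_masks(model: nn.Module, prune_ratio: float = 0.8) -> dict:
    """Return a per-BN-layer boolean mask of channels to keep, using a global
    threshold on |gamma| so that roughly `prune_ratio` of all channels are
    marked for removal."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: m.weight.data.abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```

Layer pruning can reuse the same statistics: blocks whose mean |gamma| is lowest are the natural candidates for removal, after which the pruned structure is retrained.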
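For the distillation step, the pruned student is retrained with guidance from the unpruned YOLOv4 teacher. The paper's exact detection-specific formulation is not reproduced here; the sketch below shows a generic Hinton-style soft-target term blended with the student's ordinary detection loss, with the temperature and weighting factor as assumed hyperparameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_loss,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Blend the student's own detection loss (`hard_loss`) with a
    soft-target KL term that matches the teacher's softened outputs."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * soft + (1.0 - alpha) * hard_loss
```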
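The FP32-to-FP16 conversion is typically handled by the inference runtime on the Jetson TX2 (for example, TensorRT's FP16 mode). As a minimal framework-level illustration only, the weights and inputs can be cast to half precision directly in PyTorch; the stand-in network below is not Light-YOLOv4.

```python
import torch
import torch.nn as nn

# Minimal FP16 inference sketch with a stand-in conv net; a real deployment
# would export the trained detector and build an FP16 engine on the device.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
images = torch.randn(1, 3, 416, 416)

if torch.cuda.is_available():
    model, images = model.half().cuda(), images.half().cuda()

with torch.no_grad():
    out = model(images)  # forward pass runs in half precision on the GPU
```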

