Article

Faster-YOLO: An accurate and faster object detection method

Journal

DIGITAL SIGNAL PROCESSING
Volume 102, Issue -, Pages -

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.dsp.2020.102756

Keywords

Object detection; ELM-LRF; ELM-AE; YOLO; Real-time processing

Funding

  1. National Natural Science Foundation of China [61402368]

In computer vision, object detection has long been considered one of the most challenging tasks because it requires both classifying and locating objects in the same scene. Many object detection approaches have recently been proposed based on deep convolutional neural networks (DCNNs), which have been shown to achieve outstanding detection performance compared to other approaches. However, the supervised training of DCNNs mostly relies on gradient-based optimization, in which all hidden-layer parameters require multiple iterations, and it often suffers from problems such as local minima, intensive human intervention, and long training times. In this paper, we propose a new method called Faster-YOLO, which is able to perform real-time object detection. A joint network of a deep random kernel convolutional extreme learning machine (DRKCELM) and a double hidden layer extreme learning machine auto-encoder (DLELM-AE) is used as the feature extractor for object detection, integrating the advantages of ELM-LRF and ELM-AE. It takes raw images directly as input and is thus suitable for different datasets. In addition, most connection weights are randomly generated, so few parameters need to be set and training is faster. Experimental results on the Pascal VOC dataset show that Faster-YOLO improves detection accuracy by 1.1 percentage points over the original YOLOv2 and achieves an average 2X speedup over YOLOv3. (C) 2020 Elsevier Inc. All rights reserved.
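The efficiency claim in the abstract rests on the extreme learning machine idea: hidden-layer weights are drawn at random and never trained, so the only learned parameters (the output weights) can be solved in closed form rather than by gradient descent. The following is a minimal sketch of a single ELM auto-encoder layer in that spirit, not the paper's actual DRKCELM/DLELM-AE network; the function name, hidden size, and ridge-regularization parameter are all illustrative assumptions.

```python
import numpy as np

def elm_autoencoder_features(X, n_hidden=16, reg=1e-3, seed=0):
    """Sketch of one ELM-AE layer: random fixed hidden weights,
    closed-form (ridge) output weights, target = the input itself."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random input weights and biases -- fixed, never trained (ELM principle).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden activations, shape (n_samples, n_hidden)
    # Output weights beta via regularized least squares: H @ beta ~= X.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    # ELM-AE feature mapping: project the input through beta^T.
    return X @ beta.T

X = np.random.rand(100, 32)          # 100 samples, 32 raw features
features = elm_autoencoder_features(X, n_hidden=16)
print(features.shape)                # (100, 16)
```

Because `beta` is obtained from one linear solve instead of many gradient iterations, training such a layer is effectively instantaneous, which is the source of the speed advantage the abstract attributes to the random-weight design.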
