Article

Faster CNN-based vehicle detection and counting strategy for fixed camera scenes

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 81, Issue 18, Pages 25443-25471

Publisher

SPRINGER
DOI: 10.1007/s11042-022-12370-9

Keywords

Vehicle detection and counting; Convolution neural network; Fixed camera; Faster R-CNN; YOLOv2; Feature point analysis

Funding

  1. Science, Technology & Innovation Funding Authority (STDF)
  2. Egyptian Knowledge Bank (EKB)


This paper proposes an efficient real-time approach for detecting and counting moving vehicles based on YOLOv2 and feature point motion analysis. By synchronously detecting and tracking vehicle features, accurate counting results are achieved. Experimental results show that the proposed method outperforms existing strategies and improves computational efficiency.
Automatic detection and counting of vehicles in a video is a challenging task and has become a key application area of traffic monitoring and management. In this paper, an efficient real-time approach for the detection and counting of moving vehicles is presented based on YOLOv2 and feature point motion analysis. The work relies on synchronous detection and tracking of vehicle features to achieve accurate counting results. The proposed strategy works in two phases: the first is vehicle detection and the second is the counting of moving vehicles. Different convolutional neural networks, including pixel-by-pixel classification networks and regression networks, are investigated to improve the detection and counting decisions. For initial object detection, the state-of-the-art fast deep learning object detector YOLOv2 is used, and its detections are refined using K-means clustering and a KLT tracker. An efficient approach is then introduced that uses the temporal information of the detected and tracked feature points across frame sets to assign each vehicle label to its corresponding trajectory and count it exactly once. Experimental results on twelve challenging videos show that the proposed scheme generally outperforms state-of-the-art strategies. Moreover, the proposed approach using YOLOv2 raises the average time performance over the twelve tested sequences to 18.7 frames per second, an improvement of 93.4% over the 1.24 frames per second achieved with the Faster Region-based Convolutional Neural Network (Faster R-CNN) and of 98.9% over the 0.19 frames per second achieved with the background-subtraction-based CNN approach (BS-CNN).
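The counting phase described in the abstract, which uses trajectory information to count each vehicle exactly once, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the dictionary-of-trajectories input, and the virtual counting line at `line_y` are all illustrative assumptions; in the paper the trajectories would come from YOLOv2 detections refined by K-means clustering and a KLT tracker.

```python
# Hypothetical sketch: count each tracked vehicle once when its centroid
# trajectory crosses a virtual counting line. The trajectories here stand in
# for the refined detection/tracking output described in the abstract.

def count_line_crossings(trajectories, line_y):
    """trajectories: dict mapping a vehicle id to its list of (x, y) centroids,
    one per frame. A vehicle is counted at most once, on the first frame pair
    where its centroid moves across the horizontal line y = line_y."""
    counted = set()
    for vid, points in trajectories.items():
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            # A sign change of (y - line_y) between consecutive frames
            # means the trajectory crossed the counting line.
            if (y0 - line_y) * (y1 - line_y) < 0:
                counted.add(vid)
                break  # count this vehicle only once
    return len(counted)

# Illustrative tracks: vehicles 1 and 3 cross y = 100, vehicle 2 does not.
tracks = {
    1: [(50, 80), (52, 95), (55, 110)],      # crosses downward
    2: [(120, 60), (118, 70), (117, 75)],    # never crosses
    3: [(200, 130), (198, 105), (196, 90)],  # crosses upward
}
print(count_line_crossings(tracks, line_y=100))  # → 2
```

Keying the count on a sign change of the centroid's offset from the line, rather than on per-frame detections, is one simple way to avoid double-counting a vehicle that is detected in many consecutive frames.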
