Article

Vehicle weight identification system for spatiotemporal load distribution on bridges based on non-contact machine vision technology and deep learning algorithms

Journal

MEASUREMENT
Volume 159, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.measurement.2020.107801

Keywords

Bridge health monitoring; Vehicle rough-grained classification; Vehicle tracking; Non-contact machine vision; Deep learning algorithms; Automatic identification system

Funding

  1. National Natural Science Foundation of China (NSFC) [51878264]
  2. National Key Research and Development Program of China [2016YFC0701400, 2016YFC0701308]
  3. Key Research and Development Program of Changsha City [kq1801010]

Abstract

Accurate information on vehicle loads plays a significant role in maintaining the structural health of bridges. However, the only method currently available for ascertaining these loads is the bridge weigh-in-motion (BWIM) system, which is not widely used because of the high cost of the large devices involved. There is therefore a need for an effective, low-cost technology to ascertain vehicle loads and their spatiotemporal distribution on long-span bridges. This paper proposes a non-contact vehicle identification methodology that infers a vehicle's load from its appearance, based on machine vision technology and deep learning algorithms. Vehicle information (e.g., type, weight, position, and motion trajectory) is conveniently obtained from a roadside surveillance camera, while the axle-weight distribution intervals for nine classified vehicle types are derived from statistics on 8402 delivery vehicles, establishing the relationship between each vehicle type and its corresponding weight information. Meanwhile, a dataset of 8624 vehicle images covering the nine rough-grained vehicle classes was established to train a deep convolutional neural network (DCNN) and enhance its generalizability, and an optimization analysis was conducted to improve the network's accuracy in vehicle type identification. Vehicle positions are detected by a faster region-based convolutional neural network (Faster R-CNN), in which the pre-trained DCNN, achieving 98.17% vehicle type classification accuracy, serves as the shared network layers to improve computational efficiency. By feeding the Faster R-CNN detection results into a Kalman filter, a moving vehicle can be tracked in real time in the monitoring video, while a graphical user interface (GUI) built around the camera feed enables automatic identification. A post-processing module was established based on the proposed method, and a field test was conducted to validate the reliability of the system. (C) 2020 Elsevier Ltd. All rights reserved.
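The tracking step summarized in the abstract (Faster R-CNN detections fed into a Kalman filter to follow each vehicle across video frames) can be illustrated with a minimal sketch. The constant-velocity state model, the noise covariances, and the simulated detection stream below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for tracking a vehicle's
    bounding-box centre (x, y) across video frames. State = [x, y, vx, vy]."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)  # initial state
        self.P = np.eye(4) * 10.0                            # state covariance
        # State transition: position advances by velocity * dt each frame.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # Only the position (the detector's box centre) is measured.
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01   # process noise (assumed)
        self.R = np.eye(2) * 1.0    # measurement noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead (also usable when a detection is missed)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the prediction with a new detection centre (zx, zy)."""
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]


if __name__ == "__main__":
    # Simulated detector output: a vehicle moving right at ~8 px/frame with noise,
    # standing in for the Faster R-CNN box centres in the monitoring video.
    rng = np.random.default_rng(0)
    detections = [(100 + 8 * k + rng.normal(0, 2), 240 + rng.normal(0, 2))
                  for k in range(20)]

    tracker = ConstantVelocityKalman(*detections[0])
    for cx, cy in detections[1:]:
        tracker.predict()
        fx, fy = tracker.update(cx, cy)
        print(f"detected=({cx:6.1f},{cy:6.1f})  filtered=({fx:6.1f},{fy:6.1f})")
```

In practice the filter would be run once per tracked vehicle, with detections associated to tracks frame by frame; the sketch only shows the predict-update cycle for a single trajectory.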

