3.8 Proceedings Paper

Understanding Traffic Density from Large-Scale Web Camera Data

Publisher

IEEE
DOI: 10.1109/CVPR.2017.454

Keywords

-

Funding

  1. Fundação para a Ciência e a Tecnologia (FCT) [SFRH/BD/113729/2015]
  2. Fundação para a Ciência e a Tecnologia (Carnegie Mellon-Portugal program)
  3. SmartCitySense - ANI [Lx-01-0247-FEDER-017906]

Abstract

Understanding traffic density from large-scale web camera (webcam) videos is a challenging problem because such videos have low spatial and temporal resolution, high occlusion, and large perspective variation. To understand traffic density in depth, we explore both optimization-based and deep-learning-based methods. To avoid detecting or tracking individual vehicles, both methods map dense image features to vehicle density, one based on rank-constrained regression and the other based on fully convolutional networks (FCN). The regression-based method learns different weights for different blocks of the image to embed road geometry and significantly reduce the error induced by camera perspective. The FCN-based method jointly estimates vehicle density and vehicle count with a residual learning framework to perform end-to-end dense prediction, allowing arbitrary image resolution and adapting to different vehicle scales and perspectives. We analyze and compare both methods, and use insights from the optimization-based method to improve the deep model. Since existing datasets do not cover all the challenges in our work, we collected and labelled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Both methods are extensively evaluated and compared on different counting tasks and datasets. The FCN-based method significantly reduces the mean absolute error (MAE) from 10.99 to 5.31 on the public TRANCOS dataset compared with the state-of-the-art baseline.
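The FCN-based approach summarized above treats counting as dense prediction: the network outputs a per-pixel vehicle density map, and the vehicle count is the integral (sum) of that map, so density and count can be supervised jointly. The minimal sketch below illustrates this idea in PyTorch; the layer sizes, the bilinear upsampling, and the loss weighting `lam` are illustrative assumptions and do not reproduce the authors' exact FCN architecture.

```python
# Minimal sketch (assuming PyTorch) of FCN-style vehicle density estimation:
# a fully convolutional network predicts a density map; the count is the sum
# of the map, so density and count losses can be combined.
# Layer sizes and the loss weighting are illustrative, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DensityFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling feature extractor; fully convolutional, so it accepts
        # frames of arbitrary resolution.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution maps features to a single-channel density map.
        self.density_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feat = self.features(x)
        density = F.relu(self.density_head(feat))   # non-negative density
        density = F.interpolate(density, size=(h, w),
                                mode="bilinear", align_corners=False)
        count = density.sum(dim=(1, 2, 3))          # count = integral of density
        return density, count


def joint_loss(pred_density, pred_count, gt_density, gt_count, lam=0.01):
    """Jointly penalize per-pixel density error and global count error."""
    return F.mse_loss(pred_density, gt_density) + lam * F.l1_loss(pred_count, gt_count)


if __name__ == "__main__":
    model = DensityFCN()
    frame = torch.rand(1, 3, 240, 352)              # low-resolution webcam frame
    gt_density = torch.rand(1, 1, 240, 352) * 1e-3  # placeholder ground-truth map
    gt_count = gt_density.sum(dim=(1, 2, 3))
    density, count = model(frame)
    loss = joint_loss(density, count, gt_density, gt_count)
    loss.backward()
    print(float(loss))
```

Obtaining the count by summing the density map is what lets such a model avoid per-vehicle detection or tracking, and the fully convolutional design is what allows arbitrary input resolution, matching the properties described in the abstract.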

