Article

DeepTrend 2.0: A light-weighted multi-scale traffic prediction model using detrending

Journal

Transportation Research Part C: Emerging Technologies

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.trc.2019.03.022

Keywords

Traffic prediction; Deep learning; Detrending; Multi-scale traffic prediction

Funding

  1. National Natural Science Foundation of China [61533019, U1811463]
  2. Beijing Municipal Science and Technology Commission Program [D171100000317002]
  3. Beijing Municipal Commission of Transport Program [ZC179074Z]


In this paper, we propose a detrending-based, deep-learning-based many-to-many traffic prediction model called DeepTrend 2.0 that accepts information collected from multiple sensors as input and simultaneously generates predictions for all sensors as output. First, we demonstrate that detrending benefits traffic prediction even when deep learning models are used. Second, the proposed model strikes a delicate balance between model complexity and accuracy. In contrast to existing models that view a sensor network as a weighted graph and use graph convolutional neural networks (GCNNs) to model spatial dependency, we represent the sensor network as an image and adopt a convolutional neural network (CNN) as the prediction model. The image is generated from the correlation coefficients between the sensors' flow series, unlike other CNN-based prediction approaches that convert the transportation network into an image based on the spatial locations of sensors or regions. Compared with the GCNN-based model, the CNN-based DeepTrend 2.0 converges much faster during training while delivering comparable prediction quality. Test results indicate that the proposed light-weighted model is efficient and easy to transfer and deploy.
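To make the abstract's two core ideas concrete, the following is a minimal Python sketch, not the authors' implementation: the time-of-day averaging used as the trend estimate, the 5-minute sampling period, and all variable names are assumptions made for illustration. It detrends each sensor's flow series by subtracting an average daily profile, then builds the correlation-coefficient matrix that plays the role of the CNN's input image.

```python
import numpy as np

def detrend(flow, period=288):
    """Split one sensor's flow series into trend and residual.

    flow   : 1-D array whose length is a multiple of `period`
             (period=288 assumes 5-minute counts over full days).
    Returns the detrended residual series and the average daily
    profile used here as a simple stand-in for the extracted trend.
    """
    days = flow.reshape(-1, period)      # (n_days, period)
    trend = days.mean(axis=0)            # average day: the "trend"
    residual = (days - trend).ravel()    # fluctuations around the trend
    return residual, trend

def correlation_image(flows):
    """Build the sensor-network 'image' from pairwise correlations.

    flows : (n_sensors, T) array of detrended flow series.
    Returns the (n_sensors, n_sensors) correlation-coefficient
    matrix, an image-like CNN input that replaces a spatial map
    of the road network.
    """
    return np.corrcoef(flows)

# Toy demo: 4 sensors, 30 days of synthetic 5-minute flow counts.
rng = np.random.default_rng(0)
period, n_days, n_sensors = 288, 30, 4
t = np.arange(n_days * period)
daily = 100 + 50 * np.sin(2 * np.pi * t / period)               # shared daily cycle
flows = daily + rng.normal(0.0, 5.0, size=(n_sensors, t.size))  # per-sensor noise

residuals = np.stack([detrend(f, period)[0] for f in flows])
image = correlation_image(residuals)
print(image.shape)  # (4, 4), fixed size regardless of road geometry
```

Because the correlation matrix has a fixed n_sensors x n_sensors shape regardless of the network's road geometry, an ordinary CNN can consume it directly, which is the property that lets DeepTrend 2.0 dispense with graph convolutions.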
