Article

Spatiotemporal Exogenous Variables Enhanced Model for Traffic Flow Prediction

Journal

IEEE ACCESS
Volume 11, Pages 95958-95973

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2023.3311818

Keywords

Predictive models; Spatiotemporal phenomena; Roads; Data models; Hidden Markov models; Transformers; Traffic control; Traffic flow prediction; exogenous variables; spatiotemporal dependencies; graph attention networks; transformer


Abstract

Traffic flow prediction is a vital component of Intelligent Transportation Systems (ITS). However, accurately predicting traffic flow for a large-scale road network over multiple time horizons is extremely challenging due to the complex and dynamic spatiotemporal dependencies involved. To address this issue, we propose a Spatiotemporal Exogenous Variables Enhanced Transformer (SEE-Transformer) model, which combines Graph Attention Network and Transformer architectures and incorporates the exogenous variables of traffic data. Specifically, we introduce rich exogenous variables, including spatial and temporal information of the traffic data, to enhance the model's ability to capture spatiotemporal dependencies at the network level. We construct traffic graphs based on the social connections of sensors and the similarity of their traffic patterns, and use these graphs as model inputs along with the exogenous variables. The SEE-Transformer achieves excellent prediction accuracy through its Graph Attention Network and Transformer mechanisms. Extensive experiments on the PeMS freeway dataset confirm that the SEE-Transformer consistently outperforms current models.
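The abstract describes a pipeline that first aggregates sensor readings over a traffic graph with graph attention and then models temporal dependencies with a Transformer encoder. The paper's exact architecture, layer sizes, and exogenous-variable encoding are not given here, so the following is only a minimal sketch of that general GAT-then-Transformer pattern in PyTorch; `GraphAttentionLayer`, `SEESketch`, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention layer (standard GAT-style formulation).

    Hypothetical simplification: the real model may use multi-head attention
    and several graphs (sensor connectivity, pattern similarity).
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) 0/1 edge mask
        h = self.W(x)                                     # (N, out_dim)
        n = h.size(0)
        # Pairwise attention logits e_ij = a([h_i || h_j])
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))        # keep graph edges only
        alpha = torch.softmax(e, dim=-1)                  # attention weights
        return alpha @ h                                  # aggregate neighbours


class SEESketch(nn.Module):
    """Toy spatial-then-temporal model over (time, sensors, features)."""

    def __init__(self, in_dim, hidden, horizon):
        super().__init__()
        self.gat = GraphAttentionLayer(in_dim, hidden)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(hidden, horizon)            # multi-step forecast

    def forward(self, x, adj):
        # x: (T, N, in_dim) — spatial aggregation per step, then temporal attention
        spatial = torch.stack([self.gat(x[t], adj) for t in range(x.size(0))])
        temporal = self.encoder(spatial.transpose(0, 1))  # (N, T, hidden)
        return self.head(temporal[:, -1])                 # (N, horizon)


# Usage on random data: 8 time steps, 5 sensors, 3 features per reading,
# a fully connected toy graph, and a 12-step prediction horizon.
model = SEESketch(in_dim=3, hidden=16, horizon=12)
out = model(torch.randn(8, 5, 3), torch.ones(5, 5))
print(out.shape)  # torch.Size([5, 12])
```

In this sketch the exogenous variables would simply be concatenated into the per-sensor feature dimension before the graph attention step; how the paper actually injects them is not specified in the abstract.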


Reviews

Primary Rating: 4.6 (not enough ratings)
Secondary Ratings (Novelty, Significance, Scientific rigor): no data available
