Article

Spatiotemporal Exogenous Variables Enhanced Model for Traffic Flow Prediction

Journal

IEEE ACCESS
Volume 11, Issue -, Pages 95958-95973

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2023.3311818

Keywords

Predictive models; Spatiotemporal phenomena; Roads; Data models; Hidden Markov models; Transformers; Traffic control; Traffic flow prediction; exogenous variables; spatiotemporal dependencies; graph attention networks; transformer


Abstract

Traffic flow prediction is a vital component of Intelligent Transportation Systems (ITS). However, it is extremely challenging to predict traffic flow accurately for a large-scale road network over multiple time horizons, due to the complex and dynamic spatiotemporal dependencies involved. To address this issue, we propose a Spatiotemporal Exogenous Variables Enhanced Transformer (SEE-Transformer) model, which leverages Graph Attention Network and Transformer architectures and incorporates the exogenous variables of traffic data. Specifically, we introduce rich exogenous variables, including spatial and temporal information of traffic data, to enhance the model's ability to capture spatiotemporal dependencies at a network level. We construct traffic graphs based on the social connections of sensors and on the traffic pattern similarity between sensors, and use these graphs as model inputs along with the exogenous variables. The SEE-Transformer achieves excellent prediction accuracy with the help of the Graph Attention Network and Transformer mechanisms. Extensive experiments on the PeMS freeway dataset confirm that the SEE-Transformer consistently outperforms current models.
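The abstract names two building blocks: attention over a sensor graph (the Graph Attention Network component) and self-attention over time (the Transformer component). The sketch below is a hypothetical, minimal NumPy illustration of how these two attention steps could compose on toy traffic data; it is not the authors' code, and all shapes, parameter names, and the fully connected toy adjacency matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(X, A, W, a):
    """One GAT-style layer: X (N, F) node features, A (N, N) adjacency mask."""
    H = X @ W                                   # (N, D) linear projection
    N = H.shape[0]
    # Pairwise attention logits from concatenated node-pair embeddings.
    logits = np.array([[a @ np.concatenate([H[i], H[j]]) for j in range(N)]
                       for i in range(N)])
    logits = np.where(A > 0, logits, -1e9)      # mask out non-neighbors
    alpha = softmax(logits, axis=1)             # attention over neighbors
    return alpha @ H                            # (N, D) aggregated features

def temporal_self_attention(Z, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the time axis: Z (T, D)."""
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=1)
    return scores @ V                           # (T, D)

# Toy data: 4 sensors, 6 timesteps, 3 input features per sensor.
N, T, F, D = 4, 6, 3, 8
X = rng.normal(size=(T, N, F))                  # traffic readings over time
A = np.ones((N, N))                             # toy fully connected sensor graph
W = rng.normal(size=(F, D))
a = rng.normal(size=(2 * D,))
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))

# Spatial attention per timestep, then temporal attention per sensor.
spatial = np.stack([graph_attention(X[t], A, W, a) for t in range(T)])   # (T, N, D)
out = np.stack([temporal_self_attention(spatial[:, n], Wq, Wk, Wv)
                for n in range(N)], axis=1)                              # (T, N, D)
print(out.shape)
```

In the paper's setting the adjacency would instead come from the two constructed graphs (sensor connections and traffic-pattern similarity), and exogenous spatial/temporal variables would be concatenated to the input features before the attention layers.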

