Article

TCSA-Net: A Temporal-Context-Based Self-Attention Network for Next Location Prediction

Journal

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Volume 23, Issue 11, Pages 20735-20745

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TITS.2022.3181339

Keywords

Trajectory; Predictive models; Transformers; Task analysis; Recurrent neural networks; Markov processes; Feature extraction; Deep learning; location prediction; self-attention

Funding

  1. National Natural Science Foundation of China [U1811463, 62072069]


In this paper, a temporal-context-based self-attention network named TCSA-Net is proposed, which can simultaneously exploit long- and short-term movement preferences from sparse and long trajectories. Thanks to its novel two-stage self-attention architecture and multi-modal embedding layer, the network outperforms state-of-the-art methods on standard evaluation metrics.
Next location prediction aims to find the location that a user will visit next, and it plays a fundamental role in location-based applications. However, the heterogeneity and sparsity of trajectory data pose great challenges to this task. Recently, RNN-based methods have shown promising performance in learning the spatio-temporal characteristics of trajectories. While the effectiveness of location prediction has improved, computational efficiency and the modeling of long-term preferences still leave room for further research. The self-attention mechanism is viewed as a promising solution for parallel computation and for exploiting sequential regularities in sparse data, but its huge memory cost and neglect of temporal information make it infeasible to directly model human mobility regularities. In this paper, we propose a temporal-context-based self-attention network named TCSA-Net, which can simultaneously exploit long- and short-term movement preferences from sparse and long trajectories. In particular, we design a novel two-stage self-attention architecture that can learn long-term dependencies under a constrained memory budget. Further, we propose a multi-modal embedding layer that models two complementary temporal contexts and provides richer temporal and sequential information. Extensive experiments on two real-life datasets show that TCSA-Net significantly outperforms state-of-the-art methods in terms of standard evaluation metrics.
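The abstract's core building block is scaled dot-product self-attention applied to an embedded trajectory. The sketch below is a generic, minimal illustration of that mechanism only; it is not the authors' TCSA-Net (the two-stage architecture, temporal-context embeddings, and memory-budget constraints described in the paper are not reproduced here), and all array shapes and names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over an embedded visit sequence.

    X:           (seq_len, d_model) embedded trajectory points
    Wq, Wk, Wv:  (d_model, d_k) learned projection matrices
    Returns a (seq_len, d_k) matrix of context vectors, one per visit.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise affinities between visits
    weights = softmax(scores, axis=-1)        # each row is a distribution over visits
    return weights @ V                        # weighted mix of value vectors

# Toy usage with random data (hypothetical sizes).
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 8, 16, 16
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

Because the score matrix is seq_len x seq_len, memory grows quadratically with trajectory length, which is the cost the paper's two-stage design is described as mitigating.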

