Article

Lane Transformer: A High-Efficiency Trajectory Prediction Model

Journal

IEEE Open Journal of Intelligent Transportation Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/OJITS.2023.3233952

Keywords

Trajectory; Transformers; Predictive models; Roads; Feature extraction; Task analysis; Convolution; Trajectory prediction; transformer; multi-head attention; TensorRT

Trajectory prediction is crucial for autonomous driving. Our proposed Lane Transformer achieves high accuracy and efficiency by using attention blocks in place of a Graph Convolution Network (GCN) and by optimizing the model for deployment with TensorRT. On the Argoverse dataset it outperforms the baseline LaneGCN model in prediction accuracy while running 10x to 25x faster, and it has the fastest inference time among all open-source methods.
Trajectory prediction is a crucial step in the autonomous driving pipeline: it not only improves the planning of future routes but also ensures vehicle safety. Built on deep neural networks, numerous trajectory prediction models have been proposed and already achieve high performance on public datasets thanks to well-designed model structures and complex optimization procedures. However, the majority of these methods overlook the fact that vehicles have only limited on-board computing resources for online, real-time inference. To tackle this problem, we propose the Lane Transformer, which achieves both high accuracy and high efficiency in trajectory prediction. On the one hand, inspired by the well-known Transformer, we use attention blocks to replace the Graph Convolution Network (GCN) commonly used in trajectory prediction models, drastically reducing the time cost while maintaining accuracy. On the other hand, we construct our prediction model to be compatible with TensorRT, so that it can be further optimized and easily converted into a deployment-friendly TensorRT engine. Experiments demonstrate that our model outperforms the baseline LaneGCN model in quantitative prediction accuracy on the Argoverse dataset while being 10x to 25x faster. Our 7 ms inference time is the fastest among all currently available open-source methods.
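The two ideas in the abstract, replacing GCN-style map aggregation with multi-head attention and keeping the network exportable to TensorRT, can be illustrated with a short sketch. The code below is hypothetical rather than the authors' released implementation: the module name LaneAttentionBlock, the feature dimensions, and the export settings are all assumptions. It only shows how agent features can attend over lane-segment features through a standard attention block, and how such a module can be exported to ONNX as a step toward building a TensorRT engine.

# Illustrative sketch (not the paper's released code): a GCN-style lane-graph
# aggregation replaced by a standard multi-head attention block, followed by
# an ONNX export as the usual first step toward a TensorRT engine.
# Module name, dimensions, and export settings are assumptions.

import torch
import torch.nn as nn


class LaneAttentionBlock(nn.Module):
    """Agent queries attend over lane-segment features.

    A GCN layer would aggregate neighbours through a sparse adjacency matrix;
    here the interaction is dense multi-head attention, which lowers to plain
    matmul/softmax ops that ONNX and TensorRT handle natively.
    """

    def __init__(self, dim: int = 128, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.ReLU(inplace=True),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, agent_feats: torch.Tensor, lane_feats: torch.Tensor) -> torch.Tensor:
        # agent_feats: (batch, num_agents, dim); lane_feats: (batch, num_lanes, dim)
        attn_out, _ = self.attn(query=agent_feats, key=lane_feats, value=lane_feats)
        x = self.norm1(agent_feats + attn_out)   # residual connection + layer norm
        x = self.norm2(x + self.ffn(x))          # position-wise feed-forward
        return x


if __name__ == "__main__":
    block = LaneAttentionBlock().eval()
    agents = torch.randn(1, 16, 128)   # dummy agent features
    lanes = torch.randn(1, 200, 128)   # dummy lane-segment features

    # Export to ONNX; a TensorRT engine can then be built from the ONNX file,
    # e.g. with trtexec or the TensorRT Python API.
    torch.onnx.export(
        block,
        (agents, lanes),
        "lane_attention.onnx",
        input_names=["agent_feats", "lane_feats"],
        output_names=["fused_agent_feats"],
        opset_version=13,
    )

The design point, as described in the abstract, is that dense attention reduces to matrix multiplications and softmax, which standard deployment toolchains support directly, whereas sparse GCN aggregation typically requires scatter/gather over an irregular adjacency structure; whether the paper's actual blocks look exactly like this is an assumption here.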
