Proceedings Paper

Context-Aware Scene Prediction Network (CASPNet)

Publisher

IEEE
DOI: 10.1109/ITSC55140.2022.9921850


Abstract

Predicting the future motion of surrounding road users is a crucial and challenging task for autonomous driving (AD) and advanced driver-assistance systems (ADAS). Planning a safe future trajectory depends heavily on understanding the traffic scene and anticipating its dynamics. The challenges lie not only in understanding complex driving scenarios but also in the numerous possible interactions among road users and the environment, which are practically infeasible to model explicitly. In this work, we tackle these challenges by jointly learning and predicting the motion of all road users in a scene, using a novel architecture based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Moreover, by exploiting grid-based input and output data structures, the computational cost is independent of the number of road users, and multi-modal prediction becomes an inherent property of the method. Evaluation on the nuScenes dataset shows that our approach achieves state-of-the-art results on the prediction benchmark.
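The grid-based input representation the abstract alludes to can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual encoding: the function name `rasterize_agents`, the channel layout (occupancy plus two velocity channels), and the cell size are all assumptions made for the example.

```python
import numpy as np

def rasterize_agents(agents, grid_size=64, cell_m=1.0):
    """Rasterize agent states into a fixed-size bird's-eye-view grid.

    Hypothetical encoding inspired by the paper's grid-based input:
    channel 0 = occupancy, channels 1-2 = velocity components.
    `agents` is an (N, 4) array of (x, y, vx, vy) in metres,
    with the reference point at the grid centre.
    """
    grid = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell_m / 2.0
    for x, y, vx, vy in agents:
        col = int((x + half) / cell_m)
        row = int((y + half) / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[0, row, col] = 1.0   # occupancy
            grid[1, row, col] = vx    # velocity, x component
            grid[2, row, col] = vy    # velocity, y component
    return grid

# The tensor shape is fixed regardless of how many agents are present,
# so the cost of the downstream CNN does not grow with agent count.
few = rasterize_agents(np.array([[0.0, 0.0, 1.0, 0.0]]))
many = rasterize_agents(np.random.uniform(-30, 30, size=(50, 4)))
assert few.shape == many.shape == (3, 64, 64)
```

Because every scene maps to the same tensor shape, a single convolutional encoder can process one agent or fifty at identical cost, which is the property the abstract highlights.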
