Article

Distributional and spatial-temporal robust representation learning for transportation activity recognition

Journal

PATTERN RECOGNITION
Volume 140, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2023.109568

Keywords

Transportation activity recognition; Multimodal sensing; Deep learning; Statistical feature; Spatial-temporal feature


Transportation activity recognition (TAR) is crucial for intelligent transportation applications. This study proposes a novel parallel model, DSTRR, which combines automatic learning of statistical, spatial, and temporal features to achieve a robust representation.
Abstract

Transportation activity recognition (TAR) provides valuable support for intelligent transportation applications such as urban transportation planning, driving behavior analysis, and traffic prediction. Movable sensor-based TAR offers many advantages, and its key challenge is to capture salient features from segmented data that represent diverse activity patterns. Although existing methods based on statistical information are efficient, they usually rely on domain knowledge to construct high-quality features manually. Likewise, methods based on spatial-temporal relationships achieve good performance but fail to extract statistical features. The features extracted by these two kinds of methods have proven crucial for activity classification, yet how to combine them into a more robust representation remains an open question. In this work, we introduce a novel parallel model named Distributional and Spatial-Temporal Robust Representation (DSTRR), which combines automatic learning of statistical, spatial, and temporal features in a unified framework. The model jointly optimizes three subnets and thus obtains a robust representation specific to TAR. Extensive experiments on three public datasets show that DSTRR achieves state-of-the-art performance compared with the baseline methods. The ablation study and visualization results not only demonstrate the effectiveness of each component in DSTRR but also show that the model remains robust to a wide range of parameter variations. (c) 2023 Elsevier Ltd. All rights reserved.
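
To make the parallel three-branch idea concrete, here is a minimal sketch of a model that learns statistical, spatial, and temporal features from a segmented multimodal sensor window and fuses them for classification. This is not the authors' DSTRR implementation: the branch designs (an MLP over simple per-channel statistics, a 1-D convolutional spatial branch, and a GRU temporal branch), the layer sizes, and the concatenation-based fusion are assumptions made purely for illustration, written here in PyTorch.

# Minimal, illustrative sketch of a three-branch model in the spirit of DSTRR.
# NOTE: this is NOT the authors' implementation; the branch designs, layer
# sizes, and the chosen per-channel statistics are assumptions for illustration.
import torch
import torch.nn as nn


class ThreeBranchTAR(nn.Module):
    """Parallel statistical / spatial / temporal branches fused for classification."""

    def __init__(self, in_channels: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # Distributional branch: an MLP over simple per-channel statistics
        # (mean, std, min, max) computed from the raw window.
        self.stat_mlp = nn.Sequential(
            nn.Linear(in_channels * 4, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Spatial branch: 1-D convolution across sensor channels over time.
        self.spatial = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temporal branch: a GRU over the time axis.
        self.temporal = nn.GRU(in_channels, hidden, batch_first=True)
        # Fusion: concatenate the three representations into one classifier head.
        self.classifier = nn.Linear(hidden * 3, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_channels) -- a segmented multimodal sensor window.
        stats = torch.cat(
            [x.mean(dim=1), x.std(dim=1), x.amin(dim=1), x.amax(dim=1)], dim=1
        )
        f_stat = self.stat_mlp(stats)                          # (batch, hidden)
        f_spat = self.spatial(x.transpose(1, 2)).squeeze(-1)   # (batch, hidden)
        _, h_n = self.temporal(x)                              # (1, batch, hidden)
        f_temp = h_n.squeeze(0)                                # (batch, hidden)
        return self.classifier(torch.cat([f_stat, f_spat, f_temp], dim=1))


if __name__ == "__main__":
    model = ThreeBranchTAR(in_channels=6, num_classes=8)
    window = torch.randn(4, 128, 6)  # e.g. accelerometer + gyroscope segments
    print(model(window).shape)       # torch.Size([4, 8])

Because the classification loss back-propagates through all three branches, the subnets are optimized jointly rather than trained separately, which mirrors the unified framework described in the abstract.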
