Article

Visual-semantic graph neural network with pose-position attentive learning for group activity recognition

Journal

NEUROCOMPUTING
Volume 491, Pages 217-231

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.03.066

Keywords

Group activity recognition; Graph neural network; Visual-semantic context; Pose-position attentive learning

Abstract

The article proposes a group activity recognition method based on a visual-semantic graph neural network with pose-position attentive learning. The method improves recognition performance by constructing a bi-modal visual graph and a semantic graph, and by exploiting pose and position information for attentive aggregation.
Video-based group activities typically involve interactive contexts across diverse visual modalities among multiple persons, as well as semantic relationships between individual actions. Nevertheless, most existing group activity recognition methods either capture the relationships among persons using only the RGB modality or neglect to exploit the label hierarchy between individual actions and the group activity. To tackle these issues, we propose a visual-semantic graph neural network with pose-position attentive learning (VSGNN-PAL) for group activity recognition. Specifically, we first extract individual-level appearance and motion representations from RGB and optical-flow inputs to build a bi-modal visual graph. Two attentive aggregators are further proposed to integrate pose and position information, measuring relevance scores between persons and dynamically refining the representation of each visual node from both modality-specific and cross-modal perspectives. To model the semantic hierarchy of the label space, we construct a semantic graph based on linguistic embeddings of the individual-action and group-activity labels. We further employ a bi-directional mapping learning scheme to integrate the label-relation-aware semantic context into the visual representations. In addition, a global reasoning module is introduced to progressively generate group-level representations while maintaining the scene description. Furthermore, we formulate a semantic-preserving loss to keep the learned high-level representations consistent with the semantics of the ground-truth labels. Experimental results on three group activity benchmarks demonstrate that the proposed method achieves state-of-the-art performance. © 2022 Elsevier B.V. All rights reserved.
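As a rough illustration of the pose-position attentive aggregation described in the abstract, the following PyTorch-style sketch scores pairwise person-to-person relevance from pose and position cues and uses those scores to refine the visual node features of a graph. All module names, layer sizes, and the exact score formulation are illustrative assumptions, not the paper's released implementation; the abstract describes both modality-specific and cross-modal refinement, whereas this sketch shows only a single modality-specific aggregation step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePositionAttentiveAggregator(nn.Module):
    """Minimal sketch (assumed design): score person-to-person relevance
    from pose and position cues, then refine each visual node feature
    with an attention-weighted sum over all persons."""

    def __init__(self, feat_dim, pose_dim, pos_dim=2, hidden=64):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, hidden)    # embed pose keypoints
        self.pos_proj = nn.Linear(pos_dim, hidden)      # embed normalized positions
        self.score = nn.Linear(2 * hidden, 1)           # pairwise relevance score
        self.update = nn.Linear(2 * feat_dim, feat_dim) # node refinement

    def forward(self, x, pose, pos):
        # x:    (N, feat_dim)  visual node features (appearance or motion)
        # pose: (N, pose_dim)  flattened pose keypoints per person
        # pos:  (N, pos_dim)   person positions in the frame
        cue = torch.cat([self.pose_proj(pose), self.pos_proj(pos)], dim=-1)
        # Pairwise cue embedding for each person pair (simple average here).
        pair = 0.5 * (cue.unsqueeze(0) + cue.unsqueeze(1))        # (N, N, 2*hidden)
        attn = F.softmax(self.score(pair).squeeze(-1), dim=-1)    # (N, N) relevance
        agg = attn @ x                                            # weighted neighbors
        return F.relu(self.update(torch.cat([x, agg], dim=-1)))   # refined nodes
```

Similarly, the semantic-preserving loss can be pictured as pulling the learned high-level representations toward the linguistic embeddings of their ground-truth labels. A cosine-distance form is assumed below for concreteness; the paper's exact formulation is not given in the abstract.

```python
import torch.nn.functional as F

def semantic_preserving_loss(visual_repr, label_embed, labels):
    # visual_repr: (B, D) learned high-level representations, projected
    #              into the same space as the label embeddings (assumed)
    # label_embed: (C, D) linguistic embeddings of the C class labels
    # labels:      (B,)   ground-truth class indices (LongTensor)
    target = label_embed[labels]                            # (B, D)
    cos = F.cosine_similarity(visual_repr, target, dim=-1)  # (B,)
    return (1.0 - cos).mean()  # penalize drift from true-label semantics
```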
