Article

Visual-semantic graph neural network with pose-position attentive learning for group activity recognition

Journal

NEUROCOMPUTING
Volume 491, Pages 217-231

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.03.066

Keywords

Group activity recognition; Graph neural network; Visual-semantic context; Pose-position attentive learning

Abstract

The article proposes a method for recognizing group activities based on a visual-semantic graph neural network with pose-position attentive learning. The method improves group activity recognition by constructing a bi-modal visual graph and a semantic graph, and by using pose and position information for attentive aggregation.
Video-based group activities typically involve interactive contexts across diverse visual modalities between multiple persons, as well as semantic relationships between individual actions. Nevertheless, the majority of existing methods for group activity recognition either capture the relationships among persons using only the RGB modality or neglect to exploit the label hierarchy between individual actions and the group activity. To tackle these issues, we propose a visual-semantic graph neural network with pose-position attentive learning (VSGNN-PAL) for group activity recognition. Specifically, we first extract individual-level appearance and motion representations from RGB and optical-flow inputs to build a bi-modal visual graph. Two attentive aggregators are further proposed to integrate pose and position information to measure the relevance scores between persons, and to dynamically refine the representation of each visual node from both modality-specific and cross-modal perspectives. To model a semantic hierarchy from the label space, we construct a semantic graph based on the linguistic embeddings of individual-action and group-activity labels. We further employ a bi-directional mapping learning scheme to integrate the label-relation-aware semantic context into the visual representations. In addition, a global reasoning module is introduced to progressively generate group-level representations while preserving the scene description. Furthermore, we formulate a semantic-preserving loss to maintain consistency between the learned high-level representations and the semantics of the ground-truth labels. Experimental results on three group activity benchmarks demonstrate that the proposed method achieves state-of-the-art performance.

© 2022 Elsevier B.V. All rights reserved.
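The pose-position attentive aggregation and the semantic-preserving loss described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the module name PosePositionAttention, all argument names, and the exact scoring and loss functions are assumptions, one plausible reading of "integrate pose and position information to measure the relevance scores between persons" and of a loss that keeps learned representations consistent with label semantics.

```python
# Hypothetical sketch of a pose-position attentive aggregator; not the
# authors' released code. The relevance score combines content attention,
# pose similarity, and a relative-position bias, per the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePositionAttention(nn.Module):
    def __init__(self, feat_dim, pose_dim, pos_dim, hidden_dim=128):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)
        self.key = nn.Linear(feat_dim, hidden_dim)
        self.value = nn.Linear(feat_dim, feat_dim)
        self.pose_proj = nn.Linear(pose_dim, hidden_dim)  # pose similarity term
        self.pos_bias = nn.Linear(pos_dim, 1)             # relative-position term

    def forward(self, feats, pose, pos):
        # feats: (N, feat_dim) per-person visual features (RGB or flow nodes)
        # pose:  (N, pose_dim) pose embeddings (e.g., flattened keypoints)
        # pos:   (N, pos_dim)  bounding-box centers, e.g., (x, y)
        q, k = self.query(feats), self.key(feats)
        content = q @ k.t() / k.size(-1) ** 0.5           # (N, N) content score
        p = self.pose_proj(pose)
        pose_sim = p @ p.t() / p.size(-1) ** 0.5          # (N, N) pose score
        rel = pos.unsqueeze(1) - pos.unsqueeze(0)         # (N, N, pos_dim) offsets
        pos_term = self.pos_bias(rel).squeeze(-1)         # (N, N) position score
        attn = F.softmax(content + pose_sim + pos_term, dim=-1)
        # Residual refinement of each visual node by its attended neighbors.
        return feats + attn @ self.value(feats)

def semantic_preserving_loss(group_repr, label_embed):
    # Hypothetical form: pull the learned group-level representation toward
    # the linguistic embedding of the ground-truth activity label.
    return 1 - F.cosine_similarity(group_repr, label_embed, dim=-1).mean()
```

A cross-modal variant of the same aggregator would let RGB-node queries attend over optical-flow keys and values (and vice versa), with the same pose and position biases added to the scores.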
