Article

Dual-attention Network for View-invariant Action Recognition

Journal

COMPLEX & INTELLIGENT SYSTEMS

Publisher

Springer Heidelberg
DOI: 10.1007/s40747-023-01171-8

Keywords

Human action recognition; Self-attention; Cross-attention; Dual-attention; Attention transfer; View-invariant representation

Abstract

View-invariant action recognition has been widely researched for applications such as visual surveillance and human-robot interaction. It remains challenging, however, because view changes cause occlusions and loss of action information. Modeling the spatiotemporal dynamics of body joints while minimizing the representation discrepancy between views offers a promising solution. We therefore propose a Dual-Attention Network (DANet) that learns robust video representations for view-invariant action recognition. The DANet is composed of a relation-aware spatiotemporal self-attention module and a spatiotemporal cross-attention module. The self-attention module learns representative and discriminative action features, capturing local and global long-range dependencies as well as pairwise relations among human body parts and joints in both the spatial and temporal domains. The cross-attention module learns view-invariant attention maps and generates discriminative features for semantic representations of actions across views. We extensively evaluate the proposed approach on the large-scale, challenging NTU-60, NTU-120, and UESTC datasets under multiple evaluation protocols, including Cross-Subject, Cross-View, Cross-Set, and Arbitrary-view. The experimental results demonstrate that the proposed approach significantly outperforms state-of-the-art approaches in view-invariant action recognition.
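
To make the dual-attention idea concrete, the sketch below pairs within-view self-attention with cross-attention to a second view's feature stream, which is the basic wiring the abstract describes. This is a minimal illustration under our own assumptions, not the authors' implementation: the class name DualAttentionBlock, the token layout (joints x frames flattened into one sequence), the dimensions, and the residual/norm placement are all hypothetical, and the paper's relation-aware attention and attention-transfer details are not reproduced.

    # Illustrative sketch only (PyTorch): a minimal dual-attention block.
    # All names, shapes, and the fusion strategy are assumptions for
    # exposition; they do not reproduce the paper's architecture.
    import torch
    import torch.nn as nn

    class DualAttentionBlock(nn.Module):
        def __init__(self, dim: int = 256, heads: int = 8):
            super().__init__()
            # Self-attention: dependencies among body-joint tokens
            # within one view's spatiotemporal sequence.
            self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # Cross-attention: aligns this view's features with another
            # view's, encouraging view-invariant representations.
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)

        def forward(self, x: torch.Tensor, other_view: torch.Tensor) -> torch.Tensor:
            # x, other_view: (batch, tokens, dim); tokens index
            # joints x frames flattened into one sequence.
            h, _ = self.self_attn(x, x, x)                      # within-view
            x = self.norm1(x + h)
            h, _ = self.cross_attn(x, other_view, other_view)   # across views
            return self.norm2(x + h)

    if __name__ == "__main__":
        block = DualAttentionBlock()
        view_a = torch.randn(2, 25 * 16, 256)  # e.g., 25 joints x 16 frames
        view_b = torch.randn(2, 25 * 16, 256)
        out = block(view_a, view_b)
        print(out.shape)  # torch.Size([2, 400, 256])

In this toy setup, the cross-attention step is what ties the two views together: queries come from one view while keys and values come from the other, so features that survive the residual fusion are those consistent across viewpoints.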
