4.5 Article

View-Adaptive Graph Neural Network for Action Recognition

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCDS.2022.3204905

Keywords

Three-dimensional displays; Joints; Bones; Cameras; Urban areas; Transforms; Spatiotemporal phenomena; 3-D skeleton; action recognition; graph convolution neural network; view adaptive (VA)

Abstract

This article proposes a view-adaptive mechanism that transforms the skeleton view into a more consistent virtual perspective, reducing the influence of view variations.
Skeleton-based recognition of human actions has received growing attention in recent years because of the popularity of 3-D acquisition sensors. Existing studies use 3-D skeleton data from video clips collected from several views. The body orientation relative to the camera shifts when humans perform certain actions, resulting in unstable and noisy skeletal data. In this article, we develop a view-adaptive (VA) mechanism that identifies the viewpoints across the sequence and transforms the skeleton view through a data-driven learning process to counteract the influence of view variations. Most existing methods reposition skeletons using fixed, human-defined prior criteria. We instead adopt an unsupervised repositioning approach and jointly design a VA neural network based on the graph neural network (GNN). Our VA-GNN model transforms skeletons observed from distinct views into a considerably more consistent virtual perspective, improving over preprocessing-based approaches. The VA module learns the best observation view: it determines the most suitable viewpoint and transforms the skeletons of the action sequence accordingly, while the adaptive GNN learns a suitable graph topology, so the whole model performs end-to-end recognition. Our strategy thus reduces the influence of view variance, allowing the network to focus on learning action-specific properties and resulting in improved performance. Experiments on four benchmark data sets verify the accuracy of the proposed approach.
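Since the abstract describes the mechanism only in prose, the sketch below illustrates the general idea in PyTorch: a per-frame regressor predicts a rotation and translation that re-observes the skeleton from a learned virtual viewpoint, and a graph convolution with a trainable adjacency stands in for the adaptive GNN. This is a minimal illustration under assumed tensor shapes (clips x frames x joints x 3); the module names (ViewAdaptiveTransform, AdaptiveGraphConv), the MLP regressors, and all hyperparameters are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


def _rotation_matrices(angles):
    """Build per-frame 3x3 rotation matrices from Euler angles.

    angles: (N, T, 3) rotations about the x, y, and z axes.
    Returns a (N, T, 3, 3) tensor R = Rz @ Ry @ Rx.
    """
    a, b, c = angles[..., 0], angles[..., 1], angles[..., 2]
    zeros, ones = torch.zeros_like(a), torch.ones_like(a)
    rx = torch.stack([ones, zeros, zeros,
                      zeros, a.cos(), -a.sin(),
                      zeros, a.sin(), a.cos()], dim=-1).view(*a.shape, 3, 3)
    ry = torch.stack([b.cos(), zeros, b.sin(),
                      zeros, ones, zeros,
                      -b.sin(), zeros, b.cos()], dim=-1).view(*b.shape, 3, 3)
    rz = torch.stack([c.cos(), -c.sin(), zeros,
                      c.sin(), c.cos(), zeros,
                      zeros, zeros, ones], dim=-1).view(*c.shape, 3, 3)
    return rz @ ry @ rx


class ViewAdaptiveTransform(nn.Module):
    """Learns, per frame, a rotation and translation of the skeleton so that
    every sequence is observed from a more consistent virtual viewpoint."""

    def __init__(self, num_joints, hidden=64):
        super().__init__()
        in_dim = num_joints * 3
        # Small regressors predicting 3 Euler angles and a 3-D offset per frame.
        self.rot_net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.trans_net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, x):
        # x: (N, T, V, 3) skeleton sequence -- N clips, T frames, V joints, 3-D coords.
        n, t, v, _ = x.shape
        flat = x.reshape(n, t, v * 3)
        rot = _rotation_matrices(self.rot_net(flat))   # (N, T, 3, 3)
        trans = self.trans_net(flat).unsqueeze(2)      # (N, T, 1, 3)
        # Re-observe every joint under the learned virtual view.
        return torch.einsum('ntij,ntvj->ntvi', rot, x - trans)


class AdaptiveGraphConv(nn.Module):
    """Graph convolution over joints with a learnable (adaptive) adjacency."""

    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_joints))  # learned graph topology
        self.proj = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (N, T, V, C) -- aggregate neighbour joints, then project channels.
        agg = torch.einsum('uv,ntvc->ntuc', self.adj.softmax(dim=-1), x)
        return self.proj(agg)
```

In a full model, the output of ViewAdaptiveTransform would feed a stack of such graph-convolution layers followed by temporal modeling and a classifier, trained end to end so that the view transform and the recognition network are optimized jointly, as the abstract describes.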
