Article

Multi-scale skeleton adaptive weighted GCN for skeleton-based human action recognition in IoT

Journal

APPLIED SOFT COMPUTING
Volume 104, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2021.107236

Keywords

Internet of things (IoT); Skeleton-based human action recognition; Graph convolution network (GCN); Graph topology; Multi-scale

Funding

  1. 111 project, China [B17007]
  2. Director Funds of Beijing Key Laboratory of Network System Architecture and Convergence, China [2017BKLNSAC-ZJ-01]

Abstract

Skeleton-based human action recognition has become a hot topic due to its potential advantages. Graph convolution networks (GCNs) have achieved remarkable performance in modeling skeleton-based human action recognition in IoT. A powerful feature extractor is essential to capture robust spatial-temporal features from the human skeleton. However, most GCN-based methods use a fixed graph topology. Besides, only single-scale features are used and multi-scale information is ignored. In this paper, we propose a multi-scale skeleton adaptive weighted graph convolution network (MSAWGCN) for skeleton-based action recognition. Specifically, a multi-scale skeleton graph convolution network is adopted to extract richer spatial features of skeletons. Moreover, we develop a simple graph vertex fusion strategy, which learns the latent graph topology adaptively by replacing the handcrafted adjacency matrix with a learnable matrix. According to different sampling strategies, a weighted learning method is adopted to enrich features during aggregation. Experiments on three large datasets show that the proposed method achieves performance comparable to state-of-the-art methods, attaining improvements of 0.9% and 0.7% over a recent GCN-based method on the NTU RGB+D and Kinetics datasets, respectively. (c) 2021 Elsevier B.V. All rights reserved.
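The two ideas the abstract highlights can be illustrated with a minimal sketch: a spatial graph convolution that (a) aggregates over several neighborhood scales and (b) replaces the fixed, handcrafted skeleton adjacency with an adaptively learnable topology. This is not the paper's actual architecture; the k-hop scales (modeled here as powers of the normalized adjacency), the zero-initialized residual matrices `b_scales`, and all class and parameter names are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetrically normalize an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class MultiScaleAdaptiveGraphConv:
    """Sketch of a multi-scale spatial graph convolution (hypothetical,
    not the MSAWGCN implementation). Scale k aggregates roughly k-hop
    neighbors via the k-th power of the normalized skeleton adjacency;
    each scale also carries a learnable additive residual adjacency,
    so the handcrafted graph topology can adapt during training."""
    def __init__(self, a_skeleton, in_ch, out_ch, num_scales=3, seed=0):
        rng = np.random.default_rng(seed)
        a_norm = normalize_adjacency(a_skeleton)
        # Fixed k-hop topologies: A_norm^1, A_norm^2, ..., A_norm^num_scales.
        self.a_scales = [np.linalg.matrix_power(a_norm, k + 1)
                         for k in range(num_scales)]
        # Learnable residual adjacency per scale (zero-initialized,
        # would be updated by gradient descent in a real model).
        self.b_scales = [np.zeros_like(a_norm) for _ in range(num_scales)]
        # One feature-transform weight matrix per scale.
        self.w_scales = [rng.standard_normal((in_ch, out_ch)) * 0.1
                         for _ in range(num_scales)]

    def forward(self, x):
        # x: (num_joints, in_ch) joint features for one frame.
        # Sum the per-scale aggregations (A_k + B_k) X W_k.
        return sum((a + b) @ x @ w
                   for a, b, w in zip(self.a_scales,
                                      self.b_scales,
                                      self.w_scales))

# Usage on a toy 3-joint chain skeleton (joint 0 - joint 1 - joint 2):
a = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
layer = MultiScaleAdaptiveGraphConv(a, in_ch=2, out_ch=4)
y = layer.forward(np.ones((3, 2)))  # per-joint output features, shape (3, 4)
```

Because the residual matrices start at zero, the layer initially behaves like a plain multi-scale GCN on the fixed skeleton graph; training can then move the effective topology away from the handcrafted one.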
