Proceedings Paper

Revisiting Skeleton-based Action Recognition

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR52688.2022.00298

Keywords

-

Funding

  1. General Research Funds of Hong Kong [14203518]
  2. RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative
  3. Shanghai Committee of Science and Technology [20DZ1100800]

Abstract

Human skeleton, as a compact representation of human action, has received increasing attention in recent years. Many skeleton-based action recognition methods adopt GCNs to extract features on top of human skeletons. Despite the positive results shown in these attempts, GCN-based methods are subject to limitations in robustness, interoperability, and scalability. In this work, we propose PoseConv3D, a new approach to skeleton-based action recognition. PoseConv3D relies on a 3D heatmap volume instead of a graph sequence as the base representation of human skeletons. Compared to GCN-based methods, PoseConv3D is more effective in learning spatiotemporal features, more robust against pose estimation noises, and generalizes better in cross-dataset settings. Also, PoseConv3D can handle multiple-person scenarios without additional computation costs. The hierarchical features can be easily integrated with other modalities at early fusion stages, providing a great design space to boost the performance. PoseConv3D achieves the state-of-the-art on five of six standard skeleton-based action recognition benchmarks. Once fused with other modalities, it achieves the state-of-the-art on all eight multi-modality action recognition benchmarks. Code has been made available at: https://github.com/kennymckormick/pyskl.
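
The core design described in the abstract is the 3D heatmap volume: instead of a joint-graph sequence, each skeleton joint is rendered as a per-frame Gaussian pseudo heatmap, and the frames are stacked along time into a V x T x H x W tensor (V joints, T frames) that a 3D-CNN can consume directly. The sketch below illustrates this idea only; it is not the authors' code, and the function name, array shapes, and sigma value are illustrative assumptions (the released implementation in the linked pyskl repository is the authoritative reference).

```python
import numpy as np

def keypoints_to_heatmap_volume(keypoints, scores, height, width, sigma=0.6):
    """Stack per-frame pseudo heatmaps into a (V, T, H, W) volume.

    keypoints: (T, V, 2) array of (x, y) joint coordinates per frame.
    scores:    (T, V) array of keypoint confidence scores.
    Returns:   (V, T, H, W) float32 volume, one channel per joint.
    """
    T, V, _ = keypoints.shape
    volume = np.zeros((V, T, height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for t in range(T):
        for v in range(V):
            x, y = keypoints[t, v]
            c = scores[t, v]
            if c <= 0:
                continue  # skip missing or zero-confidence joints
            # Gaussian centred on the joint, weighted by its confidence;
            # taking the max across people extends this to multi-person clips.
            g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
            volume[v, t] = np.maximum(volume[v, t], c * g)
    return volume

# Usage (hypothetical numbers): 17 COCO joints, 48 frames, 56x56 canvas.
kps = np.random.rand(48, 17, 2) * 56
conf = np.ones((48, 17))
vol = keypoints_to_heatmap_volume(kps, conf, height=56, width=56)
print(vol.shape)  # (17, 48, 56, 56), ready as input to a 3D-CNN
```

Because every joint of every person is rendered into the same fixed-size volume, adding more people only adds more Gaussians to existing channels, which is consistent with the abstract's claim that multi-person scenarios incur no additional computation cost in the network itself.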
