Proceedings Paper

TokenPose: Learning Keypoint Tokens for Human Pose Estimation

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.01112


Funding

  1. National Key R&D Plan of the Ministry of Science and Technology [2020AAA0104400]
  2. National Key Research and Development Program of China [2018YFB1800204]
  3. National Natural Science Foundation of China [61773117, 61771273]
  4. R&D Program of Shenzhen [JCYJ20180508152204044]


This paper introduces a token-based approach to human pose estimation that learns constraint relationships and appearance cues simultaneously, achieving performance comparable to existing methods in experiments.
Human pose estimation relies heavily on visual clues and anatomical constraints between parts to locate keypoints. Most existing CNN-based methods do well at visual representation; however, they lack the ability to explicitly learn the constraint relationships between keypoints. In this paper, we propose a novel approach based on Token representation for human Pose estimation (TokenPose). In detail, each keypoint is explicitly embedded as a token to simultaneously learn constraint relationships and appearance cues from images. Extensive experiments show that the small and large TokenPose models are on par with state-of-the-art CNN-based counterparts while being more lightweight. Specifically, our TokenPose-S and TokenPose-L achieve 72.5 AP and 75.8 AP on the COCO validation dataset respectively, with significant reductions in parameters (↓80.6%; ↓56.8%) and GFLOPs (↓75.3%; ↓24.7%). Code is publicly available(1).
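The core idea in the abstract can be illustrated with a toy sketch: image patches become visual tokens, a learnable embedding is added for each keypoint, and self-attention lets keypoint tokens gather appearance cues from patch tokens while also attending to each other (modeling inter-keypoint constraints). This is a minimal numpy illustration with random weights, toy dimensions, and a single attention layer; it is not the actual TokenPose architecture or its configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_patches(img, p):
    # (H, W, C) image -> (num_patches, p*p*C) flattened patches
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # single-head scaled dot-product attention over all tokens
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

# toy sizes (assumptions for illustration, not the paper's settings)
img_h = img_w = 32
p = 8          # patch size -> (32/8)^2 = 16 patch tokens
C = 3
d = 16         # embedding dimension
num_kpts = 17  # COCO defines 17 keypoints

img = rng.standard_normal((img_h, img_w, C))

# visual tokens: linear projection of flattened patches
patch_tokens = split_patches(img, p) @ rng.standard_normal((p * p * C, d))
# keypoint tokens: one learnable embedding per keypoint
kpt_tokens = rng.standard_normal((num_kpts, d))

# concatenate and run attention: keypoint tokens attend both to patch
# tokens (appearance cues) and to each other (constraint relationships)
tokens = np.concatenate([kpt_tokens, patch_tokens], axis=0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)

# decode each keypoint token's output into a coarse heatmap over patches
heatmaps = out[:num_kpts] @ rng.standard_normal((d, (img_h // p) * (img_w // p)))
print(heatmaps.shape)  # (17, 16)
```

In the actual model, this attention step is stacked into a full transformer and trained end-to-end, so the keypoint embeddings learn anatomical constraints directly from data rather than being hand-specified.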

Authors


Reviews

Primary Rating

3.8
Not enough ratings

