Article

Repeated Cross-Scale Structure-Induced Feature Fusion Network for 2D Hand Pose Estimation

Journal

Entropy
Volume 25, Issue 5, Article 724

Publisher

MDPI
DOI: 10.3390/e25050724

Keywords

hand pose estimation; RGB image; self-occluded; multi-layer features; feature fusion

Abstract

Convolutional neural networks have recently brought dramatic improvements to hand pose estimation from RGB images. However, inferring self-occluded keypoints remains a challenging task. We argue that occluded keypoints cannot be readily recognized from traditional appearance features alone, and that sufficient contextual information among the keypoints is needed to guide feature learning. We therefore propose a new repeated cross-scale structure-induced feature fusion network that learns rich keypoint representations 'informed' by the relationships between different abstraction levels of features. Our network consists of two modules: GlobalNet and RegionalNet. GlobalNet roughly locates hand joints using a new feature pyramid structure that combines higher-level semantic information with more global spatial-scale information. RegionalNet further refines keypoint representation learning via a four-stage cross-scale feature fusion network, which learns shallow appearance features guided by implicit hand structure information, so that the network can exploit these augmented features to better locate occluded keypoints. Experimental results show that our method outperforms state-of-the-art methods for 2D hand pose estimation on two public datasets, STB and RHD.
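The abstract only outlines the two-module design, so the following is a minimal PyTorch-style sketch of how such a pipeline could be wired together: a GlobalNet that builds a small feature pyramid and predicts coarse heatmaps, and a RegionalNet that applies four repeated cross-scale fusion stages to refine them. All layer widths, the number of pyramid levels, the fusion rule, and the 21-keypoint assumption are illustrative choices, not the authors' exact architecture.

```python
# Hypothetical sketch of the GlobalNet + RegionalNet pipeline described in the
# abstract. Channel counts, depths, and the fusion rule are assumptions made
# for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KEYPOINTS = 21  # common hand-keypoint count; assumed here


class GlobalNet(nn.Module):
    """Builds a small feature pyramid and predicts coarse keypoint heatmaps."""

    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_c = 3
        for c in channels:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, 3, stride=2, padding=1),
                nn.BatchNorm2d(c), nn.ReLU(inplace=True)))
            in_c = c
        # Lateral 1x1 convs map every pyramid level to a common width.
        self.laterals = nn.ModuleList(nn.Conv2d(c, 64, 1) for c in channels)
        self.head = nn.Conv2d(64, NUM_KEYPOINTS, 1)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # Top-down pathway: upsample deeper (more semantic) features and add
        # them to shallower (higher-resolution) ones.
        fused = self.laterals[-1](feats[-1])
        for lat, f in zip(reversed(self.laterals[:-1]), reversed(feats[:-1])):
            fused = lat(f) + F.interpolate(
                fused, size=f.shape[-2:], mode="bilinear", align_corners=False)
        return fused, self.head(fused)  # shared features + coarse heatmaps


class CrossScaleFusionStage(nn.Module):
    """One fusion stage: exchange information between two resolutions."""

    def __init__(self, c=64):
        super().__init__()
        self.down = nn.Conv2d(c, c, 3, stride=2, padding=1)  # high -> low res
        self.mix_high = nn.Conv2d(c, c, 3, padding=1)
        self.mix_low = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, high, low):
        low = F.relu(self.mix_low(low + self.down(high)))
        up = F.interpolate(low, size=high.shape[-2:], mode="bilinear",
                           align_corners=False)
        high = F.relu(self.mix_high(high + up))
        return high, low


class RegionalNet(nn.Module):
    """Refines the coarse estimate with repeated cross-scale fusion stages."""

    def __init__(self, c=64, num_stages=4):
        super().__init__()
        self.to_low = nn.Conv2d(c, c, 3, stride=2, padding=1)
        self.stages = nn.ModuleList(
            CrossScaleFusionStage(c) for _ in range(num_stages))
        self.head = nn.Conv2d(c, NUM_KEYPOINTS, 1)

    def forward(self, feats):
        high, low = feats, F.relu(self.to_low(feats))
        for stage in self.stages:  # four repeated fusion stages
            high, low = stage(high, low)
        return self.head(high)  # refined heatmaps


if __name__ == "__main__":
    img = torch.randn(1, 3, 256, 256)
    global_net, regional_net = GlobalNet(), RegionalNet()
    shared_feats, coarse = global_net(img)
    refined = regional_net(shared_feats)
    print(coarse.shape, refined.shape)  # both (1, 21, 128, 128) in this sketch
```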
