Article

Hand gesture recognition framework using a Lie group based spatio-temporal recurrent network with multiple hand-worn motion sensors

Journal

INFORMATION SCIENCES
Volume 606, Pages 722-741

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.05.085

Keywords

Hand gesture recognition; Lie group; Motion modeling; Wearable sensors

Funding

  1. National Major Science and Technology Projects of China [2018AAA0100703]
  2. National Natural Science Foundation of China [61977012, 61977054]
  3. China Scholarship Council [201906995003]
  4. Central Universities in China [2021CDJYGRH011, 2017CDJSK06PT10, 2020CDJSK06PT14]
  5. Program for Innovation Research Groups at Institutions of Higher Education in Chongqing [CXQT21032]
  6. Key Research Programme of Chongqing Science & Technology Commission [cstc2019jscx-fxydX0054]


The goal of hand gesture recognition with wearables is to enable gestural user interfaces in mobile and ubiquitous environments. This research addresses the challenges of diverse hand gestures and the need for accurate motion representation and modeling. The proposed framework, STGauntlet, effectively characterizes hand motion context and achieves high accuracy in real-time gesture recognition.
The primary goal of hand gesture recognition with wearables is to facilitate the realization of gestural user interfaces in mobile and ubiquitous environments. A key challenge in wearable-based hand gesture recognition is that a hand gesture can be performed in several ways, each consisting of its own configuration of motions and their spatio-temporal dependencies. However, existing methods generally focus on the characteristics of a single point on the hand and ignore the diversity of motion information over the hand skeleton; as a result, they face two key challenges in characterizing hand gestures over multiple wearable sensors: motion representation and motion modeling. This leads us to define a spatio-temporal framework, named STGauntlet, that explicitly characterizes the hand motion context of spatio-temporal relations among multiple bones and detects hand gestures in real time. In particular, our framework incorporates a Lie group-based representation to capture the inherent structural varieties of hand motions with spatio-temporal dependencies among multiple bones. To evaluate our framework, we developed a hand-worn prototype device with multiple motion sensors. Our in-lab study on a dataset collected from nine subjects suggests that our approach significantly outperforms the state-of-the-art methods, achieving 98.2% and 95.6% average accuracies for subject-dependent and subject-independent gesture recognition, respectively. We also demonstrate in-the-wild applications that highlight the interaction capability of our framework. (c) 2022 Elsevier Inc. All rights reserved.
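
The abstract refers to a Lie group-based representation of hand motion over multiple bones. As a rough, illustrative sketch only (not the authors' implementation; the function names and feature layout below are assumptions), one common way to realize such a representation is to treat the relative rotation between adjacent bones as an element of SO(3) and map it to the Lie algebra so(3) via the logarithm map, producing a flat per-frame feature vector that a spatio-temporal recurrent model could consume.

```python
# Minimal sketch (assumed, not the paper's code): per-frame Lie group features
# from bone orientations measured by multiple hand-worn motion sensors.
import numpy as np


def log_map_SO3(R: np.ndarray) -> np.ndarray:
    """Logarithm map: rotation matrix R in SO(3) -> axis-angle vector in R^3."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    return (theta / (2.0 * np.sin(theta))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]
    )


def frame_features(bone_rotations: list) -> np.ndarray:
    """Concatenate Lie-algebra coordinates of the relative rotations between
    consecutive bones into one feature vector for a single time step."""
    feats = []
    for R_a, R_b in zip(bone_rotations[:-1], bone_rotations[1:]):
        R_rel = R_a.T @ R_b  # rotation of bone b expressed in bone a's frame
        feats.append(log_map_SO3(R_rel))
    return np.concatenate(feats)


def Rz(t: float) -> np.ndarray:
    """Rotation by angle t about the z-axis (toy stand-in for sensor output)."""
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])


# Example: three bone orientations at one time step -> 6-dim feature vector.
# A sequence of such vectors over time would form the recurrent network's input.
bones = [np.eye(3), Rz(0.3), Rz(0.7)]
print(frame_features(bones))  # approx. [0, 0, 0.3, 0, 0, 0.4]
```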
