Article

A Multimodal Dynamic Hand Gesture Recognition Based on Radar-Vision Fusion

Journal

IEEE Transactions on Instrumentation and Measurement

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TIM.2023.3253906

Keywords

Radar; Sensors; Gesture recognition; Feature extraction; Hidden Markov models; Cameras; Reliability; Deep learning; frequency-modulated continuous-wave (FMCW); hand gesture recognition (HGR); millimeter-wave (MMW); multimodal fusion


This paper proposes a multimodal dynamic hand gesture recognition method based on a two-branch fusion deformable network with Gram matching. The method effectively improves the adaptability of the classifier to complex environments and exhibits satisfactory robustness across multiple subjects.
In increasingly complex hand gesture recognition (HGR) scenarios, reliable recognition is challenging because individual sensors do not adapt well to the environment and personal habits differ across users. Multisensor fusion has been deemed an effective way to overcome the limitations of a single sensor. However, existing HGR research rarely establishes effective bridges between heterogeneous multimodal information. To address this issue, we propose a novel multimodal dynamic HGR method based on a two-branch fusion deformable network with Gram matching. First, a time-synchronization method is designed to preprocess the multimodal data. Second, a two-branch network is proposed to perform gesture classification based on radar-vision fusion: the input convolution is replaced with a deformable convolution to improve the generalization of gesture motion modeling, and a long short-term memory (LSTM) unit extracts the temporal features of dynamic hand gestures. Third, Gram matching is presented as a loss function to mine high-dimensional heterogeneous information and maintain the integrity of radar-vision fusion. The experimental results indicate that the proposed method effectively improves the adaptability of the classifier to complex environments and exhibits satisfactory robustness across multiple subjects. Furthermore, ablation analysis shows that the deformable convolution and the Gram loss not only provide reliable gesture recognition but also enhance the generalization ability of the proposed method in different field-of-view scenarios.
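For illustration only, the following is a minimal sketch, not the authors' released code, of how a two-branch radar-vision classifier with a Gram-matching auxiliary loss might be organized in PyTorch. Every name here (gram_matrix, gram_matching_loss, TwoBranchHGR) is hypothetical, the layer sizes are arbitrary, and plain convolutions stand in for the deformable convolution (e.g., torchvision.ops.DeformConv2d) described in the abstract; the Gram-matching term is likewise only one plausible reading of the abstract's description.

# Hypothetical sketch (not the authors' code): a two-branch radar-vision
# classifier with a Gram-matrix matching loss, written in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # Channel-by-channel Gram matrix of a (N, C, H, W) feature map,
    # normalized so its scale is independent of the spatial resolution.
    n, c, h, w = features.shape
    flat = features.view(n, c, h * w)
    return torch.bmm(flat, flat.transpose(1, 2)) / (c * h * w)


def gram_matching_loss(radar_feat: torch.Tensor,
                       vision_feat: torch.Tensor) -> torch.Tensor:
    # One plausible reading of "Gram matching": penalize the distance
    # between the Gram matrices of the two modality branches.
    return F.mse_loss(gram_matrix(radar_feat), gram_matrix(vision_feat))


class TwoBranchHGR(nn.Module):
    # Per-frame CNN features for each modality, an LSTM over time,
    # and late fusion of the two temporal embeddings for classification.
    def __init__(self, num_classes: int = 8, hidden: int = 128):
        super().__init__()
        self.radar_cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)))
        self.vision_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)))
        self.radar_lstm = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        self.vision_lstm = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, radar_seq, vision_seq):
        # radar_seq: (B, T, 1, H, W) radar frames (assumed format),
        # vision_seq: (B, T, 3, H, W) time-synchronized camera frames.
        b, t = radar_seq.shape[:2]
        r = self.radar_cnn(radar_seq.flatten(0, 1))    # (B*T, 32, 8, 8)
        v = self.vision_cnn(vision_seq.flatten(0, 1))  # (B*T, 32, 8, 8)
        aux = gram_matching_loss(r, v)                 # cross-modal Gram term
        _, (rh, _) = self.radar_lstm(r.flatten(1).view(b, t, -1))
        _, (vh, _) = self.vision_lstm(v.flatten(1).view(b, t, -1))
        logits = self.classifier(torch.cat([rh[-1], vh[-1]], dim=1))
        return logits, aux

During training, the total objective would presumably combine a standard classification loss on the logits with a weighted Gram-matching term, e.g. F.cross_entropy(logits, labels) + lambda_gram * aux, where lambda_gram is a tuning weight not specified in the abstract.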
