Article

ViT-LLMR: Vision Transformer-based lower limb motion recognition from fusion signals of MMG and IMU

Journal

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2022.104508

Keywords

Mechanomyography; Vision Transformer; Attention mechanism; Signal fusion

This paper proposes a Vision Transformer (ViT)-based architecture for lower limb motion recognition using multichannel Mechanomyography (MMG) signals and kinematic data. The proposed architecture avoids manual feature extraction and selection and achieves an accuracy of 94.62% in recognizing eight lower limb motions.
One of the key problems in lower limb-based human-computer interaction (HCI) technology is using wearable devices to recognize the wearer's lower limb motions. The information commonly used to discriminate human motion mainly comprises biological and kinematic signals. Since unimodal signals do not provide enough information to recognize lower limb movements, in this paper we propose a Vision Transformer (ViT)-based architecture for lower limb motion recognition from multichannel Mechanomyography (MMG) signals and kinematic data. First, we apply the self-attention mechanism to enhance each input channel signal; the data are then fed into the ViT model. The Vision Transformer-based Lower Limb Motion Recognition (ViT-LLMR) architecture proposed in this paper avoids model training problems such as the manual feature extraction and feature selection required by conventional machine learning, and the model recognizes eight lower limb motions across six subjects with an accuracy of 94.62%. In addition, we analyze the generalization ability of the model under undersampling and when only signal fragments are collected. In conclusion, the proposed ViT-LLMR architecture could provide a basis for practical applications in different HCI fields.
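The per-channel enhancement step described above can be sketched as scaled dot-product self-attention applied across the input channels before the ViT classifier. The following minimal NumPy sketch illustrates the mechanism only; the channel count, feature length, projection sizes, and random weights are hypothetical and not taken from the paper.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over signal channels.

    x: (channels, features) matrix, one row per MMG/IMU channel
       (hypothetical layout for illustration).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax: each channel attends to all channels,
    # producing a weighted mixture that "enhances" its features.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n_channels, n_features, d = 6, 32, 16  # hypothetical sizes
x = rng.standard_normal((n_channels, n_features))
w_q = rng.standard_normal((n_features, d))
w_k = rng.standard_normal((n_features, d))
w_v = rng.standard_normal((n_features, d))

enhanced = self_attention(x, w_q, w_k, w_v)
print(enhanced.shape)  # (6, 16)
```

In the actual architecture, the enhanced channel representations would then be patch-embedded and passed to the ViT encoder for eight-class motion classification.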
