Article

Event-Driven Edge Deep Learning Decoder for Real-Time Gesture Classification and Neuro-Inspired Rehabilitation Device Control

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TIM.2023.3323962

Keywords

Electroencephalography (EEG); electromyography (EMG); hand gesture; human-machine interface (HMI); tinyML


Assistive neuro-inspired rehabilitation devices are essential in the activities of daily living of people who have suffered a spinal cord injury (SCI), stroke, or limb amputation. Neuro-inspired rehabilitation devices typically use a single-modal biosignal with a conventional machine learning algorithm on an embedded edge device for gesture classification. Although deep learning decoders provide high-accuracy gesture classification, the mismatch between their computational complexity and the resources available on edge devices has limited the deployment of real-time gesture inference on embedded hardware. In this study, we describe an event-driven, edge-compatible deep neural network (DNN) capable of classifying gestures from a single or hybrid biosignal detected at the edge. The DNN-based decoders were deployed on a field-programmable gate array (FPGA) to classify motor intent acquired from the biosensors for intuitive control of a 3-D-printed upper limb rehabilitation device. The study was validated with 33 subjects offline and on-device. Offline, average classification accuracies of 93.14% for single-modal electromyography (EMG-Net), 50.42% for single-modal electroencephalography (EEG-Net), and 93.35% for the hybrid-modal biosignal (Hybrid-Net) were obtained using the 8-bit fixed-point quantization-aware method, while real-time inference on the FPGA resulted in 94.97%, 58.27%, and 92.73%, respectively. Shifting the EMG biosensor by 5 cm to examine model degradation yielded accuracy losses of 11.5% and 2.64% for the on-device EMG-Net and Hybrid-Net, respectively. The implemented event-driven algorithm performed with a reliability of 1, ensuring inference only upon voluntary gesture grasp. The study reports that hybrid biosignals outperformed single-modal EEG in gesture classification offline and on-device, and single-modal EMG in the case of EMG electrode shift.
In addition, this article demonstrates an end-to-end approach that deploys a DNN decoder to an edge device for neuro-inspired control of the dexterous hand without an Internet-of-Things (IoT) connection. The data and code are available at the following repository: https://github.com/HumanMachineInterface/Gest-Infer.
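The abstract mentions two techniques central to the decoder: an 8-bit fixed-point quantization-aware method and event-driven triggering of inference on voluntary activity. As an illustration only (the paper's exact fixed-point format, envelope computation, threshold rule, and the `should_infer` helper below are not specified in the abstract and are assumptions), a minimal Python sketch of both ideas:

```python
import numpy as np

def quantize_q0_7(x):
    """Map float values to 8-bit fixed point (1 sign bit, 7 fractional bits)."""
    return np.clip(np.round(x * 128.0), -128, 127).astype(np.int8)

def dequantize_q0_7(q):
    """Map 8-bit fixed-point values back to float32."""
    return q.astype(np.float32) / 128.0

def fake_quantize(x):
    """Quantize-dequantize round trip, as used in quantization-aware training,
    so the network experiences 8-bit precision during the forward pass."""
    return dequantize_q0_7(quantize_q0_7(x))

def should_infer(emg_window, threshold=0.1):
    """Event-driven gate: run the DNN decoder only when the RMS envelope of
    the EMG window suggests a voluntary contraction (assumed trigger rule)."""
    envelope = np.sqrt(np.mean(np.square(emg_window)))
    return bool(envelope > threshold)
```

In an event-driven deployment of this kind, a gate like `should_infer` would keep the FPGA decoder idle until voluntary muscle activity is detected, rather than classifying continuously.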

