Article

EMG-Based 3D Hand Motor Intention Prediction for Information Transfer from Human to Robot

Journal

SENSORS
Volume 21, Issue 4

Publisher

MDPI
DOI: 10.3390/s21041316

Keywords

Electromyography (EMG); 3-D movement; continuous motion; motor intention; hand motion

Funding

  1. National Natural Science Foundation of China [51975052]
  2. Beijing Natural Science Foundation [4162055]


This study successfully predicted the 3-D hand position of complex movements using an RFNN model, achieving an average performance of CC = 0.85 and NRMSE = 0.105. Although predictions were slightly better for tasks involving quick movements, the difference in accuracy between quick and slow motions was not statistically significant.
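As a hedged illustration of the two reported metrics, the sketch below evaluates a predicted 3-D hand trajectory against a measured one, per axis. The array names, the synthetic data, and the normalization of the RMSE by the range of the measured signal are assumptions for illustration only; the paper's exact conventions are not stated in the abstract.

# Minimal sketch (not from the paper): computing the Pearson correlation
# coefficient (CC) and normalized root mean square error (NRMSE) per axis
# for a predicted vs. measured 3-D hand trajectory.
import numpy as np

def pearson_cc(y_true, y_pred):
    # Pearson correlation coefficient between two 1-D signals.
    y_true = y_true - y_true.mean()
    y_pred = y_pred - y_pred.mean()
    return float((y_true @ y_pred) / (np.linalg.norm(y_true) * np.linalg.norm(y_pred)))

def nrmse(y_true, y_pred):
    # RMSE normalized by the range of the measured signal (assumed convention).
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()))

# Synthetic example: T samples of measured and predicted (x, y, z) hand positions.
T = 1000
measured = np.cumsum(np.random.randn(T, 3) * 0.01, axis=0)   # stand-in for motion-capture data
predicted = measured + np.random.randn(T, 3) * 0.02          # stand-in for model output

for axis, name in enumerate("xyz"):
    cc = pearson_cc(measured[:, axis], predicted[:, axis])
    err = nrmse(measured[:, axis], predicted[:, axis])
    print(f"{name}-axis: CC = {cc:.3f}, NRMSE = {err:.3f}")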
(1) Background: Three-dimensional (3-D) hand position is one of the kinematic parameters that can be inferred from electromyography (EMG) signals. The inferred parameter is used as a communication channel in human-robot collaboration applications. Although its application in rehabilitation and assistive technologies is widely studied, few papers address its application involving healthy subjects, such as in intelligent manufacturing and skill transfer. In this regard, the prediction of 3-D hand position from the EMG signal alone has not been addressed for tasks with complex hand trajectories when the degrees of freedom (DOF) are not explicitly considered.

(2) Objective: The primary aim of this study is to propose a model that predicts human motor intention and can serve as a channel of information from human to robot. Therefore, the prediction of 3-D hand position directly from the EMG signal for complex trajectories of hand movement, without the direct consideration of joint movements, is studied. In addition, the effects of slow and fast motions on the accuracy of the prediction model are analyzed.

(3) Methods: This study used EMG signals collected from the upper limbs of healthy subjects, together with the hand position signal recorded while the subjects traced complex trajectories. We considered and analyzed two types of tasks with complex trajectories, each performed with quick and slow motions. A recurrent fuzzy neural network (RFNN) model was constructed to predict the 3-D position of the hand from the features of the EMG signals alone. We used the Pearson correlation coefficient (CC) and normalized root mean square error (NRMSE) as performance metrics.

(4) Results: We found that the 3-D hand position of complex movements can be predicted with a mean performance of CC = 0.85 and NRMSE = 0.105. The 3-D hand position can be predicted well within a future time of 250 ms from the EMG signal alone. Although tasks performed with quick motion had better prediction performance, the difference in prediction accuracy between quick and slow motion was not statistically significant. Concerning the prediction model, we found that the RFNN performs well in decoding the time-varying system.

(5) Conclusions: In this paper, irrespective of the speed of motion, the 3-D hand position is predicted from the EMG signal alone. The proposed approach can be used in human-robot collaboration applications to enhance natural interaction between a human and a robot.
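The abstract does not give implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: overlapping EMG windows are reduced to time-domain features, and each window is paired with the hand position 250 ms ahead, matching the prediction horizon reported above. The sampling rate, window length, step size, feature set (RMS and mean absolute value), and all function names are illustrative assumptions, and the regression model itself (the paper's RFNN) is not implemented here.

# Minimal sketch (not the authors' code): windowed EMG feature extraction
# paired with the 3-D hand position 250 ms in the future.
import numpy as np

FS_EMG = 1000                    # assumed EMG sampling rate (Hz)
WIN = 200                        # assumed window length in samples (200 ms)
STEP = 50                        # assumed window step in samples (50 ms)
HORIZON = int(0.25 * FS_EMG)     # 250 ms prediction horizon, in samples

def emg_features(window):
    # Common time-domain features per channel: RMS and mean absolute value.
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

def build_dataset(emg, hand_pos):
    # emg: (N, channels); hand_pos: (N, 3) hand position resampled to the EMG time base.
    X, Y = [], []
    for start in range(0, len(emg) - WIN - HORIZON, STEP):
        end = start + WIN
        X.append(emg_features(emg[start:end]))
        Y.append(hand_pos[end + HORIZON - 1])   # target: 3-D position 250 ms after the window
    return np.asarray(X), np.asarray(Y)

# Synthetic stand-in data: 60 s of 8-channel EMG and a 3-D hand trajectory.
emg = np.random.randn(60 * FS_EMG, 8)
hand_pos = np.cumsum(np.random.randn(60 * FS_EMG, 3) * 0.001, axis=0)
X, Y = build_dataset(emg, hand_pos)
print(X.shape, Y.shape)   # feature matrix and future 3-D positions for a regression model (e.g., an RFNN)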


