Article

Deep Learning Movement Intent Decoders Trained With Dataset Aggregation for Prosthetic Limb Control

Journal

IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
Volume 66, Issue 11, Pages 3192-3203

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TBME.2019.2901882

Keywords

Biomedical signal processing; deep learning; Kalman filter; learning systems; machine learning; motor intent decoder; neural engineering; neural network; reinforcement learning

Funding

  1. National Science Foundation [1533649]
  2. Hand Proprioception and Touch Interfaces (HAPTIX) program [N66001-15-C-4017]
  3. Directorate for Social, Behavioral & Economic Sciences
  4. Division of Behavioral and Cognitive Sciences [1533649] Funding Source: National Science Foundation

Abstract

Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural-network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss.

Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals.

Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training dataset is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods, namely polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks, were developed. The performance of the four decoding methods was evaluated using EMG datasets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same dataset, and long-term analyses, in which the training and testing data came from different datasets, were performed.

Results: Short-term analyses demonstrated that the CNN and MLP decoders performed significantly better than the KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analyses indicated that the CNN, MLP, and LSTM decoders performed significantly better than a KF-based decoder in most of the analyzed cases of temporal separation (0-150 days) between the acquisition of the training and testing datasets.

Conclusion: The short-term and long-term performance of the MLP- and CNN-based decoders trained with DAgger demonstrates their potential to provide more accurate and naturalistic control of prosthetic hands than alternative approaches.
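The DAgger training loop summarized in the Methods, in which the dataset is augmented each iteration with states visited by the current decoder paired with the expert's labels, can be sketched roughly as follows. This is a minimal illustration assuming a decoder whose input concatenates an EMG feature window with its own previous estimate; the `fit`/`predict` callables, array shapes, and feedback scheme are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def dagger_train(fit, predict, emg, intent, n_iters=3):
    """Hedged sketch of DAgger for a movement-intent decoder.

    emg    : (T, n_ch) EMG feature windows
    intent : (T, n_dof) intended kinematics (the "expert" labels)
    fit    : callable (X, y) -> trained model
    predict: callable (model, x) -> (n_dof,) estimate for one sample
    """
    T, n_dof = intent.shape
    # Iteration 0: train on expert state transitions (previous *true*
    # kinematics fed back as part of the decoder input).
    prev = np.vstack([np.zeros((1, n_dof)), intent[:-1]])
    data_X = np.hstack([emg, prev])
    data_y = intent
    model = fit(data_X, data_y)
    for _ in range(n_iters):
        # Roll out the current decoder so its own (imperfect) estimates
        # define the state transitions the next decoder trains under.
        est = np.zeros((T, n_dof))
        state = np.zeros(n_dof)
        for t in range(T):
            state = predict(model, np.concatenate([emg[t], state]))
            est[t] = state
        prev = np.vstack([np.zeros((1, n_dof)), est[:-1]])
        # Aggregate: decoder-visited states, expert-provided targets.
        data_X = np.vstack([data_X, np.hstack([emg, prev])])
        data_y = np.vstack([data_y, intent])
        model = fit(data_X, data_y)
    return model
```

Any regressor can be plugged in; for example, a linear least-squares decoder via `fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]` and `predict = lambda W, x: x @ W` stands in for the MLP/CNN/LSTM decoders compared in the paper.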

