IEEE Trans Biomed Eng. 2019 Feb 26. doi: 10.1109/TBME.2019.2901882. [Epub ahead of print]

Deep Learning Movement Intent Decoders Trained with Dataset Aggregation for Prosthetic Limb Control.

Abstract

SIGNIFICANCE:

The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss.

OBJECTIVE:

This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals.

METHODS:

The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training data set is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods were developed: polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. The performance of the four decoding methods was evaluated using EMG data sets recorded from two human volunteers with transradial amputation. Both short-term analyses, in which the training and cross-validation data came from the same data set, and long-term analyses, in which training and testing were done on different data sets, were performed.
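The core of the DAgger idea described above is that the decoder's own estimates are fed back to generate additional training examples, so the decoder learns to recover from the state transitions it actually induces at run time. A minimal sketch of this style of training, assuming the decoder input at each time step is the EMG feature vector plus the previous movement estimate (the function names, the least-squares stand-in decoder, and the feedback scheme are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def dagger_train(fit, predict, emg, intent, n_iters=3):
    """DAgger-style decoder training sketch.

    Iteration 1 uses the ground-truth previous intent as the state input
    (teacher forcing); later iterations replace it with the decoder's own
    previous estimate, and the resulting input/label pairs are aggregated
    into the training set.
    """
    prev = np.vstack([np.zeros_like(intent[:1]), intent[:-1]])
    X = np.hstack([emg, prev])        # EMG features + true previous intent
    X_agg, y_agg = X, intent
    model = fit(X_agg, y_agg)
    for _ in range(n_iters - 1):
        est = predict(model, X)       # decoder's own estimates
        prev_est = np.vstack([np.zeros_like(est[:1]), est[:-1]])
        X_new = np.hstack([emg, prev_est])    # decoder-induced states
        X_agg = np.vstack([X_agg, X_new])     # aggregate the data sets
        y_agg = np.vstack([y_agg, intent])    # labels stay ground truth
        model = fit(X_agg, y_agg)
    return model

# Minimal demo decoder: ordinary least squares standing in for the
# KF/MLP/CNN/LSTM decoders compared in the paper.
def ols_fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ols_predict(W, X):
    return X @ W
```

In practice `fit`/`predict` would wrap one of the four decoder types; the aggregation loop is the same regardless of the underlying model.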

RESULTS:

Short-term analyses of the decoders demonstrated that the CNN and MLP decoders performed significantly better than the KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analysis indicated that the CNN, MLP, and LSTM decoders performed significantly better than the KF-based decoder in most analyzed cases of temporal separation (0 to 150 days) between the acquisition of the training and testing data sets.
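The abstract does not spell out how the mean-square decoding error is normalized; one common definition divides the MSE by the variance of the target trajectory, so 0 is a perfect decode and 1 is no better than always predicting the target's mean. A sketch under that assumption:

```python
import numpy as np

def nmse(estimate, target):
    # Normalized mean-square decoding error: raw MSE scaled by the
    # variance of the target trajectory. This normalization is an
    # assumption, not necessarily the paper's exact metric.
    estimate = np.asarray(estimate, dtype=float)
    target = np.asarray(target, dtype=float)
    return np.mean((estimate - target) ** 2) / np.var(target)
```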

CONCLUSION:

The short-term and long-term performance of the MLP- and CNN-based decoders trained with DAgger demonstrates their potential to provide more accurate and naturalistic control of prosthetic hands than alternative approaches.

PMID:
30835207
DOI:
10.1109/TBME.2019.2901882
