[ARTICLE] Lw-CNN-Based Myoelectric Signal Recognition and Real-Time Control of Robotic Arm for Upper-Limb Rehabilitation – Full Text
Posted by Kostas Pantremenos in Paretic Hand, Rehabilitation robotics on January 4, 2021
Abstract
Deep-learning models can realize feature extraction and advanced abstraction of raw myoelectric signals without manual feature selection. In this study, raw surface myoelectric signals are processed with a deep model to investigate the feasibility of recognizing upper-limb motion intents and real-time control of auxiliary equipment for upper-limb rehabilitation training. Surface myoelectric signals were collected for six upper-limb motions of eight subjects. A light-weight convolutional neural network (Lw-CNN) and a support vector machine (SVM) model were designed for myoelectric signal pattern recognition, and the offline and online performance of the two models were compared. Across all subjects, the average offline accuracy was (90 ± 5)% for the Lw-CNN and (82.5 ± 3.5)% for the SVM, exceeding the online accuracies of (84 ± 6)% and (79 ± 4)%, respectively. The robotic arm control accuracy was (88.5 ± 5.5)%. Significance analysis showed no significant difference (P = 0.056) among real-time control, offline testing, and online testing. The Lw-CNN model performs well in recognizing upper-limb motion intents and can realize real-time control of a commercial robotic arm.
1. Introduction
Upper-limb rehabilitation robots are an innovative approach to rehabilitation training: they assist movement and promote recovery of upper-limb motor function in stroke patients without over-burdening medical personnel [1–3]. Many upper-limb rehabilitation robots have been developed in recent years. MIT-Manus [4], for example, can help stroke patients regain steady motion capability. MIME [5], T-WREX [6], and NEREBOT [7] can train upper-limb rehabilitation motions over three degrees of freedom (3DOF). Previous researchers [8] proposed an exoskeleton robot also capable of 3DOF.
Training that is “active” rather than “passive” (i.e., that centers on the patients’ intended motions throughout a session rather than forcing them into a set regimen that does not individually vary) can significantly enhance the effects of training and improve patients’ rehabilitation experiences [9–11]. Existing man-machine interactive interfaces for body motion intent recognition are based on three signal modes: mechanical sensor signals [8], surface myoelectric (sEMG) signals [12], and biological EEG signals [13–15]. Mechanical sensors are accurate and reliable, but they only reflect lagging motion information and thus are not conducive to real-time control. Man-machine interfaces based on sEMG signal processing have seen rapid and extensive advancement in recent years. Both sEMG signals and EEG signals reflect human motion intents. Non-invasive surface EMG technology effectively records the electric activities of muscles [16–18]. Unlike the “on-off” [19] or proportional control [16] strategies of traditional rehabilitation robots, patients’ sEMG signals are collected and their upper-limb motion intents are acquired via pattern recognition. This allows for natural and flexible interactions between the patient and the rehabilitation robot.
The support vector machine (SVM) [19], linear discriminant analysis (LDA) [20], and Gaussian mixture models (GMM) [18] are widely applied for classification of robot-acquired signals. The performance of these technologies depends heavily on the feature selection of signals. In most cases, features are selected manually by researchers based on professional experience, a process known as “feature engineering” [21–23]. Deep learning, a machine-learning approach that has become a highly popular research topic in recent years, does not require manual feature extraction; feature extraction and advanced abstraction can instead be conducted automatically on raw signals [24, 25].
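The contrast drawn above, hand-crafted features fed to a shallow classifier versus raw signals fed to a deep model, can be illustrated with the classic Hudgins-style time-domain sEMG features (mean absolute value, zero crossings, waveform length) that a traditional SVM/LDA pipeline would consume. This is a minimal sketch; the window length and zero-crossing threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def time_domain_features(window, zc_thresh=0.01):
    """Manually engineered sEMG features for a traditional classifier:
    mean absolute value (MAV), zero crossings (ZC), waveform length (WL)."""
    mav = np.mean(np.abs(window))                 # signal amplitude level
    diffs = np.diff(window)
    zc = np.sum((window[:-1] * window[1:] < 0) &  # sign changes ...
                (np.abs(diffs) > zc_thresh))      # ... above a noise threshold
    wl = np.sum(np.abs(diffs))                    # cumulative waveform length
    return np.array([mav, float(zc), wl])

# Synthetic 200-sample window standing in for one channel of raw sEMG
rng = np.random.default_rng(0)
window = rng.standard_normal(200) * 0.1
feats = time_domain_features(window)
print(feats.shape)  # (3,)
```

A deep model skips this step entirely: the raw window itself is the input, and the convolutional layers learn their own feature maps.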
High-performance deep-learning methods such as the convolutional neural network (CNN) have been tested for various gesture recognition applications [26–28]. Existing sEMG signal collection technologies include sparse multichannel sEMG and high-density sEMG (HD-sEMG). HD-sEMG records both temporal and spatial changes in muscle activities via electrode arrays [29]; it can classify as many as eight distinct hand motions [30]. However, increasing the number of collection channels dramatically increases the computational burden, producing a system too complex for real-time upper-limb rehabilitation use. Sparse multichannel sEMG can recognize upper-limb motion intents while consuming fewer computational resources.
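As a rough illustration of how a light-weight CNN can consume a sparse multichannel window directly, here is a toy NumPy forward pass for a 3-channel, 6-class network: 1-D convolution, ReLU, downsampling, global average pooling, and a softmax head. The layer sizes and random weights are arbitrary assumptions for the sketch, not the authors' Lw-CNN architecture.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (C_in, T), w is (C_out, C_in, K), b is (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def lw_cnn_forward(x, p):
    h = np.maximum(conv1d(x, p["w1"], p["b1"]), 0)  # conv + ReLU
    h = h[:, ::4]                                   # crude stride-4 downsampling
    h = np.maximum(conv1d(h, p["w2"], p["b2"]), 0)  # second conv + ReLU
    g = h.mean(axis=1)                              # global average pooling
    logits = p["wf"] @ g + p["bf"]                  # dense layer: 6 motion classes
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # softmax probabilities

rng = np.random.default_rng(1)
params = {
    "w1": rng.standard_normal((8, 3, 9)) * 0.1, "b1": np.zeros(8),
    "w2": rng.standard_normal((16, 8, 5)) * 0.1, "b2": np.zeros(16),
    "wf": rng.standard_normal((6, 16)) * 0.1, "bf": np.zeros(6),
}
x = rng.standard_normal((3, 200))   # one 3-channel raw sEMG window
probs = lw_cnn_forward(x, params)
print(probs.shape)                  # (6,) -- one probability per motion class
```

The point of the sketch is the input: the raw window goes straight into the network, with no hand-crafted feature stage in between.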
Many previous researchers have explored gesture recognition with deep-learning methods for various dexterous hand and artificial limb applications, but there have been relatively few studies applying deep learning to upper-limb rehabilitation robotics. Previous researchers [31, 32] used the traditional SVM algorithm for upper-limb motion intent recognition and applied it in upper-limb rehabilitation robots. Others [33] used a back-propagation (BP) neural network as a classification model, but this required manual selection of energy and maximum values as inputs.
Many deep-learning models with different architectures have been proposed for sEMG signal recognition [34–37]. Training and verification are generally carried out on public or self-built datasets in an offline manner. Accuracy usually differs between online and offline recognition [38]; online recognition accuracy is lower than offline [32, 38–40]. High online recognition accuracy and good real-time performance are of great significance for practical application in rehabilitation robots or artificial limbs. Researchers [41] have used Gaussian Naive Bayes (GNB) and SVM for myoelectric signal recognition; their models were verified both online and offline to realize the real-time control of a hand exoskeleton. Others [42] proposed an upper-limb prosthetic real-time control method based on the motor unit drive.
This study was conducted to test the feasibility of a multiple-DOF, real-time robotic arm using myoelectric pattern recognition for upper-limb rehabilitation training. A three-channel sparse electrode array was used to collect raw sEMG signals from the deltoid, biceps brachii, and triceps brachii of the upper limb. The signals were then input into an Lw-CNN model for body motion intent recognition. Six rehabilitation motions were designed over the shoulder and elbow joints, and a dataset was established based on the motions of eight volunteers. The offline-trained model was deployed and verified through online recognition. A commercial robotic arm was also tested to preliminarily validate the real-time control performance of the proposed deep-learning model. The entire control course took 269 ms, satisfying the requirement for real-time control within 300 ms [43–46].
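The real-time requirement (a full control cycle within 300 ms) can be sketched as an acquire-classify-actuate loop with a latency check. All three stages here are placeholders standing in for the real hardware and model, not the study's pipeline.

```python
import time

BUDGET_MS = 300  # real-time requirement cited in the text [43-46]

def control_cycle(acquire, classify, actuate):
    """One pipeline pass; returns the recognized motion and wall-clock latency."""
    t0 = time.perf_counter()
    window = acquire()                 # grab one multichannel sEMG window
    motion = classify(window)          # recognize the motion intent
    actuate(motion)                    # send the matching command to the arm
    latency_ms = (time.perf_counter() - t0) * 1000.0
    return motion, latency_ms

# Placeholder stages: 3 channels x 200 samples of zeros, a fixed recognizer,
# and a no-op actuator. A real deployment replaces all three.
motion, latency = control_cycle(
    acquire=lambda: [0.0] * 600,
    classify=lambda w: "EF",
    actuate=lambda m: None,
)
print(motion, f"{latency:.1f} ms")
```

In the reported system the full cycle measured 269 ms, leaving roughly 30 ms of headroom against the 300 ms budget.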
2. Materials and Methods
2.1. Subjects
Eight subjects (denoted S1–S8) participated in this experiment (Table 1). All the subjects were students of Hebei North University at the time of their participation. All completed a physical examination at the First Affiliated Hospital of Hebei North University and were issued health certificates before joining the experiment. They also signed a consent form to publish details and/or images.
2.2. Experimental Protocols
The upper-limb rehabilitation robot investigated in this study was designed to train certain motions in stroke patients’ elbow joints and shoulder joints. As shown in Figure 1, six motion modes including elbow flexion (EF), elbow extension (EE), shoulder flexion (SF), shoulder extension (SE), elbow & shoulder flexion (ESF), and elbow & shoulder extension (ESE) were designed accordingly.
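The six motion classes listed above can be captured as a simple label mapping, which is the form a classifier's output head would index into (an illustrative structure, not code from the paper).

```python
# Motion codes and their descriptions, as defined in the protocol
MOTIONS = {
    "EF":  "elbow flexion",
    "EE":  "elbow extension",
    "SF":  "shoulder flexion",
    "SE":  "shoulder extension",
    "ESF": "elbow & shoulder flexion",
    "ESE": "elbow & shoulder extension",
}

# A classifier emitting class indices 0-5 maps them back to codes like so
CLASS_LABELS = list(MOTIONS)
print(len(CLASS_LABELS))  # 6
```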
During interaction with the robot, the subject sat on a chair close to the table in the initial training posture, “EE”: the palm making a fist facing upwards, the forearm perpendicular to the upper arm, the upper arm forming an angle of about 20° with the body, and the upper arm kept still. The motion of the forearm toward the body side from EE is defined as “EF.” From the initial state, the shoulder joint was controlled to lift the upper arm for “SE”; for the “SF” motion, the upper arm fell from SE back to the initial posture. The “ESE” motion was defined by lifting the upper arm and straightening the forearm from the initial state. EF and EE concerned only the single degree of freedom of the elbow joint, and SF and SE only that of the shoulder joint. Subjects actively exerted force in performing ESF and ESE to control the shoulder and elbow joints simultaneously; these two motions were compounds of the four motions above, included abduction of the upper arm, and involved the combined action of the biceps brachii, triceps brachii, and deltoid. These six actions were treated as separate action types for identification.[…]

