Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks

Abstract

Background

To assist people with disabilities, exoskeletons must be provided with human-robot interfaces and smart algorithms capable of identifying the user’s movement intentions. Surface electromyographic (sEMG) signals could be suitable for this purpose, but their applicability in shared control schemes for real-time operation of assistive devices in daily-life activities is limited by high inter-subject variability, which requires custom calibrations and training. Here, we developed a machine-learning-based algorithm for detecting the user’s motion intention from electromyographic signals, and we discuss its applicability for controlling an upper-limb exoskeleton for people with severe arm disabilities.

Methods

Ten healthy participants, sitting in front of a screen while wearing the exoskeleton, were asked to perform several reaching movements toward three LEDs presented in random order. EMG signals from seven upper-limb muscles were recorded. Data were analyzed offline and used to develop an algorithm that identifies movement onset for two different events: moving from a resting position toward the LED (Go-forward) and returning to the resting position (Go-backward). A set of subject-independent time-domain EMG features was selected according to information theory, and their probability distributions during the rest and movement phases were modeled by means of a two-component Gaussian Mixture Model (GMM). Two types of movement-onset detectors were tested: the first based on features extracted from single muscles, the second on features extracted from multiple muscles. Their performance in terms of sensitivity, specificity and latency was assessed for the two events with a leave-one-subject-out test method.

Results

Movement onset was detected with a maximum sensitivity of 89.3% for Go-forward events and 60.9% for Go-backward events. The best specificities were 96.2% and 94.3%, respectively. For both events, the algorithm detected the onset before the actual movement, and its computational load was compatible with real-time applications.

Conclusions

The detection performance and low computational load make the proposed algorithm promising for the control of upper-limb exoskeletons in real-time applications. Its fast initial calibration also makes it suitable for helping people with severe arm disabilities perform assisted functional tasks.

Background

Exoskeletons are wearable robots characterized by close physical and cognitive interaction with their human users. Over recent years, several exoskeletons have been developed for different purposes, such as augmenting human strength [1], rehabilitating neurologically impaired individuals [2], or assisting people affected by neuro-musculoskeletal disorders in activities of daily living [3]. For all these applications, the design of cognitive Human-Robot Interfaces (cHRIs) is paramount [4]; indeed, understanding the user’s intention makes it possible to control the device with the final goal of facilitating the execution of the intended movement. The flow of information from the human user to the robot control unit is particularly crucial when exoskeletons are used to assist people with compromised movement capabilities (e.g. post-stroke or spinal-cord-injured people) by amplifying their movements with the goal of restoring function.

In recent years, different approaches have been pursued to design cHRIs, based on invasive and non-invasive techniques. Implantable electrodes, placed directly into the brain or other electrically excitable tissues, record signals from the peripheral or central nervous system or from muscles with high resolution and precision [5]. Non-invasive approaches exploit different bio-signals: some examples are electroencephalography (EEG) [6], electrooculography (EOG) [7], and brain-machine interfaces (BMIs) combining the two [8–10]. In addition, a well-consolidated non-invasive approach is based on surface electromyography (sEMG) [11], which has been successfully used for controlling robotic prostheses and exoskeletons thanks to its inherent intuitiveness and effectiveness [12–14]. Compared to EEG signals, sEMG signals are easier to acquire and process, and they provide effective information on the movement that the person is executing or about to execute. Despite these advantages, the use of surface EMG signals still has several drawbacks, mainly related to their time-varying nature and high inter-subject variability, due to differences in muscle activity levels and activation patterns [11, 15], which require custom calibrations and specific training for each user [16]. For these reasons, notwithstanding the intuitiveness of EMG interfaces, their efficacy and usability in shared human-machine control schemes for upper-limb exoskeletons are still under discussion. Furthermore, the need for significant signal processing can limit the use of EMG signals in online applications, for which fast detection is paramount.

In this scenario, machine learning methods have been employed to recognize EMG onset in real time, using classifiers such as Support Vector Machines, Linear Discriminant Analysis, Hidden Markov Models, Neural Networks and Fuzzy Logic [15–17]. In this process, a set of features is first selected in the time, frequency, or time-frequency domain [18]. Time-domain features capture information associated with signal amplitude in non-fatiguing contractions; when fatigue effects are predominant, frequency-domain features are more representative; finally, time-frequency-domain features better capture the transient effects of muscular contractions. Before feeding the features into the classifier, dimensionality reduction is usually performed to increase classification performance while reducing complexity [19]. The most common reduction strategies are: (i) feature projection, which maps the feature set into a new set of reduced dimensionality (e.g., linear mapping through Principal Component Analysis); and (ii) feature selection, in which a subset of features is selected according to specific criteria aimed at optimizing a chosen objective function. All of the above classification approaches ensure good performance under controlled laboratory conditions. Nevertheless, to be used effectively in real-life scenarios, smart algorithms must be developed that can adapt to changes in environmental conditions and to intra-subject variability (e.g., changes in the background noise level of the EMG signals), as well as to inter-subject variability [20].
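As a concrete illustration of the feature-extraction and dimensionality-reduction steps described above, the following Python sketch computes a few widely used time-domain EMG features over fixed-length windows and projects them onto a lower-dimensional space with PCA. The specific features (mean absolute value, waveform length, zero crossings, slope sign changes), the window length, and the number of retained components are illustrative assumptions; they do not reproduce the information-theoretic feature selection used in this study.

```python
import numpy as np
from sklearn.decomposition import PCA

def time_domain_features(window):
    """Common time-domain EMG features computed over one analysis window.

    `window` is a 1-D array of raw EMG samples from a single channel.
    These are standard amplitude-related descriptors, shown here only as
    examples of the feature family discussed in the text.
    """
    mav = np.mean(np.abs(window))                          # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                   # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)              # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)   # slope sign changes
    return np.array([mav, wl, zc, ssc])

# Dummy data standing in for 7 EMG channels; real recordings would be used here.
rng = np.random.default_rng(0)
emg = rng.standard_normal((7, 2000))
windows = emg.reshape(7, -1, 200)                          # 200-sample windows

# Feature matrix: one row per window, concatenating the features of all channels.
X = np.array([[time_domain_features(w) for w in ch] for ch in windows])
X = X.transpose(1, 0, 2).reshape(X.shape[1], -1)           # (n_windows, 7 * 4)

# Feature-projection step (strategy i): linear mapping through PCA.
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)
```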

In this paper, we exploited a cHRI combining sEMG and an upper-limb robotic exoskeleton to rapidly detect the user’s motion intention. We implemented an unsupervised machine-learning algorithm offline, using a set of subject-independent time-domain EMG features selected according to information theory. The probability distributions of the rest and movement phases of this feature set were modelled by means of a two-component Gaussian Mixture Model (GMM). The algorithm simulates an online application and implements a sequential method to adapt the GMM parameters during the testing phase, in order to cope with changes in background noise levels during the experiment and with fluctuations in EMG peak amplitudes due to muscle adaptation or fatigue. Features were extracted from two different signal sources, defining two types of onset detectors (based on single muscles or on multiple muscles), which were tested offline; their performance in terms of sensitivity (true positive rate), specificity (true negative rate) and latency (delay in onset detection) was assessed for two different events, i.e. two transitions from rest to movement starting from different initial conditions. The two events were selected to replicate a possible application scenario of the proposed system. Based on the results obtained, we discuss the applicability of the algorithm to the control of an upper-limb exoskeleton used as an assistive device for people with severe arm disabilities.
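To make the modelling step concrete, the sketch below fits a two-component GMM to a scalar EMG feature, treats the higher-mean component as "movement", and flags an onset when the posterior probability of that component exceeds a threshold; a simple exponential update of the rest-component mean stands in for the sequential adaptation to background-noise drift. The synthetic data, the threshold value, and the adaptation rule are assumptions made for illustration, not the exact implementation evaluated in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical calibration data: one scalar feature value per analysis window
# (e.g., a smoothed amplitude descriptor from one muscle). Values are synthetic.
rng = np.random.default_rng(1)
rest = rng.normal(0.05, 0.01, 400)        # low-amplitude "rest" windows
movement = rng.normal(0.40, 0.08, 100)    # higher-amplitude "movement" windows
calibration = np.concatenate([rest, movement]).reshape(-1, 1)

# Two-component GMM: one component captures the rest distribution of the
# feature, the other the movement distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(calibration)
movement_comp = int(np.argmax(gmm.means_.ravel()))   # component with higher mean
rest_comp = 1 - movement_comp

def detect_onset(feature_value, threshold=0.9):
    """Flag movement onset when the posterior probability of the movement
    component exceeds a threshold (the 0.9 value is an assumption)."""
    posterior = gmm.predict_proba(np.array([[feature_value]]))[0, movement_comp]
    return posterior > threshold

def adapt_rest_mean(new_rest_value, alpha=0.01):
    """Sequential-adaptation sketch: slowly shift the rest-component mean toward
    newly observed rest-phase samples to track background-noise drift."""
    gmm.means_[rest_comp, 0] = (1 - alpha) * gmm.means_[rest_comp, 0] + alpha * new_rest_value

print(detect_onset(0.05), detect_onset(0.35))         # expected: False, True
```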

Materials and methods

Experimental setup

The experimental setup includes: (i) an upper-limb powered exoskeleton (NESM), (ii) a visual interface, and (iii) a commercial EMG recording system (TeleMyo 2400R, Noraxon Inc., AZ, US).

NESM upper-limb exoskeleton

NESM (Fig. 1a) is a shoulder-elbow powered exoskeleton designed for the mobilization of the right upper limb [21, 22], developed at The BioRobotics Institute of Scuola Superiore Sant’Anna (Italy). The exoskeleton mechanical structure hangs from a standing structure and comprises four active and eight passive degrees of freedom (DOFs), along with different mechanisms for size regulation that improve the comfort and wearability of the device.
Fig. 1 a Experimental setup, comprising NESM, EMG electrodes and the visual interface; b location of the electrodes for EMG acquisition; c timing and sequence of actions performed by the user during a single trial

[…]
