Brain–computer interface robotics for hand rehabilitation after stroke: a systematic review

Abstract

Background

Hand rehabilitation is core to helping stroke survivors regain activities of daily living. Recent studies have suggested that the use of electroencephalography-based brain-computer interfaces (BCI) can promote this process. Here, we report the first systematic examination of the literature on the use of BCI-robot systems for the rehabilitation of fine motor skills associated with hand movement and profile these systems from a technical and clinical perspective.

Methods

A search of the Ovid MEDLINE, Embase, PEDro, PsycINFO, IEEE Xplore and Cochrane Library databases was performed for articles published between January 2010 and October 2019. The selection criteria included BCI-hand robotic systems for rehabilitation at different stages of development, involving tests on healthy participants or people who have had a stroke. Extracted data fields included study design, participant characteristics, technical specifications of the system, and clinical outcome measures.

Results

Thirty studies were identified as eligible for qualitative review; among these, 11 studies involved testing a BCI-hand robot on chronic and subacute stroke patients. Statistically significant improvements in motor assessment scores relative to controls were observed for three BCI-hand robot interventions. The degree of robot control for the majority of studies was limited to triggering the device to perform grasping or pinching movements using motor imagery. Most studies employed a combination of kinaesthetic and visual feedback, delivered via the robotic device and a display screen respectively, to reinforce motor imagery.

Conclusion

Nineteen of the 30 studies on BCI-robotic systems for hand rehabilitation reported systems at the prototype or pre-clinical stage of development. We identified large heterogeneity in reporting and emphasise the need to develop a standard protocol for assessing technical and clinical outcomes so that the necessary evidence base on efficacy and efficiency can be developed.

Background

There is growing interest in the use of robotics within the field of rehabilitation. This interest is driven by the increasing number of people requiring rehabilitation following problems such as stroke (with an ageing population), and the global phenomenon of insufficient numbers of therapists able to deliver rehabilitation exercises to patients [1, 2]. Robotic systems allow a therapist to prescribe exercises that can then be guided by the robot rather than the therapist. An important principle within the use of such systems is that the robots assist the patient to actively undertake a prescribed movement rather than the patient’s limb being moved passively. This means that it is necessary for the system to sense when the patient is trying to generate the required movement (given that, by definition, the patient normally struggles with the action). One potential solution to this issue is to use force sensors that can detect when the patient is starting to generate the movement (at which point the robot’s motors can provide assistive forces). It is also possible to use measures of muscle activation (EMG) to detect the intent to move [3]. In the last two decades there has been a concerted effort by groups of clinicians, neuroscientists and engineers to integrate robotic systems with brain signals correlated with a patient trying to actively generate a movement, or imagine a motor action, to enhance the efficacy and effectiveness of stroke rehabilitation; these systems fall under the definition of brain–computer interfaces, or BCIs [4].

BCIs allow brain state-dependent control of robotic devices to aid stroke patients during upper limb therapy. While BCIs in their general form have been in development for almost 50 years [5] and have been theoretically possible since the discovery of the scalp-recorded human electroencephalogram (EEG) in the 1920s [6], their application to rehabilitation is more recent [7,8,9]. Graimann et al. [10] defined a BCI as an artificial system that provides direct communication between the brain and a device based on the user’s intent, bypassing the normal efferent pathways of the body’s peripheral nervous system. A BCI recognises user intent by measuring brain activity and translating it into executable commands usually performed by a computer, hence the term “brain–computer interface”.

Most robotic devices used in upper limb rehabilitation exist in the form of exoskeletons or end-effectors. Robotic exoskeletons (i.e., powered orthoses, braces) are wearable devices in which the actuators are biomechanically aligned with the wearer’s joints and linkages, allowing the additional torque to provide assistance, augmentation and even resistance during training [11]. In comparison, end-effector systems generate movement by applying forces to the most distal segment of the extremity via handles and attachments [11]. Rehabilitation robots are classified as Class II-B medical devices (i.e., a therapeutic device that administers the exchange of energy, mechanically, to a patient) and safety considerations are important during development [12, 13]. Most commercial robots focus on the arms and legs, each offering a unique therapy methodology. There is also a category of devices that target the hand and fingers. While often less studied than the proximal areas of the upper limb, hand and finger rehabilitation is a core component of regaining activities of daily living (ADL) [14]. Many ADLs require dexterous and fine motor movements (e.g. grasping and pinching) and there is evidence that even patients with minimal proximal shoulder and elbow control can regain some hand capacity long-term following stroke [15].

The strategy of BCI-robot systems (i.e. systems that integrate BCI and robots into one unified system) in rehabilitation is to recognise the patient’s intention to move or perform a task via a neural or physiological signal, and then use a robotic device to provide assistive forces in a manner that mimics the actions of a therapist during standard therapy sessions [16]. The resulting feedback is patient-driven and is designed to aid in closing the neural loop from intention to execution. This process is said to promote use-dependent neuroplasticity within intact brain regions and relies on the repeated experience of initiating and achieving a specified target [17, 18], making the active participation of the patient in performing the therapy exercises an integral part of the motor re-learning process [19, 20].
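
To make this closed loop concrete, the sketch below shows the basic structure of a patient-driven trigger: an EEG window is acquired, a decoder estimates the probability of grasp imagery, and the robot assists only when that probability exceeds a threshold. This is a minimal illustration; the function bodies, sampling rate and confidence threshold are placeholders and are not taken from any of the reviewed systems.

```python
import random
import time

SAMPLE_RATE_HZ = 250      # assumed EEG sampling rate
MI_THRESHOLD = 0.7        # assumed decoder confidence required to trigger assistance

def acquire_eeg_window(seconds):
    """Placeholder: a real system would stream samples from the EEG amplifier."""
    time.sleep(seconds)  # mimic the duration of real-time acquisition
    return [random.gauss(0.0, 1.0) for _ in range(int(seconds * SAMPLE_RATE_HZ))]

def classify_motor_imagery(window):
    """Placeholder: a trained decoder (e.g. CSP + LDA, sketched later in this
    section) would return the probability that the window contains grasp imagery."""
    return random.random()

def robot_assist_grasp():
    """Placeholder: command the hand exoskeleton/end-effector to assist a grasp."""
    print("assisting grasp")

def therapy_loop(duration_s=10.0, window_s=1.0):
    """Patient-driven loop: assistance is delivered only when motor intent is detected."""
    start = time.time()
    while time.time() - start < duration_s:
        window = acquire_eeg_window(window_s)
        if classify_motor_imagery(window) > MI_THRESHOLD:
            robot_assist_grasp()

if __name__ == "__main__":
    therapy_loop()
```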

The aforementioned scalp-recorded EEG signal is a commonly used instrument for data acquisition in BCI systems because it is non-invasive, easy to use and can detect relevant brain activity with high temporal resolution [21, 22]. In principle, the recognition of motor imagery (MI), the imagination of movement without execution, via EEG can allow the control of a device independent of muscle activity [10]. It has been shown that MI-based BCI can discriminate motor intent by detecting event-related spectral perturbations (ERSP) [23, 24] and/or event-related desynchronisation/synchronisation (ERD/ERS) patterns in the µ (9–11 Hz) and β (14–30 Hz) sensorimotor rhythms of EEG signals [24]. However, EEG also brings with it some challenges. These neural markers are often concealed by various artifacts and may be difficult to recognise from the raw EEG signal alone. Thus, signal processing (including feature extraction and classification) is a vital part of obtaining a good MI signal for robotic control.

A general pipeline for EEG data processing involves several steps. First, the data undergo a series of pre-processing routines (e.g., filtering and artifact removal) before feature extraction and classification for use as a control signal for the robotic hand. There are a variety of methods for removing artifacts from EEG, and the choice depends on the overall scope of the work [25]. For instance, Independent Component Analysis (ICA) and Canonical Correlation Analysis (CCA) can support real-time applications but depend on manual input. In contrast, regression and wavelet methods are automated but only support offline applications. There are also methods that are both automated and suited to real-time use, such as adaptive filtering or blind source separation (BSS)-based methods. Recently, the research community has been pushing towards real-time artifact rejection by reducing computational complexity, e.g. Enhanced Automatic Wavelet-ICA (EAWICA) [26] and the hybrid ICA-Wavelet transform technique (ICA-W) [27], or by developing new approaches such as adaptive de-noising frameworks [28] and Artifact Subspace Reconstruction (ASR) [29]. Feature extraction involves recognising useful information (e.g., spectral power, time epochs, spatial filtering) for better discriminability among mental states. For example, the common spatial patterns (CSP) algorithm is a type of spatial filter that maximises the variance of band-pass-filtered EEG from one class to discriminate it from another [30]. Finally, classification (which can range from simple linear algorithms such as Linear Discriminant Analysis (LDA) and the linear Support Vector Machine (L-SVM) up to more complex deep learning techniques such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) [31, 32]) involves the translation of these signals of intent into an action that provides the user feedback and closes the loop of the motor intent-to-action circuit.
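
As an illustration of the pipeline just described (band-pass filtering around the µ/β rhythms, CSP spatial filtering, then a linear classifier), the following minimal sketch uses MNE-Python and scikit-learn on synthetic epochs. The array shapes, sampling rate and 8–30 Hz band are assumptions made for the example; real use would substitute recorded, artifact-cleaned EEG epochs labelled by condition.

```python
import numpy as np
from mne.filter import filter_data                  # MNE-Python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

sfreq = 250.0                                        # assumed sampling rate (Hz)
n_epochs, n_channels, n_times = 80, 16, int(2 * sfreq)

# Placeholder data: random noise standing in for real MI/rest epochs.
rng = np.random.default_rng(0)
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)                     # 0 = rest, 1 = grasp motor imagery

# Band-pass 8-30 Hz to isolate the mu and beta sensorimotor rhythms.
X_filt = filter_data(X, sfreq=sfreq, l_freq=8.0, h_freq=30.0, verbose=False)

# CSP maximises the variance difference between classes; LDA then classifies
# the resulting log-variance features.
clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X_filt, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On this random data the accuracy will hover around chance (0.5); with genuine MI recordings the same CSP + LDA pipeline is a common baseline decoder whose output probability could drive the robotic trigger sketched earlier.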

The potential of MI-based BCIs has gained considerable attention because the neural activity involved in the control of the robotic device may be a key component in the rehabilitation itself. For example, MI of movement is thought to activate some of the neural networks involved in movement execution (ME) [33,34,35,36]. The resulting rationale is that encouraging the use of MI could increase the capacity of the motor cortex to control major muscle movements and decrease reliance on neural circuits damaged post-stroke. The scientific justification for this approach was first provided by Jeannerod [36], who suggested that the neural substrates of MI are part of a shared network that is also activated during the simulation of action through action observation (AO) [36]. These ‘mirror neuron’ systems are thought to be an important component of motor control and learning [36], hence the belief that activating these systems could aid rehabilitation. The use of an MI-BCI to control a robot, in comparison to traditional MI and physical practice, provides a number of benefits to its user and the practitioner. These advantages include the ability to sense physiological states, automate visual and/or kinaesthetic feedback, and enrich the task and increase user motivation through gamification. There are also general concerns around the utility of motor imagery without physical movement (and without the corresponding muscle development that physical movement brings), and it is possible that these issues could be overcome through a control strategy that progressively reduces the amount of support provided by the MI-BCI system and encourages active motor control [37, 38].

A recent meta-analysis of the neural correlates of action (MI, AO and ME) quantified ‘conjunct’ and ‘contrast’ networks in the cortical and subcortical regions [33]. This analysis, which took advantage of open-source historical data from fMRI studies, reported consistent activation in the premotor, parietal and somatosensory areas for MI, AO and ME. Predicated on such data, researchers have reasoned that performing MI should cause activation of the neural substrates that are also involved in controlling movement, and there have been a number of research projects that have used AO in combination with MI in neurorehabilitation [39,40,41] and motor learning studies [42, 43] over the last decade.

One implication of using MI and AO to justify the use of BCI approaches is that great care must be taken with regard to the quality of the environment in which the rehabilitation takes place. While people can learn to modulate their brain rhythms without using motor imagery, and there is variability across individuals in their ability to imagine motor actions, MI-driven BCI systems require (by design at least) that patients imagine a movement. Likewise, AO requires patients to clearly see the action. This suggests that the richness and vividness of the visual cues provided is an essential part of an effective BCI system. It is also reasonable to assume that feedback is important within these processes and thus the quality of feedback should be considered as essential. After all, MI and AO are just tools to modulate brain states [40] and the effectiveness of these tools varies from one stroke patient to another [44]. Finally, motivation is known to play an important role in promoting active participation during therapy [20, 45]. Thus, a good BCI system should incorporate an approach (such as gaming and positive reward) that increases motivation. Recent advances in technology make it far easier to create a rehabilitation environment that provides rich, vivid cues, gives salient feedback and is motivating. For example, the rise of immersive technologies, including virtual reality (VR) and augmented reality (AR) platforms [45,46,47], allows for the creation of engaging visual experiences that have the potential to improve a patient’s self-efficacy [48] and thereby encourage the patient to maintain the rehabilitation regime. One specific example of this is visually amplifying the movement made by a patient when the movement is of limited extent, so that the patient can see their efforts are producing results [49].
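
As a toy illustration of this last point, the snippet below applies a fixed visual gain to a small measured movement before it is rendered on screen. The gain value and the linear mapping are illustrative assumptions, not parameters reported in the cited study [49].

```python
def amplified_display_position(rest_pos, actual_pos, gain=2.5):
    """Map a measured finger/hand position to an exaggerated on-screen position
    so that a movement of limited extent still produces visible feedback."""
    return rest_pos + gain * (actual_pos - rest_pos)

# Example: a 4 mm real flexion is shown as 10 mm of on-screen movement.
print(amplified_display_position(rest_pos=0.0, actual_pos=4.0))  # -> 10.0
```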

In this review we set out to examine the literature to achieve a better understanding of the current value and potential of BCI-based robotic therapy with three specific objectives:

  1. Identify how BCI technologies are being utilised in controlling robotic devices for hand rehabilitation. Our focus was on the study design and the tasks that are employed in setting up a BCI-hand robot therapy protocol.
  2. Document the readiness of BCI systems. Because BCI for rehabilitation is still an emerging field of research, we expected that most studies would be in their proof-of-concept or clinical testing stages of development. Our purpose was to determine the limits of this technology in terms of: (a) resolution of hand MI detection and (b) the degree of robotic control.
  3. Evaluate the clinical significance of BCI-hand robot systems by looking at the outcome measures in motor recovery and determine if a standard protocol exists for these interventions.

It is important to note that there have been several recent reviews exploring BCI for stroke rehabilitation. For example, Monge-Pereira et al. [50] compiled EEG-based BCI studies for upper limb stroke rehabilitation. Their systematic review (involving 13 clinical studies on stroke and hemiplegic patients) reported on research methodological quality and improvements in the motor abilities of stroke patients. Cervera et al. [51] performed a meta-analysis on the clinical effectiveness of BCI-based stroke therapy among 9 randomised clinical trials (RCTs). McConnell et al. [52] undertook a narrative review of 110 robotic devices with brain–machine interfaces for hand rehabilitation post-stroke. These reviews, in general, have reported that such systems provide improvements in both functional and clinical outcomes in pilot studies or trials involving small sample sizes. Thus, the literature indicates that EEG-based BCIs are a promising general approach for rehabilitation post-stroke. The current work complements these previous reports by providing the first systematic examination of the use of BCI-robot systems for the rehabilitation of fine motor skills associated with hand movement and profiling these systems from a technical and clinical perspective. […]

Figure 3. Robotic hand rehabilitation devices: (a) an end-effector device (Haptic Knob) used in one of the extracted studies [75, 111]; (b) a wearable hand exoskeleton/orthosis
