Posts Tagged Brain Computer Interface

[ARTICLE] fNIRS-based Neurorobotic Interface for gait rehabilitation – Full Text

Abstract

Background

In this paper, a novel functional near-infrared spectroscopy (fNIRS)-based brain-computer interface (BCI) framework for control of prosthetic legs and rehabilitation of patients suffering from locomotive disorders is presented.

Methods

fNIRS signals are used to initiate and stop the gait cycle, while a nonlinear proportional derivative computed torque controller (PD-CTC) with gravity compensation is used to control the torques of the hip and knee joints for minimization of position error. In the present study, brain signals for walking-intention and rest tasks were acquired from the left hemisphere's primary motor cortex in nine subjects. Thereafter, for removal of motion artifacts and physiological noise, the performances of six different filters (i.e., Kalman, Wiener, Gaussian, hemodynamic response filter (hrf), band-pass, and finite impulse response) were evaluated. Then, six different features were extracted from oxygenated hemoglobin signals, and their different combinations were used for classification. Also, the classification performances of five different classifiers (i.e., k-nearest neighbour, quadratic discriminant analysis, linear discriminant analysis (LDA), Naïve Bayes, and support vector machine (SVM)) were tested.
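As a rough illustration of this processing chain, the sketch below band-pass filters oxygenated hemoglobin (HbO) trials, extracts a few simple features (mean, slope, peak, variance), and compares an SVM against an LDA classifier with cross-validation. The filter band, sampling rate, feature set, and scikit-learn-based implementation are illustrative assumptions, not the paper's actual settings.

# Illustrative sketch only (not the authors' code): band-pass filtering of HbO
# trials, simple feature extraction, and classifier comparison with scikit-learn.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 10.0  # fNIRS sampling rate in Hz (assumed)

def bandpass(hbo, low=0.01, high=0.2, fs=FS):
    """Band-pass filter one HbO trial to suppress drifts and physiological noise."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, hbo)

def trial_features(trial, fs=FS):
    """Per-trial features: signal mean, slope, peak and variance (illustrative set)."""
    t = np.arange(len(trial)) / fs
    slope = np.polyfit(t, trial, 1)[0]
    return [trial.mean(), slope, trial.max(), trial.var()]

def compare_classifiers(trials, labels):
    """trials: list of 1-D HbO arrays; labels: 0 = rest, 1 = walking intention."""
    X = np.array([trial_features(bandpass(tr)) for tr in trials])
    y = np.array(labels)
    for name, clf in [("SVM", SVC(kernel="linear")),
                      ("LDA", LinearDiscriminantAnalysis())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.1%}")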

Results

The classification accuracies obtained from the SVM using the hrf were significantly higher (p < 0.01) than those of the other classifier/filter combinations. Those accuracies were 77.5, 72.5, 68.3, 74.2, 73.3, 80.8, 65, 76.7, and 86.7% for the nine subjects, respectively.

Conclusion

The control commands generated using the classifiers initiated and stopped the gait cycle of the prosthetic leg, the knee and hip torques of which were controlled using the PD-CTC to minimize the position error. The proposed scheme can be effectively used for neurofeedback training and rehabilitation of lower-limb amputees and paralyzed patients.

Background

Neurological disability due specifically to stroke or spinal cord injury can profoundly affect the social life of paralyzed patients [1–3]. The resultant gait impairment is a large contributor to ambulatory dysfunction [4]. In order to regain complete functional independence, physical rehabilitation remains the mainstay option; however, owing to the significant expense of health care and the redundancy of therapy sessions, robotic rehabilitation devices have been developed as alternatives to traditional, expensive and time-consuming exercises in busy daily life. In the past, similar training sessions on treadmills performed using robotic mechanisms have shown better functional outcomes [1, 2, 5–7]. However, these devices have limitations particular to given research and clinical settings. Therefore, wearable upper- and lower-limb robotic devices have been developed [7, 8], which assist users by actuating joints for partial or complete movement based on brain intentions, according to individual patient needs.

To date, various noninvasive modalities, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), have been used to acquire brain signals. fNIRS is a relatively new modality that detects brain intention via changes in the hemodynamic response. Its fewer artifacts, better spatial resolution and acceptable temporal resolution make it a promising choice for applications such as rehabilitation and mental-task studies [9–20]. The main brain-computer interface (BCI) challenge in this regard is to extract useful information from raw brain signals for control-command generation [21–23]. Acquired signals are processed in the following four stages: preprocessing, feature extraction, classification, and command generation. In preprocessing, physiological and instrumental artifacts and noises are removed [24, 25]. After this filtration stage, feature extraction proceeds in order to gather useful information. Then, the extracted features are classified using different classifiers. Finally, the trained classifier is used to generate control commands based on a trained model [23]. Figure 1 shows a schematic of a BCI.

Fig. 1 Schematic of BCI
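To make the final, command-generation stage concrete, here is a hypothetical sketch (not taken from the paper) in which the classifier's per-window output is turned into start/stop gait commands only after several consecutive windows agree, which avoids spurious toggling of the prosthetic leg.

# Hypothetical command-generation logic (an assumption, not the paper's method).
from collections import deque

class GaitCommandGenerator:
    def __init__(self, required_agreement=3):
        self.history = deque(maxlen=required_agreement)
        self.walking = False

    def update(self, label):
        """label: 'walk' or 'rest' from the trained classifier for the latest window."""
        self.history.append(label)
        if len(self.history) == self.history.maxlen and len(set(self.history)) == 1:
            should_walk = (self.history[0] == "walk")
            if should_walk != self.walking:
                self.walking = should_walk
                return "START_GAIT" if should_walk else "STOP_GAIT"
        return None  # no change of state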

[…]

via fNIRS-based Neurorobotic Interface for gait rehabilitation | Journal of NeuroEngineering and Rehabilitation | Full Text


[ARTICLE] Application of P300 Event-Related Potential in Brain-Computer Interface – Full Text

Abstract

The primary purpose of this chapter is to demonstrate one of the applications of the P300 event-related potential (ERP), namely the brain-computer interface (BCI). Researchers and students will find the chapter appealing, as it opens with a preliminary description of the P300 ERP. The chapter also highlights the importance and advantages of the noninvasive ERP technique. In noninvasive BCI, P300 ERPs are extracted from brain electrical activity [the electroencephalogram (EEG)] as a signature of the underlying electrophysiological mechanism of brain responses to external or internal changes and events. As the chapter proceeds, it covers relevant scholarly work on challenges and new directions in P300 BCI. Along with these, articles and references on the advancement of this technique are presented to ensure that the scholarly reviews are accessible to people who are new to this field. To enhance fundamental understanding, stimulation and signal-processing methods are discussed from several novel works, with a comparison of the associated results. This chapter meets the need for a concise and practical description of basic as well as advanced P300 ERP techniques, suitable for a broad range of researchers extending from today's novice to the experienced cognitive researcher.

1. Introduction

The human brain is the most complex organ of the body and serves as the control center of the human nervous system. In fact, more than 100 billion nerve cells are interconnected to build the functionality of the human brain. Such a complicated architecture allows the brain to control the body as well as carry out executive functions such as reasoning, processing thoughts, and planning future tasks. Electrophysiology and measurement of the hemodynamic response are the two techniques that have been used to study this complex organ and to understand the mechanisms the brain applies to accomplish tasks. Typically, electrophysiological measurements are performed by placing electrodes or sensors on biological tissue [1, 2]. In neuroscience and neuro-engineering, electrophysiological techniques are used to study electrical properties by measuring the electrical activity of neurons in the form of the electroencephalogram (EEG). EEG may be measured by two different approaches: invasive and noninvasive. Invasive procedures need surgery to place the EEG sensor deep under the scalp. In comparison, the noninvasive procedure places the electrodes on the scalp. One of the ways to study the brain is to stimulate it by presenting a paradigm.

The event-related potential (ERP) was first reported by Sutton [3]. An ERP is an electrophysiological response, or electrocortical potential, triggered by a stimulus and the associated firing of neurons. A specific psychological event or sensory stimulus can be employed to generate the stimulation. In general, visual, auditory, and tactile stimuli are the three major sources of ERP stimulation. For instance, an ERP can be elicited by the surprise appearance of a character on a visual screen, by a "novel" tone presented over earphones, or by the sudden pressing of a button by the subject, among a myriad of other events. The presented stimulus generates a detectable but time-delayed electrical wave in the EEG. The EEG is recorded from the time the stimulus is presented until the EEG settles down. Depending on the necessity, simple detection methods such as ensemble averaging, or advanced methods such as linear discriminant analysis or support vector machine algorithms, are applied to the EEG to measure the ERP. This chapter discusses the application of ERP in the brain-computer interface (BCI), where the P300 wave is of particular interest. An ERP is time-locked to an event and appears as a series of positive and negative voltage fluctuations in the EEG; the P300 is one such component.
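Ensemble averaging, mentioned above as the simplest ERP detection method, can be sketched in a few lines. The example below assumes the EEG has already been segmented into equal-length epochs that start a fixed interval before stimulus onset; the sampling rate and baseline length are illustrative.

# Minimal ensemble-averaging sketch for ERP extraction (illustrative parameters).
import numpy as np

def erp_average(epochs, fs, baseline_s=0.2):
    """epochs: array (n_trials, n_samples) for one channel, each epoch starting
    baseline_s seconds before stimulus onset. Baseline-correct each trial with its
    pre-stimulus mean, then average across trials: stimulus-locked components such
    as the P300 remain while non-time-locked background EEG averages toward zero."""
    n_base = int(baseline_s * fs)
    corrected = epochs - epochs[:, :n_base].mean(axis=1, keepdims=True)
    return corrected.mean(axis=0)

# Example: with fs = 250 Hz and epochs from -0.2 s to +0.8 s around onset, the P300
# would be sought roughly 250-500 ms after stimulus onset in the returned average.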

2. P300 waveform

The P300 is a form of visually evoked potential (VEP), and the P300 ERP is embedded within the EEG signal recordable from the human scalp. Depending on when the components appear following the eliciting event, the P300 can be divided into exogenous and endogenous components. Early (exogenous) components are distributed over the first 150 ms, whereas longer-latency (endogenous) components are elicited after 150 ms. Although the P300 positive deflection occurs in the EEG about 300 ms after an eliciting stimulus is delivered (which is the major reason it is termed P300), its latency can range from 250 to 750 ms.

Although the actual origin of the P300 is still unclear, it has been suggested that the P300 is elicited by decision making or by learning that a rare event has occurred, and some things appear to be learned if and only if they are surprising [4]. The variable latency is associated with the difficulty of the decision making. In addition, the largest P300 responses are obtained over the parietal zone of the head, and the response is attenuated at electrodes placed progressively farther from this area.

To generate the P300 ERP, three different types of paradigms are used: (1) the single-stimulus, (2) the oddball, and (3) the three-stimulus paradigm. In each case, the subject is instructed to follow the occurrence of the target by pressing a button or counting mentally [5]. Figure 1 presents these paradigms [5, 6]. The single-stimulus paradigm irregularly presents just one type of stimulus, the target, with no other stimulus type occurring. A typical oddball paradigm can be presented to the subject with a computer screen, a group of light-emitting diodes (LEDs), or another medium that generates a sequence of events falling into two classes: frequently presented standard (nontarget or irrelevant) stimuli and rarely presented target stimuli [7]. In an oddball paradigm, two events are presented with different probabilities in a random order, but only the irregular and rare event (the oddball event) elicits the P300 peak in the EEG about 300 ms after stimulus onset. The three-stimulus paradigm is a modified oddball task which includes nontarget distractor (infrequent nontarget) stimuli in addition to target and standard stimuli. The distractor elicits a P3a, which is largest over the frontal/central area [8]. In contrast, the target elicits a P3b (P300), which is maximal over the parietal electrode sites. Though the P3a and P3b are subcomponents of the P300, the P3a is dominant in the frontal/central region, has a shorter latency, and habituates faster [9].


FIGURE 1.
Schematic account of three paradigms: single-stimulus (top), oddball (middle), and three-stimulus (bottom). Elicited ERP is presented at right (adapted from Ref. [5]).
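For illustration, a randomized oddball sequence with the properties described above (rare targets among frequent standards) can be generated as follows; the 20% target probability and the rule forbidding back-to-back targets are assumptions made for the example, not requirements of the paradigm.

# Illustrative oddball stimulus-sequence generator (assumed parameters).
import random

def oddball_sequence(n_events=200, p_target=0.2, seed=0):
    rng = random.Random(seed)
    seq = []
    for _ in range(n_events):
        if seq and seq[-1] == "target":
            seq.append("standard")  # avoid presenting two targets in a row
        else:
            seq.append("target" if rng.random() < p_target else "standard")
    return seq

sequence = oddball_sequence()
print(sequence[:12], "target rate:", sequence.count("target") / len(sequence))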

[…]

Continue —>  Application of P300 Event-Related Potential in Brain-Computer Interface | InTechOpen


[ARTICLE] Hemiparetic Stroke Rehabilitation Using Avatar and Electrical Stimulation Based on Non-invasive Brain Computer Interface – Full Text

Abstract
Brain computer interfaces (BCIs) have been employed in rehabilitation training for post-stroke patients. Patients in the chronic stage, and/or with severe paresis, are particularly challenging for conventional rehabilitation. We present results from two such patients who participated in BCI training with first-person avatar feedback. Five assessments were conducted to assess any behavioural changes after the intervention, including the upper extremity Fugl-Meyer assessment (UE-FMA) and 9 hole-peg test (9HPT). Patient 1 (P1) increased his UE-FMA score from 25 to 46 points after the intervention. He could not perform the 9HPT in the first session. After the 18th session, he was able to perform the 9HPT and reduced the time from 10 min 22 sec to 2 min 53 sec. Patient 2 (P2) increased her UE-FMA from 17 to 28 points after the intervention. She could not perform the 9HPT throughout the training session. However, she managed to complete the test in 17 min 17 sec during the post-assessment session.
These results show the feasibility of this BCI approach with chronic patients with severe paresis, and further support the growing consensus that these types of tools might develop into a new rehabilitation paradigm for stroke patients. However, the results are from only two chronic stroke patients. This approach should be further validated in broader randomized controlled studies involving more patients.

Download Full Text PDF


[Abstract] Combined rTMS and virtual reality brain-computer interface training for motor recovery after stroke

Abstract

Objective. Combining repetitive transcranial magnetic stimulation (rTMS) with brain-computer interface (BCI) training can address motor impairment after stroke by down-regulating exaggerated inhibition from the contralesional hemisphere and encouraging ipsilesional activation. The objective was to evaluate the efficacy of combined rTMS+BCI, compared to sham rTMS+BCI, on motor recovery after stroke in subjects with lasting motor paresis. Approach. Three stroke subjects approximately one year post-stroke participated in three weeks of combined rTMS (real or sham) and BCI, followed by three weeks of BCI alone. Behavioral and electrophysiological differences were evaluated at baseline, after three weeks, and after six weeks of treatment. Main Results. Motor improvements were observed in both real rTMS+BCI and sham groups, but only the former showed significant alterations in inter-hemispheric inhibition in the desired direction and increased relative ipsilesional cortical activation from fMRI. In addition, significant improvements in BCI performance over time and adequate control of the virtual reality BCI paradigm were observed only in the former group. Significance. When combined, the results highlight the feasibility and efficacy of combined rTMS+BCI for motor recovery, demonstrated by increased ipsilesional motor activity and improvements in behavioral function for the real rTMS+BCI condition in particular. Our findings also demonstrate the utility of BCI training alone, as demonstrated by behavioral improvements for the sham rTMS+BCI condition. This study is the first to evaluate combined rTMS and BCI training for motor rehabilitation and provides a foundation for continued work to evaluate the potential of both rTMS and virtual reality BCI training for motor recovery after stroke.

Source: Combined rTMS and virtual reality brain-computer interface training for motor recovery after stroke – IOPscience


[BLOG POST] Brain Computer Interfaces (That Translate Human Thought To Direct Action): Their Evolution And Future

A graphic depicting two brains connected to each other: two human outlines face each other, with the connection between their brains shown as a line of dots.

In the last few years, we have read quite a bit about how technology has allowed our brain to control devices or objects around us without the use of limbs. (If you haven't, you can read about some examples here, here, and here.) Futurism.com, a great website that posts about how human potential can be maximized, has this infographic that explains the basics of brain-computer interfaces – the use of technology to translate human thoughts into machine commands. We are seeing the use of BCI more and more with prosthetic limbs, but where does it end? Will we be able to upload our memories straight from our brain to the cloud in the future? The sky is the limit when it comes to innovation through technology.

Read this infographic to learn the types of brain-computer interfaces, their origin, what they have in store for us in the future, and how they can bridge the gap between the disabled and the able-bodied. The text version of the infographic is right below the image.

The Evolution of Brain Computer Interfaces. Text is available in the original post right below the image.

 

Imagine a world where machines can be controlled by thought alone. This is the promise of brain-computer interfaces (BCIs) – using computers to decode and translate human thoughts into machine commands. Here’s a look at the evolution of BCI technology, its current state, and future prospects.

Invasive: Signal-transmitting devices are implanted directly in the brain's gray matter. This method produces the highest quality signals, but scar tissue buildup can cause signal degradation.

Partially Invasive: Devices are implanted within the skull but not within the brain tissue. They produce higher quality signals than noninvasive techniques by circumventing the skull's dampening effect on transmissions, and carry less risk of scar tissue buildup.

Noninvasive: Involves simple wearables that register the electromagnetic transmissions of neurons, with no expensive or dangerous surgery needed. This technique is certainly easier, but suffers from poor resolution caused by the skull's interference with signals.

A Short History of BCI

1924: German neuroscientist Hans Berger discovers neuroelectrical activity using electroencephalography (EEG).

1970: The Defense Advanced Research Projects Agency (DARPA) begins to explore the potential BCI applications of EEG technology.

1998: First brain implant produces high quality signals.

2005: A monkey’s brain is successfully used to control a robotic arm.

2014: Direct brain-to-brain communication achieved by transmitting EEG signals over the internet.

Types of Noninvasive BCI

  • Eye movement and pupil size oscillation
  • Electroencephalography
  • Magnetic resonance imaging and magnetoencephalography

Applications of BCI

  • Direct mental control of prosthetic limbs.
  • Neurogaming – interaction within video game and virtual reality environments without the need for a clumsy interface.
  • Synthetic telepathy – the establishment of a direct mental connection or communications pathway between minds.
  • The use of BCI in tele-robotics will allow human operators to directly "link" with robotic machines – granting us a new way to explore alien worlds, handle dangerous materials, and perform remote surgery.
  • A wealth of new possibilities for interfacing with computers opens up – including linking to the internet, uploading memories to the cloud, etc. It will effectively erase the divide between the disabled and the able-bodied.

Sources:

National Academy of Engineering, Techradar, Brain Vision UK, PLOS ONE

This infographic was originally posted on futurism.com.

Source: Brain Computer Interfaces (That Translate Human Thought To Direct Action): Their Evolution And Future – Assistive Technology Blog


[ARTICLE] Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial – Full Text

Repeated use of brain-computer interfaces (BCIs) providing contingent sensory feedback of brain activity was recently proposed as a rehabilitation approach to restore motor function after stroke or spinal cord lesions. However, only a few clinical studies have investigated the feasibility and effectiveness of such an approach. Here we report on a placebo-controlled, multicenter clinical trial that investigated whether stroke survivors with severe upper limb (UL) paralysis benefit from 10 BCI training sessions each lasting up to 40 min. A total of 74 patients participated: the median time since stroke was 8 months (25 and 75% quartiles [3.0; 13.0]); the median severity of UL paralysis was 4.5 points [0.0; 30.0] as measured by the Action Research Arm Test (ARAT) and 19.5 points [11.0; 40.0] as measured by the Fugl-Meyer Motor Assessment (FMMA). Patients in the BCI group (n = 55) performed motor imagery of opening their affected hand. Motor imagery-related brain electroencephalographic activity was translated into contingent hand exoskeleton-driven opening movements of the affected hand. In a control group (n = 19), hand exoskeleton-driven opening movements of the affected hand were independent of brain electroencephalographic activity. Evaluation of the UL clinical assessments indicated that both groups improved, but only the BCI group showed an improvement in the ARAT grasp score from 0.0 [0.0; 14.0] to 3.0 [0.0; 15.0] points (p < 0.01) and in the pinch score from 0.0 [0.0; 7.0] to 1.0 [0.0; 12.0] points (p < 0.01). Upon training completion, 21.8% and 36.4% of the patients in the BCI group improved their ARAT and FMMA scores, respectively. The corresponding numbers for the control group were 5.1% (ARAT) and 15.8% (FMMA). These results suggest that adding BCI control to exoskeleton-assisted physical therapy can improve post-stroke rehabilitation outcomes. Both the maximum and mean values of the percentage of successfully decoded imagery-related EEG activity were higher than chance level. A correlation between classification accuracy and improvement in upper extremity function was found. An improvement in motor function was found for patients with different durations, severities and locations of stroke.

Introduction

Motor imagery (Page et al., 2001), or mental practice, has attracted considerable interest as a potential neurorehabilitation technique for improving motor recovery following stroke (Jackson et al., 2001). According to the Guidelines for adult stroke rehabilitation and recovery (Winstein et al., 2016), mental practice may prove beneficial as an adjunct to upper extremity rehabilitation services (Winstein et al., 2016). Several studies suggest that motor imagery can trigger neuroplasticity in ipsilesional motor cortical areas despite severe paralysis after stroke (Grosse-Wentrup et al., 2011; Shih et al., 2012; Mokienko et al., 2013b; Soekadar et al., 2015).

The effect of motor imagery on motor function and neuroplasticity has been demonstrated in numerous neurophysiological studies in healthy subjects. Motor imagery has been shown to activate the primary motor cortex (M1) and brain structures involved in planning and control of voluntary movements (Shih et al., 2012; Mokienko et al., 2013a,b; Frolov et al., 2014). For example, it was shown that motor imagery of fist clenching reduces the excitation threshold of motor evoked potentials (MEP) elicited by transcranial magnetic stimulation (TMS) delivered to M1 (Mokienko et al., 2013b).

As motor imagery results in specific modulations of brain electroencephalographic (EEG) signals, e.g., sensorimotor rhythms (SMR) (Pfurtscheller and Aranibar, 1979), it can be used to voluntarily control an external device, e.g., a robot or exoskeleton, using a brain-computer interface (BCI) (Nicolas-Alonso and Gomez-Gil, 2012). Such a system, which allows voluntary control of an exoskeleton moving a paralyzed limb, can be used as an assistive device restoring lost function (Maciejasz et al., 2014). Besides receiving visual feedback, the user receives haptic and kinesthetic feedback which is contingent upon the imagination of a specific movement.
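As a rough illustration of how SMR modulations can be turned into a control signal, the sketch below estimates mu-band (8-12 Hz) power over a sensorimotor channel and flags motor imagery when that power drops below a fraction of a resting baseline (event-related desynchronization). The sampling rate, band limits and threshold are assumptions, not values taken from the studies cited here.

# Illustrative SMR-based detection of motor imagery (assumed parameters).
import numpy as np
from scipy.signal import welch

FS = 250  # EEG sampling rate in Hz (assumed)

def mu_power(segment, fs=FS, band=(8, 12)):
    """Mean power spectral density of one EEG channel within the mu band."""
    f, psd = welch(segment, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean()

def imagery_detected(segment, baseline_power, erd_threshold=0.7):
    """Event-related desynchronization: mu power falls below a fraction of the
    resting baseline while the user imagines the movement."""
    return mu_power(segment) < erd_threshold * baseline_power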

Several BCI studies involving this type of haptic and kinesthetic feedback have demonstrated improvements in clinical parameters of post-stroke motor recovery (Ramos-Murguialday et al., 2013; Ang et al., 2014, 2015; Ono et al., 2014). The number of subjects with post-stroke upper extremity paresis included in these studies was, however, relatively low [from 12 (Ono et al., 2014) to 32 (Ramos-Murguialday et al., 2013) patients]. As BCI-driven external devices, a haptic knob (Ang et al., 2014), the MIT-Manus (Ang et al., 2015), or a custom-made orthotic device (Ramos-Murguialday et al., 2013; Ono et al., 2014) were used. Furthermore, several other studies reported on using BCI-driven exoskeletons in patients with post-stroke hand paresis (Biryukova et al., 2016; Kotov et al., 2016; Mokienko et al., 2016), but these reports did not test for clinical efficacy and did not include a control group. While very promising, it still remains unclear whether BCI training is an effective tool to facilitate motor recovery after stroke or other lesions of the central nervous system (CNS) (Teo and Chew, 2014).

Here we report a randomized controlled multicenter study investigating whether 10 sessions of BCI-controlled hand-exoskeleton active training after subacute and chronic stroke yield a better clinical outcome than 10 sessions in which hand-exoskeleton-induced passive movements were not controlled by motor imagery-related modulations of brain activity. Besides assessing the effect of BCI training on clinical scores such as the ARAT and FMMA, we tested whether improvements in upper extremity function correlate with the patient's ability to generate motor imagery-related modulations of EEG activity. […]
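The correlation analysis mentioned above reduces to relating per-patient decoding accuracy to the change in a clinical score. The sketch below uses a Pearson correlation on hypothetical numbers; the values are placeholders, not the study's data.

# Illustrative correlation between BCI decoding accuracy and clinical improvement.
import numpy as np
from scipy.stats import pearsonr

accuracy = np.array([0.55, 0.62, 0.71, 0.66, 0.78])  # hypothetical per-patient accuracy
delta_arat = np.array([1.0, 2.0, 4.0, 3.0, 6.0])     # hypothetical ARAT score change

r, p = pearsonr(accuracy, delta_arat)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")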

Continue —> Frontiers | Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial | Neuroscience

 

Figure 1. The subject flow diagram from recruitment through analysis (Consolidated Standards of Reporting Trials flow diagram).


[WEB SITE] Stroke rehabilitation gets personalised and interactive – CORDIS

Stroke rehabilitation gets personalised and interactive

The significant socioeconomic costs of stroke, coupled with the rise in Europe's ageing population, highlight the need for effective but affordable stroke rehabilitation programmes. EU researchers made considerable headway in this regard through novel rehabilitation paradigms.

Computer-mediated rehabilitation tools typically require a high degree of motor control and are therefore inadequate for patients with significant motor impairment. Consequently, many stroke survivors are unable to benefit. The REHABNET (Neuroscience based interactive systems for motor rehabilitation) project came up with an innovative approach to address this critical need.

Researchers successfully developed a hybrid brain-computer interface (BCI)-virtual reality (VR) system that assesses user capability and dynamically adjusts its difficulty level. This motor imagery-based BCI system is tailored to meet the needs of patients using a VR environment for game training coupled with neurofeedback through multimodal sensing technologies.
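The project's published summary does not spell out the adaptation rule, but a dynamic difficulty adjustment of the kind described can be sketched as a simple staircase: raise the task level when the recent success rate is high, lower it when the user struggles. Everything below (window size, thresholds) is an assumption made for illustration.

# Hypothetical dynamic-difficulty staircase (not the REHABNET implementation).
from collections import deque

class DifficultyAdapter:
    def __init__(self, level=1, window=10, raise_at=0.8, lower_at=0.4):
        self.level = level
        self.results = deque(maxlen=window)
        self.raise_at, self.lower_at = raise_at, lower_at

    def record_trial(self, success):
        """Record one trial outcome and return the (possibly updated) difficulty level."""
        self.results.append(bool(success))
        if len(self.results) == self.results.maxlen:
            rate = sum(self.results) / len(self.results)
            if rate >= self.raise_at:
                self.level += 1
                self.results.clear()
            elif rate <= self.lower_at and self.level > 1:
                self.level -= 1
                self.results.clear()
        return self.level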

The game training scenarios address both cognitive and motor abilities. The four rehabilitation scenarios include bimanual motor training, dual cognitive-motor training and a simulated city for training in daily living activities.

Pilot and longitudinal studies demonstrated the benefits of longitudinal VR training as compared to existing rehabilitation regimens. The self-report questionnaires also revealed a high user acceptance of the novel system.

Designed for at-home use, the REHABNET toolset is platform-independent and freely available globally as an app (Reh@Mote). Besides providing deeper insight into factors affecting stroke recovery, it could aid in the further improvement of rehabilitation strategies. More importantly, these low-cost toolsets could also address the needs of patients with severe motor and cognitive deficits. Efforts are ongoing to facilitate future commercial exploitation through a technology transfer agreement.


Source: European Commission : CORDIS : Projects and Results : Stroke rehabilitation gets personalised and interactive


[Review] Review of devices used in neuromuscular electrical stimulation for stroke rehabilitation – PDF

Abstract

Neuromuscular electrical stimulation (NMES), specifically functional electrical stimulation (FES) that compensates for voluntary motion, and therapeutic electrical stimulation (TES) aimed at muscle strengthening and recovery from paralysis, are widely used in stroke rehabilitation. Electrical stimulation of muscle contraction should be synchronized with intended motion in order to restore paralyzed movement. Therefore, NMES devices, which monitor electromyogram (EMG) or electroencephalogram (EEG) changes associated with motor intention and use them as a trigger, have been developed. Devices that modify the current intensity of NMES based on EMG or EEG have also been proposed. Given the diversity in NMES devices and stimulation methods, the aim of the current review was to introduce some commercial FES and TES devices and application methods, which depend on the condition of the patient with stroke, including the degree of paralysis.
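As a simplified illustration of the EMG-triggered devices discussed in the review, the sketch below enables stimulation only while the rectified, smoothed EMG envelope exceeds a calibrated fraction of the patient's maximum voluntary contraction, keeping stimulation synchronized with intended motion. The thresholds, window length and software-only formulation are assumptions; real NMES devices implement such logic in dedicated hardware.

# Illustrative EMG-triggered stimulation gate (assumed parameters).
import numpy as np

def emg_envelope(emg, fs, win_s=0.1):
    """Rectify raw EMG and smooth it with a moving-average window."""
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg), kernel, mode="same")

def stimulation_on(emg, fs, mvc_envelope, threshold_frac=0.1):
    """Boolean array: True where stimulation should be delivered, i.e. where the
    envelope exceeds threshold_frac of the maximum-voluntary-contraction envelope."""
    return emg_envelope(emg, fs) > threshold_frac * mvc_envelope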

Download Full Text PDF


[CORDIS Project] Motor Recovery with Paired Associative Stimulation (RecoveriX) – European Commission

Motor Recovery with Paired Associative Stimulation (RecoveriX)

Objective

Source: European Commission : CORDIS : Projects and Results : Motor Recovery with Paired Associative Stimulation (RecoveriX)


[ARTICLE] Personalized Brain-Computer Interface Models for Motor Rehabilitation – Full Text PDF

Abstract

We propose to fuse two currently separate research lines on novel therapies for stroke rehabilitation: brain-computer interface (BCI) training and transcranial electrical stimulation (TES). Specifically, we show that BCI technology can be used to learn personalized decoding models that relate the global configuration of brain rhythms in individual subjects (as measured by EEG) to their motor performance during 3D reaching movements. We demonstrate that our models capture substantial across-subject heterogeneity, and argue that this heterogeneity is a likely cause of limited effect sizes observed in TES for enhancing motor performance. We conclude by discussing how our personalized models can be used to derive optimal TES parameters, e.g., stimulation site and frequency, for individual patients.

I. INTRODUCTION
Motor deficits are one of the most common outcomes of stroke. According to the World Health Organization, 15 million people worldwide suffer a stroke each year. Of these, five million are permanently disabled. For this third of patients, upper limb weakness and loss of hand function are among the most devastating types of disabilities, affecting the quality of their daily life [1]. Despite a wide range of rehabilitation therapies, including medication treatment [2], conventional physiotherapy [3], and robot-assisted physiotherapy [4], only approximately 20% of patients achieve some form of functional recovery in the first six months [5], [6].

Current research on novel therapies includes neurofeedback training based on brain-computer interface (BCI) technology and transcranial electrical stimulation (TES). The former approach attempts to support cortical reorganization by providing haptic feedback with a robotic exoskeleton that is congruent with movement attempts, as decoded in real time from neuroimaging data [7], [8]. The latter line of research aims to reorganize cortical networks in a way that supports motor performance, because post-stroke alterations of cortical networks have been found to correlate with the severity of motor deficits [9], [10]. While initial evidence suggested that both approaches, BCI-based training [11] and TES [12], have a positive impact, the significance of these results over conventional physiotherapy has not been consistently achieved across studies [13], [14], [15].

One potential explanation for the difficulty in replicating the initially promising findings is the heterogeneity of stroke patients. Different locations of stroke-induced structural changes are likely to result in substantial across-patient variance in the functional reorganization of cortical networks. As a result, not all patients may benefit from the same neurofeedback or stimulation protocol. We thus propose to fuse these two research themes and use BCI technology to learn personalized models that relate the configuration of cortical networks to each patient's motor deficits. These personalized models may then be used to predict which TES parameters, e.g., spatial location and frequency band, optimally support rehabilitation in each individual patient.

In this study, we address the first step towards personalized TES for stroke rehabilitation. Using a transfer learning framework developed in our group [16], we show how to create personalized decoding models that relate the EEG of healthy subjects during a 3D reaching task to their motor performance in individual trials. We further demonstrate that the resulting decoding models capture substantial across-subject heterogeneity, thereby providing empirical support for the need to personalize models. We conclude by reviewing our findings in the light of TES studies on improving motor performance in healthy subjects, and discuss how personalized TES parameters may be derived from our models. […]
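A personalized decoding model of the kind described can be sketched, in its simplest form, as one regularized regression per subject from trial-wise EEG band-power features to a trial-wise motor-performance score. The sketch below uses plain ridge regression and is not the transfer-learning framework of [16]; it only illustrates how per-subject model weights would expose across-subject heterogeneity.

# Minimal per-subject decoding model (illustrative; not the framework in [16]).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def fit_personalized_model(band_power, performance):
    """band_power: (n_trials, n_features) EEG features for one subject;
    performance: (n_trials,) motor-performance score, e.g. movement time."""
    model = Ridge(alpha=1.0)
    r2 = cross_val_score(model, band_power, performance, cv=5, scoring="r2").mean()
    model.fit(band_power, performance)
    return model, r2  # comparing model.coef_ across subjects reveals heterogeneity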

Full Text PDF

