Posts Tagged Brain Computer Interface

[ARTICLE] Personalized Brain-Computer Interface Models for Motor Rehabilitation – Full Text PDF

Abstract

We propose to fuse two currently separate research lines on novel therapies for stroke rehabilitation: brain-computer interface (BCI) training and transcranial electrical stimulation (TES). Specifically, we show that BCI technology can be used to learn personalized decoding models that relate the global configuration of brain rhythms in individual subjects (as measured by EEG) to their motor performance during 3D reaching movements. We demonstrate that our models capture substantial across-subject heterogeneity, and argue that this heterogeneity is a likely cause of limited effect sizes observed in TES for enhancing motor performance. We conclude by discussing how our personalized models can be used to derive optimal TES parameters, e.g., stimulation site and frequency, for individual patients.

I. INTRODUCTION
Motor deficits are one of the most common outcomes of stroke. According to the World Health Organization, 15 million people worldwide suffer a stroke each year. Of these, five million are permanently disabled. For this third of stroke survivors, upper limb weakness and loss of hand function are among the most devastating disabilities, severely affecting the quality of their daily lives [1]. Despite a wide range of rehabilitation therapies, including medication treatment [2], conventional physiotherapy [3], and robot-assisted physiotherapy [4], only approximately 20% of patients achieve some form of functional recovery in the first six months [5], [6].

Current research on novel therapies includes neurofeedback training based on brain-computer interface (BCI) technology and transcranial electrical stimulation (TES). The former approach attempts to support cortical reorganization by providing haptic feedback with a robotic exoskeleton that is congruent with movement attempts, as decoded in real time from neuroimaging data [7], [8]. The latter line of research aims to reorganize cortical networks in a way that supports motor performance, because post-stroke alterations of cortical networks have been found to correlate with the severity of motor deficits [9], [10]. While initial evidence suggested that both approaches, BCI-based training [11] and TES [12], have a positive impact, not all studies found these effects to be significant relative to conventional physiotherapy [13], [14], [15].

One potential explanation for the difficulty in replicating the initially promising findings is the heterogeneity of stroke patients. Different locations of stroke-induced structural changes are likely to result in substantial across-patient variance in the functional reorganization of cortical networks. As a result, not all patients may benefit from the same neurofeedback or stimulation protocol. We thus propose to fuse these two research themes and use BCI technology to learn personalized models that relate the configuration of cortical networks to each patient’s motor deficits. These personalized models may then be used to predict which TES parameters, e.g., spatial location and frequency band, optimally support rehabilitation in each individual patient.

In this study, we address the first step towards personalized TES for stroke rehabilitation. Using a transfer learning framework developed in our group [16], we show how to create personalized decoding models that relate the EEG of healthy subjects during a 3D reaching task to their motor performance in individual trials. We further demonstrate that the resulting decoding models capture substantial across-subject heterogeneity, thereby providing empirical support for the need to personalize models. We conclude by reviewing our findings in the light of TES studies to improve motor performance in healthy subjects, and discuss how personalized TES parameters may be derived from our models.[…]

Full Text PDF


[WEB SITE] Neuroprosthetics: Recovering from injury using the power of your mind

Neuroprosthetics, also known as brain-computer interfaces, are devices that help people with motor or sensory disabilities to regain control of their senses and movements by creating a connection between the brain and a computer. In other words, this technology enables people to move, hear, see, and touch using the power of thought alone. How do neuroprosthetics work? We take a look at five major breakthroughs in this field to see how far we have come – and how much farther we can go – using just the power of our minds.
[Woman with electrodes attached to skull]

Using electrodes, a computer, and the power of thought, neuroprosthetic devices can help patients with motor or sensory difficulties to move, feel, hear, and see.

Every year, hundreds of thousands of people worldwide lose control of their limbs as a result of an injury to their spinal cord. In the United States, up to 347,000 people are living with spinal cord injury (SCI), and almost half of these people cannot move from the neck down.

For these people, neuroprosthetic devices can offer some much-needed hope.

Brain-computer interfaces (BCIs) usually involve electrodes – placed on the human skull, on the brain’s surface, or in the brain’s tissue – that monitor and measure the brain activity that occurs when the brain “thinks” a thought. The pattern of this brain activity is then “translated” by an algorithm into a code that is “fed” into a computer. The computer, in turn, transforms the code into commands that produce movement.
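To make that translation step concrete, here is a minimal, hypothetical sketch of such a decode loop in Python: an EEG window is band-pass filtered, reduced to a feature vector, and passed to a trained classifier whose output is mapped to a movement command. The sampling rate, frequency band, feature, and classifier interface are illustrative assumptions, not details from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed EEG sampling rate (Hz)

def bandpass(eeg, low=8.0, high=30.0, fs=FS):
    """Keep the sensorimotor band of a (channels x samples) EEG window."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def extract_features(eeg):
    """Log band power per channel: a simple, widely used EEG feature."""
    return np.log(np.var(bandpass(eeg), axis=-1) + 1e-12)

def decode_command(eeg, classifier, commands=("rest", "move")):
    """Turn one EEG window into a discrete command via a trained classifier."""
    features = extract_features(eeg)[np.newaxis, :]  # shape (1, n_channels)
    return commands[int(classifier.predict(features)[0])]
```

Here `classifier` stands in for any trained model with a scikit-learn-style `predict` method; in a real system it would be fitted on labeled EEG windows recorded during calibration.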

Neuroprosthetics are not just useful for people who cannot move their arms and legs; they also help those with sensory disabilities. The World Health Organization (WHO) estimates that approximately 360 million people across the globe have a disabling form of hearing loss, while another 39 million people are blind.

For some of these people, neuroprosthetics such as cochlear implants and bionic eyes have given them back their senses and, in some cases, they have enabled them to hear or see for the very first time.

Here, we review five of the most significant developments in neuroprosthetic technology, looking at how they work, why they are helpful, and how some of them will develop in the future.

Ear implant

Probably the “oldest” neuroprosthetic device out there, cochlear implants (or ear implants) have been around for a few decades and are the epitome of successful neuroprosthetics.

The U.S. Food and Drug Administration (FDA) approved cochlear implants as early as 1980, and by 2012, almost 60,000 U.S. individuals had had the implant. Worldwide, more than 320,000 people have had the device implanted.

A cochlear implant works by bypassing the damaged parts of the ear and stimulating the auditory nerve with signals obtained using electrodes. The signals relayed through the auditory nerve to the brain are perceived as sounds, although hearing through an ear implant is quite different from regular hearing.
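As a rough illustration of the signal path just described, the sketch below splits incoming audio into frequency bands and extracts one envelope per band; a real speech processor would then compress these envelopes and deliver them as current pulses on the corresponding electrodes in the cochlea, stimulating the auditory nerve. The sampling rate, electrode count, and band edges are assumptions for illustration, not the specification of any particular implant.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000          # assumed audio sampling rate (Hz)
N_ELECTRODES = 8    # illustrative channel count
EDGES = np.logspace(np.log10(250), np.log10(6000), N_ELECTRODES + 1)

def electrode_levels(audio):
    """Return one band-envelope level per electrode for a mono audio frame."""
    levels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
        band = sosfiltfilt(sos, audio)            # isolate this frequency band
        envelope = np.abs(hilbert(band))          # slowly varying band energy
        levels.append(envelope.mean())            # crude per-band level
    return np.array(levels)                       # one value per electrode
```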

Although imperfect, cochlear implants allow users to understand speech in person or over the phone, and the media abound with emotional accounts of people who were able to hear themselves for the first time using this sensory neuroprosthetic device.

Here, you can watch a video of a 29-year-old woman who hears herself for the first time using a cochlear implant:

Eye implant

The first artificial retina – called the Argus II – consists of an array of electrodes implanted in the eye and was approved by the FDA in February 2013. In much the same way as the cochlear implant, this neuroprosthetic bypasses the damaged part of the retina and transmits signals, captured by an attached camera, to the brain.

This is done by transforming the images into light and dark pixels that are turned into electrical signals. The electrical signals are then sent to the electrodes, which, in turn, pass the signal along the optic nerve to the brain.
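The image-to-pixel step can be illustrated with a short, hypothetical sketch: a grayscale camera frame is averaged down to one brightness value per electrode, and each value is scaled to a stimulation level. The grid size, units, and scaling below are assumptions for illustration only, not the device's actual processing.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 6, 10   # assumed electrode grid, for illustration only

def image_to_stimulation(gray_frame, max_level=1.0):
    """Average a grayscale frame (H x W, values 0-255) down to one brightness
    value per electrode and scale it to a stimulation level in [0, max_level]."""
    h, w = gray_frame.shape
    row_blocks = np.array_split(np.arange(h), GRID_ROWS)
    col_blocks = np.array_split(np.arange(w), GRID_COLS)
    levels = np.array([[gray_frame[np.ix_(r, c)].mean() for c in col_blocks]
                       for r in row_blocks])
    return levels / 255.0 * max_level   # brighter patch -> stronger pulse
```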

While Argus II does not restore vision completely, it does enable patients with retinitis pigmentosa – a condition that damages the eye’s photoreceptors – to distinguish contours and shapes, which, many patients report, makes a significant difference in their lives.

Retinitis pigmentosa is a neurodegenerative disease that affects around 100,000 people in the U.S. Since its approval, more than 200 patients with retinitis pigmentosa have had the Argus II implant, and the company that designed it is currently working to make color detection possible as well as improve the resolution of the device.

Neuroprosthetics for people with SCI

Almost 350,000 people in the U.S. are estimated to live with SCI, and 45 percent of those who had an SCI since 2010 are considered tetraplegic – that is, paralyzed from the neck down.

At Medical News Today, we recently reported on a groundbreaking one-patient experiment that enabled a man with quadriplegia to move his arms using the sheer power of his thoughts.

Bill Kochevar had recording electrodes surgically implanted in his brain. After the BCI was trained to “learn” the brain activity that matched the movements he thought about, this activity was turned into electrical pulses that were transmitted to stimulation electrodes implanted in his arm and hand muscles.

In much the same way that the cochlear and visual implants bypass the damaged area, so too does this BCI bypass the “short circuit” that SCI creates between the brain and the patient’s muscles.

With the help of this neuroprosthetic, the patient was able to successfully drink and feed himself. “It was amazing,” Kochevar says, “because I thought about moving my arm and it did.” Kochevar was the first patient in the world to test the neuroprosthetic device, which is currently only available for research purposes.

You can learn more about this neuroprosthetic from the video below:

However, this is not where SCI neuroprosthetics stop. The Courtine Lab – which is led by neuroscientist Gregoire Courtine in Lausanne, Switzerland – is tirelessly working to help injured people regain control of their legs. Its research with rats has enabled paralyzed rodents to walk again, using electrical signals to stimulate nerves in the severed spinal cord.

“We believe that this technology could one day significantly improve the quality of life of people confronted with neurological disorders,” says Silvestro Micera, co-author of the experiment and neuroengineer at Courtine Labs.

Recently, Prof. Courtine has also led an international team of researchers to successfully create voluntary leg movement in rhesus monkeys. This was the first time that a neuroprosthetic was used to enable walking in nonhuman primates.

However, “it may take several years before all the components of this intervention can be tested in people,” Prof. Courtine says.

An arm that feels

Silvestro Micera has also led other projects on neuroprosthetics, among which is the arm that “feels.” In 2014, MNT reported on the first artificial hand enhanced with sensors.

Researchers measured the tension in the tendons of the artificial hand that control grasping movements and turned it into electric current. In turn, using an algorithm, this was translated into impulses that were then sent to the nerves in the arm, producing a sense of touch.

Since then, the prosthetic arm that “feels” has been improved even more. Researchers from the University of Pittsburgh and the University of Pittsburgh Medical Center, both in Pennsylvania, tested the BCI on a single patient with quadriplegia: Nathan Copeland.

The scientists implanted a sheath of microelectrodes below the surface of Copeland’s brain – namely, in his primary somatosensory cortex – and connected them to a prosthetic arm that was fitted with sensors. This enabled the patient to feel sensations of touch, which felt, to him, as though they belonged to his own paralyzed hand.

While blindfolded, Copeland was able to identify which finger on his prosthetic arm was being touched. The sensations he perceived varied in intensity and were felt as differing in pressure. 

Neuroprosthetics for neurons?

We have seen that brain-controlled prosthetics can restore patients’ sense of touch, hearing, sight, and movement, but could we build prosthetics for the brain itself?

Researchers from the Australian National University (ANU) in Canberra managed to artificially grow brain cells and create functional brain circuits, paving the way for neuroprosthetics for the brain.

By applying nanowire geometry to a semiconductor wafer, Dr. Vini Gautam, of ANU’s Research School of Engineering, and colleagues came up with a scaffolding that allows brain cells to grow and connect synaptically.

Project group leader Dr. Vincent Daria, from the John Curtin School of Medical Research in Australia, explains the success of their research:

“We were able to make predictive connections between the neurons and demonstrated them to be functional with neurons firing synchronously. This work could open up a new research model that builds up a stronger connection between materials nanotechnology with neuroscience.”

Neuroprosthetics for the brain might one day help patients who have experienced a stroke or who live with neurodegenerative diseases to recover neurologically.

Every year in the U.S., almost 800,000 people have a stroke, and more than 130,000 people die from one. Neurodegenerative diseases are also widespread, with an estimated 5 million U.S. adults living with Alzheimer’s disease, 1 million with Parkinson’s, and 400,000 with multiple sclerosis.

Learn about Facebook’s newest endeavour: the development of BCIs.

Source: Neuroprosthetics: Recovering from injury using the power of your mind – Medical News Today


[BLOG POST] Facebook’s next frontier: Brain-computer interfaces

Facebook’s tech development team are currently working on a way for users to type with their minds, without the need for an invasive implant. Updating your status with thoughts alone may one day become a reality.
[Brain plugged in with wires]

Brain-computer interfaces are entering a brave new era.

The social media company’s 60-strong team hopes to achieve this miraculous feat using optical imaging that scans the brain hundreds of times per second, detecting our silent internal dialogues and translating them into text on a screen.

They hope that, eventually, the technology will allow users to type at 100 words per minute – five times faster than typing on a phone.

If this innovation comes to pass, it will be fascinating for Facebook’s following. There will, however, be deeper and more profound ramifications for people who do not have full use of their limbs.

Brain-computer interfaces (BCIs) that allow users to type with their minds are already available, but they are either slow or require a sensor to be implanted in the brain. This procedure is expensive, risky, and not likely to be adopted by the population at large.

If so-called brain typing could be perfected without the need for intrusive implants, it would be a genuine game-changer with a whole host of applications.

BCIs, then and now

The first steps toward developing a BCI came with Hans Berger’s discovery that the brain was electrically active. Each time an individual nerve cell sends a message, it is accompanied by a tiny electrical signal that nips from neuron to neuron.

This electrical signal can be picked up outside of the skull using an electroencephalogram (EEG). Berger was the first person to record human brain activity using an EEG, having achieved this feat almost a century ago, in 1924.

The term “brain-computer interface” was coined in the 1970s, in papers written by scientists from the University of California-Los Angeles. The research was led by Jacques Vidal, who is now considered the grandfather of BCI.

“Can these observable electrical brain signals be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?”

Jacques Vidal, “Toward direct brain-computer communication,” 1973

Of course, animal studies were the first port of call when investigating BCIs. Research in the late 1960s and early 1970s proved that monkeys could learn to control the firing rates of single neurons or groups of neurons in the primary motor cortex if they were given a reward. Similarly, using operant conditioning, dogs could be trained to control the rhythms in their hippocampus.

These early studies showed that the electrical output of the brain could be measured and manipulated. Over the past two decades, there has been a surge of interest in BCIs. There is still a long way to go, but there have been notable successes.

In modern BCIs, the cream of the experimental crop is a recently designed system from Stanford University. Two aspirin-sized implants, inserted into an individual’s brain, chart the activity of the motor cortex – a region that controls muscles. Algorithms then interpret this activity and convert it into cursor movements on a screen.

In a recent study, one participant was able to type 39 characters (around eight words) per minute. “This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” says Krishna Shenoy, one of the senior authors.

Invasive, semi-invasive, and noninvasive

Broadly speaking, modern BCIs are split into three groups. These are:

  • Invasive BCIs: Implants are placed directly into the brain. Software is trained to interpret a subject’s brain activity. For instance, a computer cursor can be controlled by a participant’s thoughts of “left,” “right,” “up,” and “down.” With enough practice, a user can draw shapes on a screen, control a television, and open computer programs.
  • Semi-invasive BCIs: This type of device is implanted inside the skull but does not sit within the gray matter itself. Although less invasive than an invasive BCI, implants left under the skull for long periods of time tend to form scar tissue in the gray matter, which, eventually, blocks the signals and renders them unusable.
  • Noninvasive BCIs: These work on the same principle, but do not involve surgical implantation and have, therefore, received the most research.

Of the noninvasive BCIs, the most common are EEG-based BCIs, which read the electrical activity of the brain from outside the body. However, because the skull scatters the electrical signals substantially, making these systems accurate is a real challenge. Added to this, they often require a fair amount of calibration before each use. That being said, there have been some significant steps forward over recent years.

For instance, some researchers have recently investigated noninvasive BCIs as a way to help individuals with amyotrophic lateral sclerosis and brain stem stroke. These patients can become “locked in,” meaning that they lose the use of all voluntary muscles and, as such, have no way to communicate, despite being cognitively “normal.”

Their studies led them to conclude that “BCI use may be of benefit to those with locked-in syndrome.”

How do noninvasive BCIs work?

BCI technology is based on detecting electrical activity emanating from the brain and then converting it into an external action. However, through the cacophony of neural noise, which signals should be paid attention to?

There are a number of signal types that noninvasive BCIs use, the most popular of which is the P300 event-related potential.

An event-related potential is a measurable brain response to a particular stimulus – specifically, the P300 is produced during decision-making and it is usually elicited experimentally using the so-called oddball paradigm.

[EEG cap on woman]

BCIs are based on converting brain activity into external action.

In the oddball paradigm, participants are presented with a range of symbols, flashed in front of their eyes one by one.

They are asked to look out for a specific symbol that occurs only rarely within the selection. When the target symbol is noticed by the participant, it triggers a P300 wave.

Over many trials, it is possible to distinguish the P300 from other electrical signals; it is easiest to observe emanating from the parietal lobe, a part of the brain responsible, in part, for integrating sensory information.

Once an algorithm is trained to recognize an individual’s P300, it can, from then on, understand what they are looking for. For instance, if the user is typing a word and wishes to start with the letter “a,” then when that letter appears on the screen, a P300 will be generated by the brain, the software will recognize it, and the letter “a” will be typed on the screen.
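A hypothetical sketch of this selection step is shown below: the EEG epochs following each candidate symbol's flashes are averaged (averaging suppresses background activity) and scored by a trained P300 detector, and the highest-scoring symbol is typed. The function names and data layout are assumptions for illustration, not any specific speller's implementation.

```python
import numpy as np

def select_symbol(epochs_by_symbol, p300_score):
    """Pick the symbol the user attended to during one round of flashes.

    epochs_by_symbol: dict mapping each candidate symbol to an array of EEG
        epochs with shape (n_flashes, n_channels, n_samples), each recorded
        just after that symbol flashed.
    p300_score: trained function returning a higher value when an averaged
        epoch resembles the user's P300 response (e.g. a template projection).
    """
    scores = {symbol: p300_score(epochs.mean(axis=0))   # averaging boosts SNR
              for symbol, epochs in epochs_by_symbol.items()}
    return max(scores, key=scores.get)                  # most P300-like symbol
```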

Compared with other similar methods, P300s are relatively fast, require little training (hours rather than days), and are effective for most users.

However, there are still shortfalls. Because the system needs to pick up a user’s response to individual characters, it has to run through a list before it can find the right one. This means that there is a limit to how fast one can type.

There are ways to minimize this wait, but the time taken is still longer than researchers (and users) would like.

How will Facebook achieve 100 words per minute?

To make a system that can type tens of words per minute, a new step in the process will be needed – in fact, an entirely new approach will be necessary, and that is what Facebook is working on.

Medical News Today spoke with Dr. Michael M. Merzenich, chief scientific officer of Posit Science and co-inventor of the cochlear implant. We asked how Facebook’s researchers will bypass this speed issue, to which he responded, “Facebook has discussed using near-infrared (NIR) imaging technology.” With this technology, each word will be picked out in one go, rather than being spelled out letter by letter.

[Facebook thumbs up like symbol]

There are challenges ahead for the social media giant.

Of course, this comes with its own difficulties. Dr. Merzenich added:

“While it’s very easy to type ‘lion’ versus ‘tiger’ and be clear, it’s going to be quite a bit harder to have a noninvasive brain imaging technology detect minute differences in brain activity that may correspond to small differences in a category like that.”

“Thinking of the word ‘lion’ and the word ‘tiger’ activates extremely similar and overlapping networks of brain activity for most people.”

There is clearly a lot of work yet to do, but Dr. Merzenich is confident that it will be achieved eventually. He added:

“The best hope is to use modern AI [artificial intelligence] techniques – deep learning techniques – that will gradually learn to identify the patterns of brain activity for an individual person as meaning specific things.”

“In this way, I think it’s likely that people will individually train their brain-reading systems, and those systems will be individually attuned to them and not immediately transferable to another person. In fact, people using these systems will likely train their own brains to optimally produce readable signals to these systems. In this way, these systems represent another application of brain plasticity – the ability of the brain to change itself through training.”

This may all be a long way off, but Facebook are committed; they are combining their research power with a number of universities across the United States. The future looks bright for BCIs and, if they do achieve 100 words per minute, it will be a great leap for millions of people who are unable to communicate with ease.

Source: Facebook’s next frontier: Brain-computer interfaces – Medical News Today


[ARTICLE] Classification of EEG signals for wrist and grip movements using echo state network – Full Text

Abstract

Brain-Computer Interface (BCI) is a multi-disciplinary emerging technology used in medical diagnosis and rehabilitation. In this paper, different techniques of classification and feature extraction are applied to analyse and differentiate wrist and grip flexion and extension, for synchronized stimulation using sensory feedback in the neuro-rehabilitation of paralyzed persons. We have used an optimized version of the Echo State Network (ESN) to identify as well as differentiate the wrist and grip movements. The classification accuracy obtained is greater than 96% for single trials and 93% for discriminating four movements, both real and imagined.

Introduction

The popularity of analysing brain rhythms and their applications in healthcare is evident in rehabilitation engineering. Motor disabilities resulting from stroke require a rehabilitation process to regain motor learning and retrieval. The classification of EEG signals for wrist and grip movements, obtained using a low-cost Brain-Computer Interface (BCI), can be used to support this recovery. Using the Movement-Related Cortical Potential (MRCP) associated with imagined movement, as detected by the BCI, an external device can be synchronized to provide sensory feedback through electrical stimulation [1]. The timely detection and classification of movement, and the real-time triggering of the electrical stimulation as a function of brain activity, are desirable for neuro-rehabilitation [2,3]. Thus, BCIs play an active role in helping paralyzed persons who are unable to move a hand or leg [4].

Using a BCI system, EEG data are recorded and processed. The acquired data should contain the least possible environmental noise and artifacts for effective classification [5]. EEG signals acquired invasively exhibit the least noise and the highest amplitudes; however, in most applications a non-invasive method is preferred. The human brain contains a number of neuronal networks, and EEG provides a measurement of brain activity as voltage fluctuations recorded as a result of ionic currents within the neurons of the brain [6]. Many people have motor disabilities due to disease or accidental damage of the nervous system. There are different approaches to this problem, e.g., neuro-prosthetics (neural prosthetics) and BCI [3,7-9]. In neuro-prosthetics, the solution takes the form of connecting the nervous system to a device, whereas a BCI connects the nervous system to a computer [2]. A BCI establishes communication between brain and computer via EEG, ECoG, or MEG signals, which carry information about body activity [10]. Moreover, in addition to neuro-rehabilitation, assistive robotics and brain-controlled mobile robots also utilize similar technologies, as reported recently [11,12].

The processing of these low-amplitude, noisy EEG signals requires special care during data acquisition and filtering. After recording, the EEG signals are processed via filtering, feature extraction, and classification. A simple first- or second-order Chebyshev or Butterworth filter can be used as a low-pass, high-pass, or notch filter. Features can be extracted using time-domain, frequency-domain, time-frequency, or time-space-frequency analysis [13,14]. The extracted features are then classified using techniques such as LDA, QDA, SVM, or KNN [15,16].
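As a concrete illustration of the filtering step mentioned above, the sketch below applies a second-order Butterworth band-pass plus a 50 Hz notch to a multi-channel EEG array using SciPy. The sampling rate and cut-off frequencies are assumptions for illustration, not the values used in the paper.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 128  # assumed EEG sampling rate (Hz)

def preprocess(eeg, fs=FS):
    """Band-pass (1-60 Hz, 2nd-order Butterworth) then notch out 50 Hz mains.

    eeg is a (channels x samples) array; filtering runs along the samples axis.
    """
    b_bp, a_bp = butter(2, [1 / (fs / 2), 60 / (fs / 2)], btype="band")
    filtered = filtfilt(b_bp, a_bp, eeg, axis=-1)
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)      # mains interference
    return filtfilt(b_n, a_n, filtered, axis=-1)
```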

We aim to classify wrist and grip movements using EEG signals. This research will be helpful for the rehabilitation of persons with wrist or grip disabilities. Our work is based on offline datasets, in which EEG data were collected multiple times from 4 subjects. We present the following major contributions in this paper: first, the differentiation between wrist and grip movements is performed using imagined as well as real movements; second, we have tested multiple algorithms for feature extraction and classification, and used an ESN with optimized parameters for the best results. This paper is organized as follows: Section 2 describes a low-cost BCI setup for EEG, Section 3 deals with the data acquisition protocol, Section 4 explains the echo state network and its optimization, while Section 5 discusses the results obtained in this research. Section 6 concludes the paper.
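Since this excerpt ends before the network itself is described, the sketch below shows a generic echo state network in NumPy: a fixed random reservoir driven by the input sequence, with only a ridge-regression readout trained on the final reservoir state of each trial. Reservoir size, spectral radius, leak rate, and the use of the final state as the trial summary are illustrative choices, not the optimized configuration reported in the paper.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN classifier: fixed random reservoir + ridge-regression readout."""

    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 leak_rate=0.3, ridge=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state property
        self.W, self.leak, self.ridge = W, leak_rate, ridge
        self.W_out = None

    def _final_state(self, sequence):
        """Run one (time x features) trial through the leaky reservoir."""
        x = np.zeros(self.W.shape[0])
        for u in sequence:
            x = (1 - self.leak) * x + self.leak * np.tanh(self.W_in @ u + self.W @ x)
        return x

    def fit(self, sequences, labels):
        X = np.stack([self._final_state(s) for s in sequences])
        Y = np.eye(int(np.max(labels)) + 1)[labels]                  # one-hot targets
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y)                     # ridge regression
        return self

    def predict(self, sequences):
        X = np.stack([self._final_state(s) for s in sequences])
        return np.argmax(X @ self.W_out, axis=1)
```

In use, each trial (after preprocessing such as the filtering sketched above) would be passed as a (time x channels) array in `sequences`, with `labels` holding the integer movement classes.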

Brain Computer Interface Design

Brain-Computer Interface (BCI) design requires a multi-disciplinary approach for engineers to observe EEG data. Today, a number of sensing platforms are available that provide a low-cost solution for high-resolution data acquisition. Developing a BCI requires a two-step approach, namely acquisition and real-time processing; in offline processing, only the acquisition step is required. The data are acquired via a wireless link from pick-off electrodes arranged on the scalp of the subjects [17]. One such available system is the Emotiv headset, which is easy to install and use. An Emotiv headset with 14 electrodes and 2 reference electrodes (CMS and DRL) is used to collect data, as shown in Figure 1. All electrodes are measured as potentials with respect to the reference electrodes. The Emotiv headset is a non-invasive device for collecting EEG data, as preferred in most diagnosis and rehabilitation applications [18].


Figure 1. Emotiv EEG acquisition using P-300 standard.

It is important to understand the EEG signal format and frequency content for pre-processing and offline classification. Table 1 shows some of the indications of physical movements and mental actions associated with different brain rhythms in somewhat overlapping frequency bands. Notably, motor imagery tasks are associated with the μ-rhythm in the 8-13 Hz frequency band [19].

Rhythm | Frequency (Hz) | Indication               | Diagnosis
Δ      | 0-4            | Deep sleep stage          | Hypoglycaemia, Epilepsy
θ      | 4-7            | Initial sleep stage       | –
α      | 8-12           | Closure of eyes           | Migraine, Dementia
β      | 12-30          | Busy/Anxious thinking     | Encephalopathies, Tonic seizures
γ      | 30-100         | Cognitive/motor function  | –
μ      | 8-13           | Motor imagery tasks       | Autism Spectrum Disorder

Table 1. Brain frequency bands and their significance.
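Because the μ-rhythm in Table 1 is the band most relevant to motor imagery, a common feature for this kind of classification is the per-channel power in 8-13 Hz. The sketch below computes it with a Welch periodogram; the sampling rate is an assumed value, and this is a generic feature, not the specific feature set used in the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # assumed EEG sampling rate (Hz)

def mu_band_power(eeg, fs=FS, band=(8.0, 13.0)):
    """Log of the average mu-band power per channel; eeg is (channels x samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(2 * fs, eeg.shape[-1]), axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., in_band].mean(axis=-1) + 1e-12)
```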


Continue —> Classification of EEG signals for wrist and grip movements using echo state network


[Abstract] Brain–machine interfaces for rehabilitation of poststroke hemiplegia

Abstract

Noninvasive brain–machine interfaces (BMIs) are typically associated with neuroprosthetic applications or communication aids developed to assist in daily life after loss of motor function, eg, in severe paralysis.

However, BMI technology has recently been found to be a powerful tool to promote neural plasticity facilitating motor recovery after brain damage, eg, due to stroke or trauma.

In such BMI paradigms, motor cortical output and input are simultaneously activated, for instance by translating motor cortical activity associated with the attempt to move the paralyzed fingers into actual exoskeleton-driven finger movements, resulting in contingent visual and somatosensory feedback.

Here, we describe the rationale and basic principles underlying such BMI motor rehabilitation paradigms and review recent studies that provide new insights into BMI-related neural plasticity and reorganization.

Current challenges in clinical implementation and the broader use of BMI technology in stroke neurorehabilitation are discussed.

 

Source: Brain–machine interfaces for rehabilitation of poststroke hemiplegia


[ARTICLE] The Cybathlon promotes the development of assistive technology for people with physical disabilities – Full Text

Abstract

Background

The Cybathlon is a new kind of championship, where people with physical disabilities compete against each other at tasks of daily life, with the aid of advanced assistive devices including robotic technologies. The first championship will take place at the Swiss Arena Kloten, Zurich, on 8 October 2016.

The idea

Six disciplines are part of the competition comprising races with powered leg prostheses, powered arm prostheses, functional electrical stimulation driven bikes, powered wheelchairs, powered exoskeletons and brain-computer interfaces. This commentary describes the six disciplines and explains the current technological deficiencies that have to be addressed by the competing teams. These deficiencies at present often lead to disappointment or even rejection of some of the related technologies in daily applications.

Conclusion

The Cybathlon aims to promote the development of useful technologies that facilitate the lives of people with disabilities. In the long run, the developed devices should become affordable and functional for all relevant activities in daily life.

Keywords

Competition, Championship, Prostheses, Exoskeletons, Functional electrical stimulation, Wheelchairs, Brain-computer interfaces

Background

Millions of people worldwide rely on orthoses, prostheses, wheelchairs, and other assistive devices to improve their quality of life. More than 1.6 million people in the US live with limb amputations [1], and the World Health Organization estimates the number of wheelchair users at about 65 million worldwide [2]. Unfortunately, current assistive technology does not address their needs in an ideal fashion. For instance, wheelchairs cannot climb stairs, arm prostheses do not enable versatile hand functions, and the power supplies of many orthotic and prosthetic devices are limited. There is a need to push the development of assistive devices further by pooling the efforts of engineers and clinicians to develop improved technologies, together with the feedback and experience of the users of those technologies.

The Cybathlon is a new kind of championship with the aim of promoting the development of useful technologies. In contrast with the Paralympics, where parathletes aim to achieve maximum performance, at the Cybathlon, people with physical disabilities compete against each other at tasks of daily life, with the aid of advanced assistive devices including robotic technologies. Most current assistive devices lack satisfactory function; people with disabilities are often disappointed and thus do not use or accept the technology. Rejection can be due to a lack of communication between developers, people with disabilities, therapists, and clinicians, which leads to a disregard of user needs and requirements. Other reasons could be that the health status, level of lesion, or financial situation of the potential user is such that she or he is unable to use the available technologies. Furthermore, barriers in public environments often make the use of assistive technologies very cumbersome or even impossible.

Six disciplines are part of the competition, addressing people with either limb paralysis or limb amputations. The six disciplines comprise races with powered leg prostheses, powered arm prostheses, functional electrical stimulation (FES) driven bikes, powered wheelchairs and powered exoskeletons (Fig. 1). The sixth discipline is a racing game with virtual avatars that are controlled by brain-computer interfaces (BCI). The functional and assistive devices used can be prototypes developed by research labs or companies, or commercially available products. The competitors are called pilots, as they have to control a device that enhances their mobility. The teams each consist of a pilot together with scientists and technology providers, making the Cybathlon also a competition between companies and research laboratories. As a result there are two awards for each winning team in each discipline: a medal for the person who is controlling the device and a cup for the provider of the device (i.e. the company or the lab).

Fig. 1 Arena with four parallel race tracks designed for the exoskeleton competition. The pilots start at the left and have to overcome six obstacles of increasing difficulty.

Continue —> The Cybathlon promotes the development of assistive technology for people with physical disabilities | Journal of NeuroEngineering and Rehabilitation | Full Text

Download PDF

Download ePub


[Abstract] Review of functional near-infrared spectroscopy in neurorehabilitation – Neurophotonics – SPIE

Abstract

We provide a brief overview of the research and clinical applications of near-infrared spectroscopy (NIRS) in the neurorehabilitation field. NIRS has several potential advantages and shortcomings as a neuroimaging tool and is suitable for research application in the rehabilitation field.

As one of the main applications of NIRS, we discuss its use as a monitoring tool, including investigating the neural mechanisms of functional recovery after brain damage and the neural mechanisms controlling bipedal locomotion and postural balance in humans. Beyond monitoring, advances in signal processing techniques also allow NIRS to be used as a therapeutic tool in this field.

With a brief summary of recent studies investigating the clinical application of NIRS using motor imagery tasks, we discuss the possible clinical use of NIRS in brain–computer interfaces and neurofeedback.


Source: Review of functional near-infrared spectroscopy in neurorehabilitation | Neurophotonics | SPIE


[Abstract] A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke.

ABSTRACT

Stroke is a leading cause of acquired disability resulting in distal upper extremity functional motor impairment. Stroke mortality rates continue to decline with advances in healthcare and medical technology. This has led to an increased demand for advanced, personalized rehabilitation.
Survivors often experience some level of spontaneous recovery shortly after their stroke event, yet reach a functional plateau after which there is exiguous motor recovery. Nevertheless, studies have demonstrated the potential for recovery beyond this plateau.
Non-traditional neurorehabilitation techniques, such as those incorporating the brain-computer interface (BCI), are being investigated for rehabilitation. BCIs may offer a gateway to the brain’s plasticity and revolutionize how humans interact with the world.
Non-invasive BCIs work by closing the proprioceptive feedback loop with real-time, multi-sensory feedback allowing for volitional modulation of brain signals to assist hand function. BCI technology potentially promotes neuroplasticity and Hebbian-based motor recovery by rewarding cortical activity associated with sensory-motor rhythms through use with a variety of self-guided and assistive modalities.


Source: A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke – Expert Review of Medical Devices – Volume 13, Issue 5

