Posts Tagged Brain Computer Interface

[Abstract] Combined rTMS and virtual reality brain-computer interface training for motor recovery after stroke

Abstract

Objective. Combining repetitive transcranial magnetic stimulation (rTMS) with brain-computer interface (BCI) training can address motor impairment after stroke by down-regulating exaggerated inhibition from the contralesional hemisphere and encouraging ipsilesional activation. The objective was to evaluate the efficacy of combined rTMS+BCI, compared to sham rTMS+BCI, on motor recovery after stroke in subjects with lasting motor paresis. Approach. Three stroke subjects approximately one year post-stroke participated in three weeks of combined rTMS (real or sham) and BCI, followed by three weeks of BCI alone. Behavioral and electrophysiological differences were evaluated at baseline, after three weeks, and after six weeks of treatment. Main Results. Motor improvements were observed in both real rTMS+BCI and sham groups, but only the former showed significant alterations in inter-hemispheric inhibition in the desired direction and increased relative ipsilesional cortical activation from fMRI. In addition, significant improvements in BCI performance over time and adequate control of the virtual reality BCI paradigm were observed only in the former group. Significance. When combined, the results highlight the feasibility and efficacy of combined rTMS+BCI for motor recovery, demonstrated by increased ipsilesional motor activity and improvements in behavioral function for the real rTMS+BCI condition in particular. Our findings also demonstrate the utility of BCI training alone, as demonstrated by behavioral improvements for the sham rTMS+BCI condition. This study is the first to evaluate combined rTMS and BCI training for motor rehabilitation and provides a foundation for continued work to evaluate the potential of both rTMS and virtual reality BCI training for motor recovery after stroke.

Source: Combined rTMS and virtual reality brain-computer interface training for motor recovery after stroke – IOPscience



[BLOG POST] Brain Computer Interfaces (That Translate Human Thought To Direct Action): Their Evolution And Future

[A graphic depicting two brains connected to each other: two human outlines face each other, with the connection between their brains shown via dots.]

In the last few years, we have read quite a bit about how technology has allowed our brain to control devices or objects around us without the use of limbs. (If you haven’t, you can read about some examples here, here, and here). Futurism.com, a great website that posts about how human potential can be maximized, has this infographic that explains the basics of Brain Computer Interfaces – the use of technology to translate human thoughts into machine commands. We are seeing the use of BCI more and more with prosthetic limbs, but where does it end? Will we be able to upload our memories straight from our brain to the cloud in the future? The sky is the limit when it comes to innovation through technology.

Read this infographic to learn the types of Brain-Computer Interfaces, their origin, what they have in store for us in the future, and how they can bridge the gap between the disabled and the able-bodied. A text version of the infographic is right below the image.

[Infographic: The Evolution of Brain Computer Interfaces. A text version is available right below the image.]

 

Imagine a world where machines can be controlled by thought alone. This is the promise of brain-computer interfaces (BCIs) – using computers to decode and translate human thoughts into machine commands. Here’s a look at the evolution of BCI technology, its current state, and future prospects.

Invasive: Signal-transmitting devices are implanted directly in the brain’s gray matter. This method produces the highest quality signals, but scar tissue buildup can cause signal degradation.

Partially Invasive: Devices are implanted within the skull but not within the brain tissue. They produce higher quality signals than noninvasive techniques by circumventing the skull’s dampening effect on transmissions, and carry less risk of scar tissue buildup.

Noninvasive: Involves simple wearables that register the EM transmissions of neurons, with no expensive or dangerous surgery needed. This technique is certainly easier, but suffers from poor resolution caused by the skull’s interference with signals.

A Short History of BCI

1924: German psychiatrist Hans Berger records human neuroelectrical activity for the first time using electroencephalography (EEG).

1970: The Defense Advanced Research Projects Agency (DARPA) begins to explore the potential BCI applications of EEG technology.

1998: First brain implant produces high quality signals.

2005: A monkey’s brain is successfully used to control a robotic arm.

2014: Direct brain-to-brain communication achieved by transmitting EEG signals over the internet.

Types of Noninvasive BCI

  • Eye movement and pupil size oscillation
  • Electroencephalography
  • Magnetic resonance imaging and magnetoencephalography

Applications of BCI

  • Direct mental control of prosthetic limbs.
  • Neurogaming – interaction within video game and virtual reality environments without the need for clumsy interfaces.
  • Synthetic telepathy – the establishment of a direct mental connection or communications pathway between minds.
  • The use of BCI in tele-robotics will allow human operators to directly “link” with robotic machines, granting us a new way to explore alien worlds, handle dangerous materials, and perform remote surgery.
  • A wealth of new possibilities for interfacing with computers opens up, including linking to the internet, uploading memories to the cloud, and more. This will effectively erase the divide between the disabled and the able-bodied.

Sources:

National Academy of Engineering, Techradar, Brain Vision UK, PLOS ONE

This infographic was originally posted on futurism.com.

Source: Brain Computer Interfaces (That Translate Human Thought To Direct Action): Their Evolution And Future – Assistive Technology Blog


[ARTICLE] Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial – Full Text

Repeated use of brain-computer interfaces (BCIs) providing contingent sensory feedback of brain activity was recently proposed as a rehabilitation approach to restore motor function after stroke or spinal cord lesions. However, only a few clinical studies have investigated the feasibility and effectiveness of such an approach. Here we report on a placebo-controlled, multicenter clinical trial that investigated whether stroke survivors with severe upper limb (UL) paralysis benefit from 10 BCI training sessions, each lasting up to 40 min. A total of 74 patients participated: median time since stroke was 8 months, 25th and 75th percentiles [3.0; 13.0]; median severity of UL paralysis was 4.5 points [0.0; 30.0] as measured by the Action Research Arm Test (ARAT) and 19.5 points [11.0; 40.0] as measured by the Fugl-Meyer Motor Assessment (FMMA). Patients in the BCI group (n = 55) performed motor imagery of opening their affected hand. Motor imagery-related brain electroencephalographic activity was translated into contingent hand exoskeleton-driven opening movements of the affected hand. In a control group (n = 19), hand exoskeleton-driven opening movements of the affected hand were independent of brain electroencephalographic activity. Evaluation of the UL clinical assessments indicated that both groups improved, but only the BCI group showed an improvement in the ARAT’s grasp score from 0.0 [0.0; 14.0] to 3.0 [0.0; 15.0] points (p < 0.01) and pinch score from 0.0 [0.0; 7.0] to 1.0 [0.0; 12.0] points (p < 0.01). Upon training completion, 21.8% and 36.4% of the patients in the BCI group improved their ARAT and FMMA scores, respectively. The corresponding numbers for the control group were 5.1% (ARAT) and 15.8% (FMMA). These results suggest that adding BCI control to exoskeleton-assisted physical therapy can improve post-stroke rehabilitation outcomes. Both the maximum and mean percentages of successfully decoded imagery-related EEG activity were higher than chance level. A correlation between classification accuracy and improvement in upper extremity function was found. An improvement of motor function was found for patients with different durations, severities, and locations of stroke.

Introduction

Motor imagery (Page et al., 2001), or mental practice, has attracted considerable interest as a potential neurorehabilitation technique for improving motor recovery following stroke (Jackson et al., 2001). According to the Guidelines for adult stroke rehabilitation and recovery (Winstein et al., 2016), mental practice may prove beneficial as an adjunct to upper extremity rehabilitation services. Several studies suggest that motor imagery can trigger neuroplasticity in ipsilesional motor cortical areas despite severe paralysis after stroke (Grosse-Wentrup et al., 2011; Shih et al., 2012; Mokienko et al., 2013b; Soekadar et al., 2015).

The effect of motor imagery on motor function and neuroplasticity has been demonstrated in numerous neurophysiological studies in healthy subjects. Motor imagery has been shown to activate the primary motor cortex (M1) and brain structures involved in planning and control of voluntary movements (Shih et al., 2012; Mokienko et al., 2013a,b; Frolov et al., 2014). For example, it was shown that motor imagery of fist clenching reduces the excitation threshold of motor evoked potentials (MEP) elicited by transcranial magnetic stimulation (TMS) delivered to M1 (Mokienko et al., 2013b).

As motor imagery results in specific modulations of brain electroencephalographic (EEG) signals, e.g., sensorimotor rhythms (SMR) (Pfurtscheller and Aranibar, 1979), it can be used to voluntarily control an external device, e.g., a robot or exoskeleton, using a brain-computer interface (BCI) (Nicolas-Alonso and Gomez-Gil, 2012). Such a system, allowing voluntary control of an exoskeleton moving a paralyzed limb, can be used as an assistive device restoring lost function (Maciejasz et al., 2014). Besides receiving visual feedback, the user receives haptic and kinesthetic feedback that is contingent upon the imagination of a specific movement.
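
To make the control principle concrete, the sketch below shows one common way an SMR modulation can be turned into a device command: event-related desynchronization (ERD) of the mu rhythm, where imagining a movement suppresses 8-13 Hz power relative to a resting baseline. The sampling rate, threshold, and function names are illustrative assumptions, not details taken from the studies cited above.

```python
# Minimal ERD-trigger sketch (illustrative values throughout): motor imagery
# suppresses mu-band (8-13 Hz) power relative to a resting baseline, and a
# sustained drop gates a command to the exoskeleton.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def mu_band_power(eeg_window):
    """Mean power of the 8-13 Hz mu band in a 1-D EEG window."""
    b, a = butter(4, [8 / (FS / 2), 13 / (FS / 2)], btype="band")
    return np.mean(filtfilt(b, a, eeg_window) ** 2)

def imagery_detected(window, baseline_power, erd_ratio=0.7):
    """Flag motor imagery when mu power drops below 70% of baseline (assumed)."""
    return mu_band_power(window) < erd_ratio * baseline_power

# Toy usage: a rest period defines the baseline; later windows are tested.
rng = np.random.default_rng(0)
baseline = mu_band_power(rng.standard_normal(2 * FS))  # 2 s of "rest" EEG
task_window = rng.standard_normal(FS)                  # 1 s during the task
if imagery_detected(task_window, baseline):
    print("send 'open hand' command to exoskeleton")
```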

Several BCI studies involving this type of haptic and kinesthetic feedback have demonstrated improvements in clinical parameters of post-stroke motor recovery (Ramos-Murguialday et al., 2013; Ang et al., 2014, 2015; Ono et al., 2014). The number of subjects with post-stroke upper extremity paresis included in these studies was, however, relatively low [from 12 (Ono et al., 2014) to 32 (Ramos-Murguialday et al., 2013) patients]. As BCI-driven external devices, a haptic knob (Ang et al., 2014), MIT-Manus (Ang et al., 2015), or a custom-made orthotic device (Ramos-Murguialday et al., 2013; Ono et al., 2014) were used. Furthermore, several other studies reported on using BCI-driven exoskeletons in patients with post-stroke hand paresis (Biryukova et al., 2016; Kotov et al., 2016; Mokienko et al., 2016), but these reports did not test for clinical efficacy and did not include a control group. While very promising, it still remains unclear whether BCI training is an effective tool to facilitate motor recovery after stroke or other lesions of the central nervous system (CNS) (Teo and Chew, 2014).

Here we report a randomized and controlled multicenter study investigating whether 10 sessions of BCI-controlled hand-exoskeleton active training after subacute and chronic stroke yield a better clinical outcome than 10 sessions in which hand-exoskeleton induced passive movements were not controlled by motor imagery-related modulations of brain activity. Besides assessing the effect of BCI training on clinical scores such as the ARAT and FMMA, we tested whether improvements in upper extremity function correlate with the patient’s ability to generate motor imagery-related modulations of EEG activity.[…]

Continue —> Frontiers | Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial | Neuroscience

 

Figure 1. The subject flow diagram from recruitment through analysis (Consolidated Standards of Reporting Trials flow diagram).


[WEB SITE] Stroke rehabilitation gets personalised and interactive – CORDIS

Stroke rehabilitation gets personalised and interactive

The significant socioeconomic costs of stroke, coupled with the rise in Europe’s ageing population, highlight the need for effective but affordable stroke rehabilitation programmes. EU researchers made considerable headway in this regard through novel rehabilitation paradigms.
Computer-mediated rehabilitation tools require a high degree of motor control and are therefore inadequate for patients with significant motor impairment. Consequently, many stroke survivors are unable to benefit. The REHABNET (REHABNET: Neuroscience based interactive systems for motor rehabilitation) project came up with an innovative approach to address this critical need.

Researchers successfully developed a hybrid brain-computer interface (BCI)-virtual reality (VR) system that assesses user capability and dynamically adjusts its difficulty level. This motor imagery-based BCI system is tailored to meet the needs of patients using a VR environment for game training coupled with neurofeedback through multimodal sensing technologies.
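
The project materials do not spell out the adaptation rule, but a capability-based difficulty adjustment of this kind is often implemented as a simple success-rate staircase; the sketch below is one plausible scheme, with all class names, thresholds, and window sizes invented for illustration.

```python
# One plausible capability-based adjustment rule: a success-rate staircase.
# Levels, window size, and thresholds are invented for illustration.
class DifficultyAdapter:
    def __init__(self, level=1, min_level=1, max_level=10, window=10):
        self.level, self.min_level, self.max_level = level, min_level, max_level
        self.window = window
        self.outcomes = []  # True = trial succeeded

    def record(self, success):
        self.outcomes.append(success)
        if len(self.outcomes) < self.window:
            return
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate > 0.8 and self.level < self.max_level:
            self.level += 1  # user is succeeding easily: raise difficulty
        elif rate < 0.5 and self.level > self.min_level:
            self.level -= 1  # user is struggling: lower difficulty
        self.outcomes.clear()

adapter = DifficultyAdapter()
for outcome in [True] * 9 + [False]:  # 90% success over ten trials
    adapter.record(outcome)
print(adapter.level)  # -> 2: the level was raised once
```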

The game training scenarios address both cognitive and motor abilities. The four rehabilitation scenarios include bimanual motor training, dual cognitive-motor training, and a simulated city for training on daily living activities.

Pilot and longitudinal studies demonstrated the benefits of longitudinal VR training as compared to existing rehabilitation regimens. The self-report questionnaires also revealed a high user acceptance of the novel system.

Designed for at-home use, the REHABNET toolset is platform-independent and freely available globally as an app (Reh@Mote). Besides offering deeper insight into factors affecting stroke recovery, this could aid in the further improvement of rehabilitation strategies. More importantly, these low-cost toolsets could also address the needs of patients with severe motor and cognitive deficits. Efforts are ongoing to facilitate future commercial exploitation through a technology transfer agreement.


Source: European Commission : CORDIS : Projects and Results : Stroke rehabilitation gets personalised and interactive


[Review] Review of devices used in neuromuscular electrical stimulation for stroke rehabilitation – PDF

Abstract

Neuromuscular electrical stimulation (NMES), specifically functional electrical stimulation (FES) that compensates for voluntary motion, and therapeutic electrical stimulation (TES), aimed at muscle strengthening and recovery from paralysis, are widely used in stroke rehabilitation. Electrically stimulated muscle contraction should be synchronized with the intended motion to promote recovery from paralysis. Therefore, NMES devices have been developed that monitor electromyogram (EMG) or electroencephalogram (EEG) changes accompanying motor intention and use them as a trigger. Devices that modify the current intensity of NMES based on EMG or EEG have also been proposed. Given the diversity in NMES devices and stimulation methods, the aim of the current review was to introduce some commercial FES and TES devices and their application methods, which depend on the condition of the patient with stroke, including the degree of paralysis.
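
As a rough illustration of the EMG-triggered stimulation idea described in the abstract, the sketch below rectifies and smooths the EMG into an envelope and starts stimulation when the envelope crosses a threshold, so that stimulation is synchronized with motor intention. The sampling rate, threshold rule, and function names are assumptions for illustration only.

```python
# Sketch of an EMG-triggered stimulation loop (all values assumed): rectify
# and low-pass the EMG to get an envelope, then stimulate while the envelope
# exceeds a threshold, synchronizing stimulation with motor intention.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed EMG sampling rate in Hz

def emg_envelope(emg):
    """Full-wave rectify, then low-pass at 5 Hz for a smooth envelope."""
    b, a = butter(2, 5 / (FS / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

def stimulation_on(emg_window, threshold):
    """True while the current envelope value exceeds the intention threshold."""
    return emg_envelope(emg_window)[-1] > threshold

# Toy usage with synthetic EMG: quiet rest activity followed by a burst.
rng = np.random.default_rng(1)
rest = 0.05 * rng.standard_normal(FS)   # 1 s of rest
burst = 0.5 * rng.standard_normal(FS)   # 1 s of voluntary contraction
threshold = emg_envelope(rest).mean() + 3 * emg_envelope(rest).std()
print(stimulation_on(np.concatenate([rest, burst]), threshold))  # True
```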

Download Full Text PDF


[CORDIS Project] Motor Recovery with Paired Associative Stimulation (RecoveriX) – European Commission

Motor Recovery with Paired Associative Stimulation (RecoveriX)

Objective

Source: European Commission : CORDIS : Projects and Results : Motor Recovery with Paired Associative Stimulation (RecoveriX)


[ARTICLE] Personalized Brain-Computer Interface Models for Motor Rehabilitation – Full Text PDF

Abstract

We propose to fuse two currently separate research lines on novel therapies for stroke rehabilitation: brain-computer interface (BCI) training and transcranial electrical stimulation (TES). Specifically, we show that BCI technology can be used to learn personalized decoding models that relate the global configuration of brain rhythms in individual subjects (as measured by EEG) to their motor performance during 3D reaching movements. We demonstrate that our models capture substantial across-subject heterogeneity, and argue that this heterogeneity is a likely cause of limited effect sizes observed in TES for enhancing motor performance. We conclude by discussing how our personalized models can be used to derive optimal TES parameters, e.g., stimulation site and frequency, for individual patients.

I. INTRODUCTION
Motor deficits are one of the most common outcomes of stroke. According to the World Health Organization, 15 million people worldwide suffer a stroke each year. Of these, five million are left permanently disabled. For this group, upper limb weakness and loss of hand function are among the most devastating disabilities, affecting the quality of daily life [1]. Despite a wide range of rehabilitation therapies, including medication treatment [2], conventional physiotherapy [3], and robot physiotherapy [4], only approximately 20% of patients achieve some form of functional recovery in the first six months [5], [6].

Current research on novel therapies includes neurofeedback training based on brain-computer interface (BCI) technology and transcranial electrical stimulation (TES). The former approach attempts to support cortical reorganization by providing haptic feedback with a robotic exoskeleton that is congruent to movement attempts, as decoded in real-time from neuroimaging data [7], [8]. The latter line of research aims to reorganize cortical networks in a way that supports motor performance, because post-stroke alterations of cortical networks have been found to correlate with the severity of motor deficits [9], [10]. While initial evidence suggested that both approaches, BCI-based training [11] and TES [12], have a positive impact, the significance of these results over conventional physiotherapy has not been consistently achieved across studies [13], [14], [15].

One potential explanation for the difficulty of replicating the initially promising findings is the heterogeneity of stroke patients. Different locations of stroke-induced structural changes are likely to result in substantial across-patient variance in the functional reorganization of cortical networks. As a result, not all patients may benefit from the same neurofeedback or stimulation protocol. We thus propose to fuse these two research themes and use BCI technology to learn personalized models that relate the configuration of cortical networks to each patient’s motor deficits. These personalized models may then be used to predict which TES parameters, e.g., spatial location and frequency band, optimally support rehabilitation in each individual patient.

In this study, we address the first step towards personalized TES for stroke rehabilitation. Using a transfer learning framework developed in our group [16], we show how to create personalized decoding models that relate the EEG of healthy subjects during a 3D reaching task to their motor performance in individual trials. We further demonstrate that the resulting decoding models capture substantial across-subject heterogeneity, thereby providing empirical support for the need to personalize models. We conclude by reviewing our findings in the light of TES studies to improve motor performance in healthy subjects, and discuss how personalized TES parameters may be derived from our models.[…]
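
As a loose illustration of the idea of per-subject decoding models, the sketch below fits one ridge regression per (synthetic) subject from EEG band-power-like features to trial-wise motor performance; differently oriented weight vectors then stand in for the across-subject heterogeneity the authors describe. The feature choice, dimensions, and performance metric are assumptions; the paper’s actual transfer-learning framework is not reproduced here.

```python
# Per-subject "personalized decoding model" in miniature: ridge regression
# from band-power-like EEG features to trial-wise motor performance, fit
# separately for each (synthetic) subject. Dimensions and the performance
# metric are assumptions; this is not the paper's transfer-learning method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_trials, n_channels = 200, 32

def subject_data(seed):
    """Synthetic subject whose performance depends on its own channel mix."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_trials, n_channels))  # features per trial
    w = rng.standard_normal(n_channels)              # subject-specific weights
    y = X @ w + 0.5 * rng.standard_normal(n_trials)  # trial-wise performance
    return X, y

weights = []
for seed in range(2):
    X, y = subject_data(seed)
    model = Ridge(alpha=1.0).fit(X, y)
    weights.append(model.coef_ / np.linalg.norm(model.coef_))
    r2 = cross_val_score(Ridge(alpha=1.0), X, y, cv=5).mean()
    print(f"subject {seed}: cross-validated R^2 = {r2:.2f}")

# Low cosine similarity between subjects' weight vectors illustrates the
# across-subject heterogeneity that motivates personalization.
print(f"weight similarity: {float(weights[0] @ weights[1]):.2f}")
```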

Full Text PDF


[WEB SITE] Neuroprosthetics: Recovering from injury using the power of your mind

Neuroprosthetics, also known as brain-computer interfaces, are devices that help people with motor or sensory disabilities to regain control of their senses and movements by creating a connection between the brain and a computer. In other words, this technology enables people to move, hear, see, and touch using the power of thought alone. How do neuroprosthetics work? We take a look at five major breakthroughs in this field to see how far we have come – and how much farther we can go – using just the power of our minds.
[Woman with electrodes attached to skull]

Using electrodes, a computer, and the power of thought, neuroprosthetic devices can help patients with motor or sensory difficulties to move, feel, hear, and see.

Every year, hundreds of thousands of people worldwide lose control of their limbs as a result of an injury to their spinal cord. In the United States, up to 347,000 people are living with spinal cord injury (SCI), and almost half of these people cannot move from the neck down.

For these people, neuroprosthetic devices can offer some much-needed hope.

Brain-computer interfaces (BCI) usually involve electrodes – placed on the human skull, on the brain’s surface, or in the brain’s tissue – that monitor and measure the brain activity that occurs when the brain “thinks” a thought. The pattern of this brain activity is then “translated” into a code, or algorithm, which is “fed” into a computer. The computer, in turn, transforms the code into commands that produce movement.
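
Schematically, this monitor, translate, and command loop can be pictured as a small decoding pipeline: extract band-power features from each EEG epoch, train a classifier on labeled calibration data, then map its predictions onto discrete commands. Everything in the sketch below (channels, bands, commands) is illustrative; real systems differ widely.

```python
# Schematic monitor -> translate -> command loop: band-power features from
# each EEG epoch, a linear classifier trained on calibration data, and a
# mapping from predictions to commands. Channels, bands, and commands are
# illustrative only.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed sampling rate in Hz
COMMANDS = ["rest", "move"]

def features(epoch):
    """Per-channel log power in 8-30 Hz from a (channels, samples) epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 30)
    return np.log(psd[:, band].mean(axis=1))

# Calibration: 40 labeled epochs (8 channels, 2 s each); "move" epochs carry
# reduced power on the first four channels, mimicking motor-related changes.
rng = np.random.default_rng(3)
epochs = rng.standard_normal((40, 8, 2 * FS))
labels = np.repeat([0, 1], 20)
epochs[labels == 1, :4] *= 0.5
clf = LinearDiscriminantAnalysis().fit([features(e) for e in epochs], labels)

# Online step: decode a new epoch into a command.
new_epoch = rng.standard_normal((8, 2 * FS))
new_epoch[:4] *= 0.5
print("decoded command:", COMMANDS[int(clf.predict([features(new_epoch)])[0])])
```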

Neuroprosthetics are not just useful for people who cannot move their arms and legs; they also help those with sensory disabilities. The World Health Organization (WHO) estimate that approximately 360 million people across the globe have a disabling form of hearing loss, while another 39 million people are blind.

For some of these people, neuroprosthetics such as cochlear implants and bionic eyes have given them back their senses and, in some cases, they have enabled them to hear or see for the very first time.

Here, we review five of the most significant developments in neuroprosthetic technology, looking at how they work, why they are helpful, and how some of them will develop in the future.

Ear implant

Probably the “oldest” neuroprosthetic device out there, cochlear implants (or ear implants) have been around for a few decades and are the epitome of successful neuroprosthetics.

The U.S. Food and Drug Administration (FDA) approved cochlear implants as early as 1980, and by 2012, almost 60,000 U.S. individuals had had the implant. Worldwide, more than 320,000 people have had the device implanted.

A cochlear implant works by bypassing the damaged parts of the ear and stimulating the auditory nerve with signals obtained using electrodes. The signals relayed through the auditory nerve to the brain are perceived as sounds, although hearing through an ear implant is quite different from regular hearing.

Although imperfect, cochlear implants allow users to distinguish speech in person or over the phone, and the media abound with emotional accounts of people who were able to hear themselves for the first time using this sensory neuroprosthetic device.

Here, you can watch a video of a 29-year-old woman who hears herself for the first time using a cochlear implant:

Eye implant

The first artificial retina – called the Argus II – is made entirely from electrodes implanted in the eye and was approved by the FDA in February 2013. In much the same way as the cochlear implant, this neuroprosthetic bypasses the damaged part of the retina and transmits signals, captured by an attached camera, to the brain.

This is done by transforming the images into light and dark pixels that get turned into electrical signals. The electrical signals are then sent to the electrodes, which, in turn, send the signal to the brain’s optic nerve.
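
As an illustration of that image-to-pixels-to-pulses chain, the sketch below downsamples a grayscale frame onto a 6x10 grid (the Argus II uses 60 electrodes) and scales brightness to per-electrode pulse amplitudes; the linear brightness-to-amplitude rule is invented for illustration.

```python
# Illustrative image-to-electrode mapping sized to a 6x10 grid (the Argus II
# uses 60 electrodes); the linear brightness-to-amplitude rule is invented.
import numpy as np

def image_to_stimulation(image, grid=(6, 10), max_amp_ua=200.0):
    """Downsample a grayscale image (0-255) onto the electrode grid and
    scale mean block brightness to pulse amplitudes in microamps."""
    gh, gw = grid
    h, w = image.shape
    trimmed = image[: h - h % gh, : w - w % gw]
    blocks = trimmed.reshape(gh, trimmed.shape[0] // gh, gw, -1).mean(axis=(1, 3))
    return blocks / 255.0 * max_amp_ua

# Toy usage: a bright horizontal bar maps to one row of strong pulses.
frame = np.zeros((120, 200))
frame[40:60, :] = 255
amplitudes = image_to_stimulation(frame)
print(amplitudes.shape)   # (6, 10) electrode grid
print(amplitudes[2])      # the row covering the bright bar: 200 uA each
```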

While Argus II does not restore vision completely, it does enable patients with retinitis pigmentosa – a condition that damages the eye’s photoreceptors – to distinguish contours and shapes, which, many patients report, makes a significant difference in their lives.

Retinitis pigmentosa is a neurodegenerative disease that affects around 100,000 people in the U.S. Since its approval, more than 200 patients with retinitis pigmentosa have had the Argus II implant, and the company that designed it is currently working to make color detection possible as well as improve the resolution of the device.

Neuroprosthetics for people with SCI

Almost 350,000 people in the U.S. are estimated to live with SCI, and 45 percent of those who have had an SCI since 2010 are considered tetraplegic – that is, paralyzed from the neck down.

At Medical News Today, we recently reported on a groundbreaking one-patient experiment that enabled a man with quadriplegia to move his arms using the sheer power of his thoughts.

Bill Kochevar had electrodes surgically fitted into his brain. After the BCI was trained to “learn” the brain activity that matched the movements he thought about, this activity was turned into electrical pulses that were then transmitted to electrodes implanted in his arm muscles.

In much the same way that the cochlear and visual implants bypass the damaged area, so too does this BCI avoid the “short circuit” between the brain and the patient’s muscles created by SCI.

With the help of this neuroprosthetic, the patient was able to successfully drink and feed himself. “It was amazing,” Kochevar says, “because I thought about moving my arm and it did.” Kochevar was the first patient in the world to test the neuroprosthetic device, which is currently only available for research purposes.

You can learn more about this neuroprosthetic from the video below:

However, this is not where SCI neuroprosthetics stop. The Courtine Lab – which is led by neuroscientist Gregoire Courtine in Lausanne, Switzerland – is tirelessly working to help injured people regain control of their legs. Its research has enabled paralyzed rats to walk, achieved by using electrical signals to stimulate nerves in the severed spinal cord.

“We believe that this technology could one day significantly improve the quality of life of people confronted with neurological disorders,” says Silvestro Micera, co-author of the experiment and neuroengineer at Courtine Labs.

Recently, Prof. Courtine has also led an international team of researchers to successfully create voluntary leg movement in rhesus monkeys. This was the first time that a neuroprosthetic was used to enable walking in nonhuman primates.

However, “it may take several years before all the components of this intervention can be tested in people,” Prof. Courtine says.

An arm that feels

Silvestro Micera has also led other projects on neuroprosthetics, among which is the arm that “feels.” In 2014, MNT reported on the first artificial hand to be enhanced with sensors.

Researchers measured the tension in the tendons of the artificial hand that control grasping movements and turned it into electric current. In turn, using an algorithm, this was translated into impulses that were then sent to the nerves in the arm, producing a sense of touch.
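
One simple way to picture that tension-to-impulse algorithm is as a mapping from sensor reading to stimulation pulse rate, a common encoding in sensory-feedback prostheses; the ranges and the linear rule in the sketch below are assumptions, not the researchers’ actual parameters.

```python
# Sensor-to-nerve mapping in miniature: tendon tension is normalized and
# converted to a stimulation pulse rate. The ranges and the linear rule are
# assumed, not the researchers' parameters.
def tension_to_pulse_rate(tension_n, t_min=0.5, t_max=20.0,
                          rate_min_hz=5.0, rate_max_hz=100.0):
    """Map tendon tension (newtons) linearly onto a pulse rate (Hz)."""
    if tension_n <= t_min:
        return 0.0  # below threshold: no stimulation
    frac = min((tension_n - t_min) / (t_max - t_min), 1.0)
    return rate_min_hz + frac * (rate_max_hz - rate_min_hz)

for tension in (0.2, 5.0, 20.0):
    print(f"{tension:5.1f} N -> {tension_to_pulse_rate(tension):6.1f} Hz")
```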

Since then, the prosthetic arm that “feels” has been improved even more. Researchers from the University of Pittsburgh and the University of Pittsburgh Medical Center, both in Pennsylvania, tested the BCI on a single patient with quadriplegia: Nathan Copeland.

The scientists implanted a sheath of microelectrodes below the surface of Copeland’s brain – namely, in his primary somatosensory cortex – and connected them to a prosthetic arm that was fitted with sensors. This enabled the patient to feel sensations of touch, which felt, to him, as though they belonged to his own paralyzed hand.

While blindfolded, Copeland was able to identify which finger on his prosthetic arm was being touched. The sensations he perceived varied in intensity and were felt as differing in pressure. 

Neuroprosthetics for neurons?

We have seen that brain-controlled prosthetics can restore patients’ sense of touch, hearing, sight, and movement, but could we build prosthetics for the brain itself?

Researchers from the Australian National University (ANU) in Canberra managed to artificially grow brain cells and create functional brain circuits, paving the way for neuroprosthetics for the brain.

By applying nanowire geometry to a semiconductor wafer, Dr. Vini Gautam, of ANU’s Research School of Engineering, and colleagues came up with a scaffolding that allows brain cells to grow and connect synaptically.

Project group leader Dr. Vincent Daria, from the John Curtin School of Medical Research in Australia, explains the success of their research:

“We were able to make predictive connections between the neurons and demonstrated them to be functional with neurons firing synchronously. This work could open up a new research model that builds up a stronger connection between materials nanotechnology with neuroscience.”

Neuroprosthetics for the brain might one day help patients who have experienced a stroke or who live with neurodegenerative diseases to recover neurologically.

Every year in the U.S., almost 800,000 people have a stroke, and more than 130,000 people die from one. Neurodegenerative diseases are also widespread, with 5 million U.S. adults estimated to live with Alzheimer’s disease, 1 million with Parkinson’s, and 400,000 with multiple sclerosis.

Learn about Facebook’s newest endeavour: the development of BCIs.

Source: Neuroprosthetics: Recovering from injury using the power of your mind – Medical News Today


[BLOG POST] Facebook’s next frontier: Brain-computer interfaces

Facebook’s tech development team are currently working on a way for users to type with their minds, without the need for an invasive implant. Updating your status with thoughts alone may one day become a reality.
[Brain plugged in with wires]

Brain-computer interfaces are entering a brave new era.

The social media company’s 60-strong team hopes to achieve this miraculous feat using optical imaging that scans the brain hundreds of times per second, detecting our silent internal dialogues and translating them into text on a screen.

They hope that, eventually, the technology will allow users to type at 100 words per minute – five times faster than typing on a phone.

If this innovation comes to pass, it will be fascinating for Facebook’s following. There will, however, be deeper and more profound ramifications for people who do not have full use of their limbs.

Brain-computer interfaces (BCIs) that allow users to type with their minds are already available, but they are either slow or require a sensor to be implanted in the brain. This procedure is expensive, risky, and not likely to be adopted by the population at large.

If so-called brain typing could be perfected without the need for intrusive implants, it would be a genuine game-changer with a whole host of applications.

BCIs, then and now

The first steps toward developing a BCI came with Hans Berger’s discovery that the brain was electrically active. Each time an individual nerve cell sends a message, it is accompanied by a tiny electrical signal that nips from neuron to neuron.

This electrical signal can be picked up outside of the skull using an electroencephalogram (EEG). Berger was the first person to record human brain activity using an EEG, having achieved this feat almost a century ago, in 1924.

The term “brain-computer interface” was coined in the 1970s, in papers written by scientists from the University of California-Los Angeles. The research was led by Jacques Vidal, who is now considered the grandfather of BCI.

Can these observable electrical brain signals be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?”

Jacques Vidal, “Toward direct brain-computer communication,” 1973

Of course, animal studies were the first port of call when investigating BCIs. Research in the late 1960s and early 1970s proved that monkeys could learn to control the firing rates of single neurons or groups of neurons in the primary motor cortex if they were given a reward. Similarly, using operant conditioning, dogs could be trained to control the rhythms in their hippocampus.

These early studies showed that the electrical output of the brain could be measured and manipulated. Over the past two decades, there has been a surge of interest in BCIs. There is still a long way to go, but there have been notable successes.

In modern BCIs, the cream of the experimental crop is a recently designed system from Stanford University. Two aspirin-sized implants, inserted into an individual’s brain, chart the activity of the motor cortex – a region that controls muscles. Algorithms then interpret this activity and convert it into cursor movements on a screen.

In a recent study, one participant was able to type 39 characters (around eight words) per minute. “This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” says Krishna Shenoy, one of the senior authors.

Invasive, semi-invasive, and noninvasive

Broadly speaking, modern BCIs are split into three groups. These are:

  • Invasive BCIs: Implants are placed directly into the brain. Software is trained to interpret a subject’s brain activity. For instance, a computer cursor can be controlled by a participant’s thoughts of “left,” “right,” “up,” and “down.” With enough practice, a user can draw shapes on a screen, control a television, and open computer programs.
  • Semi-invasive BCIs: This type of device is implanted inside the skull but does not sit within the gray matter itself. Although less invasive than an invasive BCI, implants left under the skull for long periods of time tend to form scar tissue in the gray matter, which, eventually, blocks the signals and renders them unusable.
  • Noninvasive BCIs: These work on the same principle, but do not involve surgical implantation and have, therefore, received the most research.

Of the noninvasive BCIs, the most common are EEG-based BCIs. These read the electrical activity of the brain from outside of the body. However, because the skull scatters the electrical signals substantially, making them accurate is a real challenge. Added to this issue, they often require a fair amount of calibration before each use. That being said, there have been some significant steps forward over recent years.

For instance, some researchers have recently investigated noninvasive BCIs as a way to help individuals with amyotrophic lateral sclerosis and brain stem stroke. These patients can become “locked in,” meaning that they lose the use of all voluntary muscles and, as such, have no way to communicate, despite being cognitively “normal.”

Their studies led them to conclude that “BCI use may be of benefit to those with locked-in syndrome.”

How do noninvasive BCIs work?

BCI technology is based on detecting electrical activity emanating from the brain and then converting it into an external action. However, through the cacophony of neural noise, which signals should be paid attention to?

There are a number of signal types that noninvasive BCIs use, the most popular of which is the P300 event-related potential.

An event-related potential is a measurable brain response to a particular stimulus – specifically, the P300 is produced during decision-making and it is usually elicited experimentally using the so-called oddball paradigm.

[EEG cap on woman]

BCIs are based on converting brain activity into external action.

In the oddball paradigm, participants are presented with a range of symbols, flashed in front of their eyes one by one.

They are asked to look out for a specific symbol that occurs only rarely within the selection. When the target symbol is noticed by the participant, it triggers a P300 wave.

Over many trials, it is possible to distinguish the P300 from other electrical signals; it is easiest to observe emanating from the parietal lobe, a part of the brain responsible, in part, for integrating sensory information.

Once an algorithm is trained to recognize an individual’s P300, it can, from then on, understand what they are looking for. For instance, if the user is typing a word and they wish to start with the letter “a,” when that letter appears on the screen, a P300 will be generated by the brain, the software will recognize it, and the letter “a” is typed on the screen.
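
The selection step just described can be sketched as follows: average the EEG epochs recorded after each candidate letter’s flashes and pick the letter whose averaged response best matches a P300-like positive deflection around 300 ms after the stimulus. The sampling rate, window, and scoring rule below are simplified assumptions.

```python
# Simplified P300 letter selection: average the epochs following each
# letter's flashes and pick the letter with the strongest positive mean in a
# ~250-450 ms window. Sampling rate, window, and scoring are assumptions.
import numpy as np

FS = 250  # assumed sampling rate in Hz
WINDOW = np.arange(int(0.25 * FS), int(0.45 * FS))  # 250-450 ms post-stimulus

def p300_score(epochs):
    """Mean amplitude in the P300 window of the trial-averaged epoch."""
    return epochs.mean(axis=0)[WINDOW].mean()

# Toy data: 10 one-second flash epochs per letter; the attended letter "a"
# carries a small positive deflection riding on noise.
rng = np.random.default_rng(4)
epochs = {letter: rng.standard_normal((10, FS)) for letter in "abc"}
epochs["a"][:, WINDOW] += 1.0  # simulated P300 for the attended letter

scores = {letter: p300_score(e) for letter, e in epochs.items()}
print("typed letter:", max(scores, key=scores.get))  # -> a
```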

Compared with other similar methods, P300s are relatively fast, require little training (hours rather than days), and are effective for most users.

However, there are still shortfalls. Because the system needs to pick up a user’s response to individual characters, it has to run through a list before it can find the right one. This means that there is a limit to how fast one can type.

There are ways to minimize this wait, but the time taken is still longer than researchers (and users) would like.

How will Facebook achieve 100 words per minute?

To make a system that can type tens of words per minute, a new step in the process will be needed – in fact, an entirely new approach will be necessary, and that is what Facebook is working on.

Medical News Today spoke with Dr. Michael M. Merzenich, chief scientific officer of Posit Science and co-inventor of the cochlear implant. We asked how Facebook’s researchers will bypass this speed issue, to which he responded, “Facebook has discussed using near-infrared (NIR) imaging technology.” With this technology, each word will be picked out in one go, rather than being spelled out letter by letter.

[Facebook thumbs up like symbol]

There are challenges ahead for the social media giant.

Of course, this comes with its own difficulties. Dr. Merzenich added:

“While it’s very easy to type ‘lion’ versus ‘tiger’ and be clear, it’s going to be quite a bit harder to have a noninvasive brain imaging technology detect minute differences in brain activity that may correspond to small differences in a category like that.”

“Thinking of the word ‘lion’ and the word ‘tiger’ activates extremely similar and overlapping networks of brain activity for most people.”

There is clearly a lot of work yet to do, but Dr. Merzenich is confident that it will be achieved eventually. He added:

“The best hope is to use modern AI [artificial intelligence] techniques – deep learning techniques – that will gradually learn to identify the patterns of brain activity for an individual person as meaning specific things.”

“In this way, I think it’s likely that people will individually train their brain-reading systems, and those systems will be individually attuned to them and not immediately transferable to another person. In fact, people using these systems will likely train their own brains to optimally produce readable signals to these systems. In this way, these systems represent another application of brain plasticity – the ability of the brain to change itself through training.”

This may all be a long way off, but Facebook are committed; they are combining their research power with a number of universities across the United States. The future looks bright for BCIs and, if they do achieve 100 words per minute, it will be a great leap for millions of people who are unable to communicate with ease.

Source: Facebook’s next frontier: Brain-computer interfaces – Medical News Today


[ARTICLE] Classification of EEG signals for wrist and grip movements using echo state network – Full Text

Abstract

Brain-Computer Interface (BCI) is a multi-disciplinary emerging technology used in medical diagnosis and rehabilitation. In this paper, different classification and feature extraction techniques are applied to analyse and differentiate wrist and grip flexion and extension for synchronized stimulation using sensory feedback in the neuro-rehabilitation of paralyzed persons. We have used an optimized version of the Echo State Network (ESN) to identify as well as differentiate the wrist and grip movements. In this work, the classification accuracy obtained is greater than 96% in a single trial and 93% in discriminating the four movements, both real and imagined.

Introduction

The popularity of analysing brain rhythms and their applications in healthcare is evident in rehabilitation engineering. Motor disabilities caused by stroke require a rehabilitation process to regain motor learning and retrieval. The classification of EEG signals obtained using a low-cost Brain Computer Interface (BCI) for wrist and grip movements is used for recovery. Using the Movement Related Cortical Potential (MRCP) associated with imaginary movement as detected by the BCI, an external device can be synchronized to provide sensory feedback from electrical stimulation [1]. The timely detection and classification of movement, and the real-time triggering of electrical stimulation as a function of brain activity, are desirable for neuro-rehabilitation [2,3]. Thus, BCI has an active role in helping paralyzed persons who are not able to move their hands or legs [4]. Using a BCI system, EEG data is recorded and processed. The acquired data should have the least possible component of environmental noise and artifacts for effective classification [5]. EEG signals acquired with invasive methods exhibit the least noise and the highest amplitude; however, in most applications, a non-invasive method is preferred. The human brain contains a number of neuronal networks. EEG provides a measurement of brain activity as voltage fluctuations recorded as a result of ionic currents within neurons inside the brain [6]. Many people have motor disabilities due to nervous system breakdown or accidental nervous system damage. There are different approaches to this problem, e.g., neuro-prosthetics (neural prosthetics) and BCI [3,7-9]. In neuro-prosthetics, the solution takes the form of connecting the brain’s nervous system with a device; in BCI, of connecting it with a computer [2]. A BCI establishes communication between brain and computer via EEG, ECoG, or MEG signals, which carry information about bodily activity [10]. Moreover, in addition to neuro-rehabilitation, assistive robotics and brain-controlled mobile robots also utilize similar technologies, as reported recently [11,12]. The processing of these low-amplitude, noisy EEG signals requires special care during data acquisition and filtering. After recording, the EEG signals are processed via filtration, feature extraction, and classification. A simple first- or second-order Chebyshev or Butterworth filter can be used as a low-pass, high-pass, or notch filter. Features can be extracted using techniques from time, frequency, time-frequency, or time-space-frequency analysis [13,14]. The extracted features are then classified using techniques such as LDA, QDA, SVM, or KNN [15,16].
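
Putting those pieces together, the sketch below shows a hedged version of the chain the paragraph outlines: notch and band-pass filtering, frequency-domain feature extraction, and one of the listed classifiers (LDA here). Sampling rate, bands, and data shapes are assumptions, not the paper’s settings.

```python
# Hedged sketch of the chain above: notch + band-pass filtering, band-power
# feature extraction, then one of the listed classifiers (LDA). Sampling
# rate, bands, and data shapes are assumptions, not the paper's settings.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 128  # assumed headset sampling rate in Hz

def preprocess(eeg):
    """50 Hz notch plus 1-40 Hz band-pass on a (channels, samples) array."""
    b, a = iirnotch(50, Q=30, fs=FS)
    eeg = filtfilt(b, a, eeg)
    b, a = butter(2, [1 / (FS / 2), 40 / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg)

def band_power_features(eeg):
    """Log power in mu (8-13 Hz) and beta (13-30 Hz) bands per channel."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS)
    mu = psd[:, (freqs >= 8) & (freqs <= 13)].mean(axis=1)
    beta = psd[:, (freqs >= 13) & (freqs <= 30)].mean(axis=1)
    return np.log(np.concatenate([mu, beta]))

# Toy two-class run (e.g., wrist vs. grip): class 1 epochs get larger
# amplitude so that band power separates the classes.
rng = np.random.default_rng(5)
X = np.array([band_power_features(preprocess(
        (1 + 0.5 * (i % 2)) * rng.standard_normal((14, 2 * FS))))
    for i in range(40)])
y = np.array([i % 2 for i in range(40)])
print("training accuracy:", LinearDiscriminantAnalysis().fit(X, y).score(X, y))
```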

We aim to classify wrist and grip movements using EEG signals. This research will be helpful for the rehabilitation of persons with wrist or grip disabilities. Our work is based on offline data-sets, in which the EEG data was collected multiple times from 4 subjects. We present the following major contributions in this paper: First, the differentiation between the wrist and grip movements has been performed using imaginary data as well as real movements. Secondly, we have tested multiple algorithms for feature extraction and classification, and used an ESN with optimized parameters for the best results. This paper is organized as follows: section 2 describes a low-cost BCI setup for EEG, section 3 deals with the DAQ protocol, section 4 explains the echo state network and its optimization, while section 5 discusses the results obtained in this research. Section 6 concludes the paper.
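
For readers unfamiliar with echo state networks, the sketch below shows the core idea in miniature: a fixed random recurrent reservoir is driven by the input series, and only a linear readout is trained. Reservoir size, spectral radius, and leak rate here are illustrative, not the optimized values used in the paper.

```python
# Minimal echo state network sketch: a fixed random reservoir driven by the
# input series, with only a ridge-regression readout trained. Reservoir
# size, spectral radius, and leak rate are illustrative values.
import numpy as np

class ESN:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals spectral_radius
        # (the usual echo-state-property heuristic).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.leak = W, leak

    def final_state(self, series):
        """Run a (time, n_in) series through the reservoir; return last state."""
        x = np.zeros(self.W.shape[0])
        for u in series:
            x = (1 - self.leak) * x + self.leak * np.tanh(
                self.W_in @ u + self.W @ x)
        return x

    def fit(self, series_list, labels, ridge=1e-2):
        S = np.array([self.final_state(s) for s in series_list])
        Y = np.eye(int(max(labels)) + 1)[labels]  # one-hot class targets
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                                     S.T @ Y)
        return self

    def predict(self, series):
        return int(np.argmax(self.final_state(series) @ self.W_out))

# Toy usage: two classes of single-channel series differing in frequency.
t = np.linspace(0, 1, 100)[:, None]
train = [np.sin(2 * np.pi * f * t) for f in (5, 5, 20, 20)]
esn = ESN(n_in=1).fit(train, [0, 0, 1, 1])
print(esn.predict(np.sin(2 * np.pi * 20 * t)))  # -> 1
```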

Brain Computer Interface Design

Brain-Computer Interface (BCI) design requires a multi-disciplinary approach for engineers to observe EEG data. Today, a number of sensing platforms are available that provide low-cost solutions for high-resolution data acquisition. Developing a BCI requires a two-step approach, namely acquisition and real-time processing. In off-line processing, the only requirement is acquisition. The data is acquired via a wireless link from the pick-off electrodes arranged on the scalp of the subjects [17]. One such system is the Emotiv headset, which is easy to install and use. The Emotiv headset, with 14 electrodes and 2 reference electrodes (CMS and DRL), is used to collect data as shown in Figure 1. All electrodes are measured with respect to the reference electrodes. The Emotiv headset is a non-invasive device for collecting EEG data, as preferred in most diagnosis and rehabilitation applications [18].


Figure 1. Emotiv EEG acquisition using P-300 standard.

It is important to understand the EEG signal format and frequency content for pre-processing and offline classification. Table 1 shows some of the indications of physical movements and mental actions associated with different brain rhythms in somewhat overlapping frequency bands. Notably, motor imagery tasks are associated with the μ-rhythm in the 8-13 Hz frequency band [19].

Rhythm | Frequency (Hz) | Indication               | Diagnosis
Δ      | 0-4            | Deep sleep stage         | Hypoglycaemia, Epilepsy
θ      | 4-7            | Initial sleep stage      |
α      | 8-12           | Closure of eyes          | Migraine, Dementia
β      | 12-30          | Busy/Anxious thinking    | Encephalopathies, Tonic seizures
γ      | 30-100         | Cognitive/motor function |
µ      | 8-13           | Motor imagery tasks      | Autism Spectrum Disorder

Table 1. Brain frequency bands and their significance.


Continue —> Classification of EEG signals for wrist and grip movements using echo state network

