Archive for April, 2019

[NEWS] Video Game-Integrated Training Device Helps Stroke Survivors Regain Arm Function


A new video game-led training device called a myoelectric computer interface (MyoCI), invented by Northwestern Medicine scientists, is enabling severely impaired stroke survivors to regain function in their arms after sometimes decades of immobility.

When integrated with a customized video game, the device helped retrain stroke survivors’ arm muscles into moving more normally. Most of the 32 study participants experienced increased arm mobility and reduced arm stiffness while using it, and retained their arm function a month after finishing the training, according to a study published recently in Neurorehabilitation and Neural Repair.

Many stroke survivors can’t extend their arm forward with a straight elbow because the muscles act against one another in abnormal ways, called “abnormal co-activation” or “abnormal coupling.”

The Northwestern device identifies which muscles are abnormally coupled and retrains the muscles into moving normally by using their electrical muscle activity (called electromyogram, or EMG) to control a cursor in a customized video game. The more the muscles decouple, the higher the person’s score, a media release from Northwestern University explains.

“We gamified the therapy into an ’80s-style video game,” says senior author Dr Marc Slutzky, associate professor of neurology and of physiology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurologist. “It’s rather basic graphics by today’s standards, but it’s entertaining enough.”

“The beauty of this is even if the benefit doesn’t persist for months or years, patients with a wearable device could do a ‘tune-up’ session every couple weeks, months or whenever they need it,” adds Slutzky, whose team designed the original device. “Long-term, I envision having flexible, fully wireless electrodes that an occupational therapist could quickly apply in their office, and patients could go home and train by themselves.”

Slutzky is also studying this method in stroke patients in the hospital, starting within a week of their stroke.

Abnormal coupling of muscles leaves many stroke patients with a bent elbow, which makes it difficult to benefit from typical task-based stroke-rehabilitation therapies, such as training on bathing, getting dressed and eating.

Only about 30% of stroke patients in the United States receive therapy after their initial in-patient rehabilitation stay, often because their injury is too severe to benefit from standard therapy, it costs too much, or they’re too far from a therapist. This small, preliminary study lays the groundwork for inexpensive, wearable, at-home therapy options for severely impaired stroke survivors, the release continues.

“We’re still in the very early stages, but I’m hopeful this may be an effective new type of stroke therapy,” Slutzky states. “The goal is to one day let patients buy the training device inexpensively, potentially without even needing insurance, and use it wirelessly in their home.”

Patients in the study were severely impaired – could only slightly move their arm and extend their elbow – and had had their stroke at least 6 months prior to beginning the study. The average patient was more than 6 years out from their stroke, and some were decades out.

After Slutzky’s intervention, study participants could, on average, extend their elbow angle by 11 degrees more than before the intervention, which was a pleasant surprise, Slutzky comments.

This type of treatment only requires a small amount of muscle activation, which is advantageous for severely impaired stroke patients who typically can’t move enough to even begin standard physical therapy. It also gives feedback to the patient if they’re activating their muscles properly.

To identify which muscles were abnormally coupled, study participants attempted to reach out to multiple different targets while the scientists recorded the electrical activity in eight of their arm muscles using electrodes attached to the skin. For example, the biceps and anterior deltoid muscles in the arm often activated together in stroke participants, while they normally shouldn’t.

Then, to retrain the muscles into moving normally (ie, without abnormally co-activating), the participants used their electrical muscle activity to control a cursor in a customized video game. The two abnormally coupled muscles moved the cursor in either horizontal or vertical directions, in proportion to their EMG amplitude, the release continues.

For example, if the biceps contracted in isolation, the cursor moved up. If the anterior deltoid contracted in isolation, the cursor moved to the side. But if the muscles contracted together, the cursor moved diagonally.

The goal was to move the cursor only vertically or horizontally – not diagonally – to acquire targets in the game. To get a high score, participants had to learn to decouple the abnormally coupled muscles.
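The cursor mapping and scoring described above can be sketched in a few lines. This is a hypothetical illustration of the scheme the article describes — the function names, the gain, and the penalty formula are assumptions, not the actual MyoCI implementation:

```python
import numpy as np

def emg_to_cursor(emg_a, emg_b, gain=1.0):
    """Map two EMG envelope amplitudes to a 2-D cursor velocity.

    Hypothetical sketch: muscle A (e.g. biceps) drives vertical motion,
    muscle B (e.g. anterior deltoid) drives horizontal motion, each in
    proportion to its EMG amplitude. Co-activation therefore pushes the
    cursor diagonally, which the game penalizes.
    """
    return np.array([gain * emg_b, gain * emg_a])  # (horizontal, vertical)

def coupling_penalty(emg_a, emg_b):
    """Score component: diagonal movement (co-activation) lowers the score."""
    v = emg_to_cursor(emg_a, emg_b)
    speed = np.linalg.norm(v)
    if speed == 0:
        return 0.0
    # 0 when one muscle acts alone (pure vertical/horizontal movement),
    # 1 when both muscles co-activate equally (pure diagonal movement)
    return (2 * abs(v[0]) * abs(v[1])) / (speed ** 2)

# Isolated biceps contraction: cursor moves straight up, no penalty
print(emg_to_cursor(1.0, 0.0))     # -> [0. 1.]
print(coupling_penalty(1.0, 0.0))  # -> 0.0
# Equal co-activation: diagonal movement, maximal penalty
print(coupling_penalty(1.0, 1.0))  # -> 1.0
```

Under a mapping like this, the only way to earn a high score is to learn to drive each muscle in isolation, which is exactly the decoupling the training targets.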

Muscles tend to produce more electrical activity when contracting isometrically (without moving) than when moving the arm freely, but the ultimate goal of this training is to enable home use. One goal of this study was therefore to see whether participants could benefit as much without the arm restrained as with it restrained.

Participants were divided into three groups: 60 minutes of training with the arm restrained; 90 minutes of training with the arm restrained; and 90 minutes of training without arm restraint. Overall, arm function improved substantially in all groups, and there was no significant difference between the three groups, the release concludes.

[Source(s): Northwestern University, News-Medical Life Sciences]


via Video Game-Integrated Training Device Helps Stroke Survivors Regain Arm Function – Rehab Management


[WEB SITE] An AFO for Every Need


New York OMC offers a proven collection of AFOs. High-quality products offered by New York OMC include the STI-Dynamic AFO, STI-Dynamic Overlap Joint, BRK-1 AFO, BRX-1 AFO, QNS-1 AFO, and Superflex AFO. The STI-Dynamic AFO treats indications including PTTD, ankle injuries and sprains, medial and lateral ankle instability, subtalar joint instability, sinus tarsi, and foot drop. The STI-Dynamic Overlap Joint features custom-molded overlap uprights, three Velcro closures, a soft interface lining, high medial and lateral flanges for increased support, and hindfoot and forefoot posting. The BRK-1 AFO utilizes rigid, semi-rigid, or flexible reinforcement materials to match the exact diagnosis with the appropriate amount of support required. The BRX-1 AFO is suitable for indications including PTTD, Charcot deformities, ankle instabilities, abnormal ankle alignment, traumatic injury, and arthritis, and is available with lace, Velcro, or boot-hook closures. The Superflex AFO treats many of the same indications as the BRX-1 AFO and is built with durable vegetable-dyed leather and a full interface lining that cushions and protects fragile skin. The QNS-1 AFO is built with a 1” custom padded collar for increased patient comfort and compliance; its flexible, molded, integrated design allows for dynamic biomechanical function.

via An AFO for Every Need | Lower Extremity Review Magazine


[ARTICLE] Technology-based cognitive training and rehabilitation interventions for individuals with mild cognitive impairment: a systematic review



Individuals with mild cognitive impairment (MCI) are at heightened risk of developing dementia. Rapid advances in computing technology have enabled researchers to conduct cognitive training and rehabilitation interventions with the assistance of technology. This systematic review aims to evaluate the effects of technology-based cognitive training or rehabilitation interventions to improve cognitive function among individuals with MCI.


We conducted a systematic review using the following inclusion criteria: participants with MCI, empirical studies, and evaluation of a technology-based cognitive training or rehabilitation intervention. Twenty-six articles met the criteria.


Studies were characterized by considerable variation in study design, intervention content, and technologies applied. The major types of technologies applied included computerized software, tablets, gaming consoles, and virtual reality. Use of technology to adjust the difficulties of tasks based on participants’ performance was an important feature. Technology-based cognitive training and rehabilitation interventions had significant effect on global cognitive function in 8 out of 22 studies; 8 out of 18 studies found positive effects on attention, 9 out of 16 studies on executive function, and 16 out of 19 studies on memory. Some cognitive interventions improved non-cognitive symptoms such as anxiety, depression, and ADLs.


Technology-based cognitive training and rehabilitation interventions show promise, but the findings were inconsistent due to the variations in study design. Future studies should consider using more consistent methodologies. Appropriate control groups should be designed to understand the additional benefits of cognitive training and rehabilitation delivered with the assistance of technology.


Due to the aging of the world’s population, the number of people who live with dementia is projected to triple to 131 million by the year 2050. Development of preventative strategies for individuals at higher risk of developing dementia is an international priority. Mild cognitive impairment (MCI) is regarded as an intermediate stage between normal cognition and dementia. Individuals with MCI usually have significant cognitive complaints, yet do not exhibit the functional impairments required for a diagnosis of dementia. These people typically have a faster rate of progression to dementia than those without MCI, but the cognitive decline among MCI subjects has the potential to be improved. Previous systematic reviews of cognitive intervention studies, both cognitive training and cognitive rehabilitation, have demonstrated promising effects on improving cognitive function among subjects with MCI.

Recently, rapid advances in computing technology have enabled researchers to conduct cognitive training and rehabilitation interventions with the assistance of technology. A variety of technologies, including virtual reality (VR), interactive video gaming, and mobile technology, have been used to implement cognitive training and rehabilitation programs. Potential advantages of technology-based interventions include enhanced accessibility and cost-effectiveness, an immersive and comprehensive user experience, and adaptive responses based on individual performance. Many computerized cognitive intervention programs are easily accessed through a computer or tablet, and the technology can objectively collect data during the intervention to provide real-time feedback to participants or therapists. Importantly, interventions delivered using technology have shown better effects than traditional cognitive training and rehabilitation programs in improving cognitive function and quality of life. The reasons for this superiority are not well understood but could be related to usability and to motivational factors arising from the real-time interaction and feedback received from the training system.

Three recent reviews of cognitive training and rehabilitation for individuals with MCI and dementia suggest that technology holds promise to improve both cognitive and non-cognitive outcomes. The reviews conducted by Coyle et al. and Chandler et al. were limited by accessing articles from only two databases and did not comprehensively cover available technologies. Hill et al. limited their review to papers published up to July 2016 and included only older adults aged 60 and above. More technology-based intervention studies have been conducted since then, and including only studies with older adults aged 60 and above could limit the scope of the review, given that adults can develop early-onset MCI in their 40s. Therefore, the purpose of this review is to 1) capture more studies using technology-based cognitive interventions by conducting a more comprehensive search using additional databases; 2) understand the effect of technology-based cognitive interventions on improving abilities among individuals with MCI; and 3) examine the effects of multimodal technology-based interventions and their potential superiority compared to single-component interventions. […]


Continue —> Technology-based cognitive training and rehabilitation interventions for individuals with mild cognitive impairment: a systematic review


[BLOG POST] Sleep Evaluation and Treatment Support Patient Outcome

(Note: In this guest blog, Grace Griesbach, PhD, CNS’ National Director of Clinical Research, explains that proper sleep is a vital component in the rehabilitation of brain injury.)

Historically, quotes referring to sleep have been associated with well-being. This is not without substance. The importance of sleep is appreciated when one considers that it is observed across the vast majority of animal species. In humans and other higher mammals, lack of sleep has been demonstrated to impact physical, cognitive and emotional functions negatively. Physical consequences of sleep deprivation include compromised immune responses, as well as hormonal and metabolic alterations that in turn will impact overall health. Sleep also promotes emotional and psychological well-being. As for cognitive functions, sleep has been shown to facilitate learning and memory.

Moreover, animal studies have shown that neural plasticity changes during sleep allow for better memory. Sleep-driven neural plasticity is also evident during brain development and during times when healing is necessary. Given the multiple functions of sleep, it is evident that sleep-related problems should not be ignored.

Unfortunately, the prevalence of sleep disorders following brain injury is notably higher than in the general population. Many of those who have endured a traumatic brain injury or stroke have difficulty initiating or maintaining sleep. Daytime sleepiness (hypersomnia) and fatigue are frequently reported complaints associated with insomnia. Apnea, a common breathing-related sleep disorder, is frequently observed during the chronic brain injury period. Apnea is defined as breathing cessation for fixed periods during sleep; it contributes to arousals throughout the night, promoting fragmented sleep.

Sleep follows a particular overnight pattern consisting of repeated sleep cycles. Each cycle is comprised of one rapid eye movement (REM) stage and three non-REM stages. These stages are defined by different brain activity patterns that have been associated with particular physiological and neural plasticity processes.

Studies focused on proper sleep closely examine brain wave activity and body physiology throughout the various sleep stages. Some stages are particularly important for memory, emotional well-being, and cognitive function, and may be compromised by interrupted sleep. The gold standard for evaluating sleep is an overnight polysomnography study performed by a certified sleep technologist. The technologist places electrodes on the scalp of the patient to record brain activity. Breathing, heart rate, oxygen levels, and limb movement are also recorded during sleep. Results from these recordings are sent to a board-certified sleep medicine physician, who creates a report on the diagnosis and a treatment plan.

Centre for Neuro Skills (CNS) offers a comprehensive multidisciplinary approach to rehabilitation. This entails addressing key factors that impact recovery such as sleep. CNS has opened sleep laboratories within the residential buildings of our programs in Dallas, Texas and Bakersfield, California. All CNS facilities can arrange for a sleep evaluation at one of the labs, based on a patient’s needs and treatment plan. Sleep evaluations of CNS patients allow for the detection of sleep-related issues that are likely to hinder recovery. CNS sleep facilities also provide research opportunities to deepen understanding of sleep-related issues after brain injury. Findings from these studies will help improve treatment and develop new therapeutic strategies.


via Sleep Evaluation and Treatment Support Patient Outcome – Neuro Landscape


[JUST ACCEPTED] “Increased Sensorimotor Cortex Activation with Decreased Motor Performance during Functional Upper Extremity Tasks Post-Stroke” – Abstract

The following article has just been accepted for publication in Journal of Neurologic Physical Therapy.

“Increased Sensorimotor Cortex Activation with Decreased Motor Performance during Functional Upper Extremity Tasks Post-Stroke”

By Shannon B Lim, MSc, MPT; Janice J Eng

Provisional Abstract:

Background: Current literature has focused on identifying neuroplastic changes associated with stroke through tasks and in positions that are not representative of functional rehabilitation. Emerging technologies such as functional near-infrared spectroscopy (fNIRS) provide new methods of expanding the area of neuroplasticity within rehabilitation.
Purpose: This study determined the differences in sensorimotor cortex activation during unrestrained reaching and gripping after stroke.
Methods: Eleven healthy and 11 chronic post-stroke individuals completed reaching and gripping tasks under three conditions, using 1) the stronger arm, 2) the weaker arm, and 3) both arms together. Performance and sensorimotor cortex activation (measured with fNIRS) were collected. Group and arm differences were calculated using mixed ANCOVA (covariate: age). Pairwise comparisons were used for post-hoc analyses. Partial Pearson’s correlations between performance and activation were assessed for each task, group, and hemisphere.
Results: Larger sensorimotor activations in the ipsilesional hemisphere were found for the stroke compared to healthy group for reaching and gripping conditions despite poorer performance. Significant correlations were observed between gripping performance (with the weaker arm and both arms simultaneously) and sensorimotor activation for the stroke group only.
Discussion: Stroke leads to significantly larger sensorimotor activation during functional reaching and gripping despite poorer performance. This may indicate an increased sense of effort, decreased efficiency, or increased difficulty after stroke.
Conclusion: fNIRS can be used for assessing differences in brain activation during movements in functional positions after stroke. This can be a promising tool for investigating possible neuroplastic changes associated with functional rehabilitation interventions in the stroke population.
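The partial correlations the abstract mentions (task performance versus cortical activation, controlling for age) can be illustrated by residualizing both variables on the covariate. This is a generic sketch of the statistical technique, not the study’s analysis; the data and variable names below are synthetic:

```python
import numpy as np

def partial_pearson(x, y, covar):
    """Partial Pearson correlation between x and y, controlling for covar.

    Regress each variable on the covariate (plus an intercept) and
    correlate the residuals -- equivalent to the partial correlation.
    """
    design = np.column_stack([np.ones_like(covar), covar])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: both measures depend on age but not on each other,
# so the raw correlation is inflated by the shared age effect while the
# partial correlation removes it.
rng = np.random.default_rng(0)
age = rng.uniform(40, 80, 50)
performance = 0.5 * age + rng.normal(0, 5, 50)
activation = 0.5 * age + rng.normal(0, 5, 50)
print(np.corrcoef(performance, activation)[0, 1])  # inflated by age
print(partial_pearson(performance, activation, age))  # near zero
```

Controlling for age this way is what lets a study attribute a performance–activation correlation to stroke-related factors rather than to aging alone.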

Supplemental Digital Content 1. Video abstract .mp4

Want to read the published article?
To be alerted when this article is published, please sign up for the Journal of Neurologic Physical Therapy eTOC.


via JUST ACCEPTED: “Increased Sensorimotor Cortex Activation with Decreased Motor Performance during Functional Upper Extremity Tasks Post-Stroke”


[Editorial] Functional brain mapping of epilepsy networks: methods and applications – Neuroscience

This multidisciplinary research topic is a collection of contemporary advances in neuroimaging applied to mapping functional brain networks in epilepsy. With technology such as simultaneous electroencephalography and functional magnetic resonance imaging (EEG-fMRI) now more readily available, it is possible to non-invasively map epileptiform activity throughout the entire brain at millimetre resolution. This research topic includes original research studies, technical notes and reviews of the field. Due to the multidisciplinary nature of the domain, the topic spans two journals: Frontiers in Neurology (Section: Epilepsy) and Frontiers in Neuroscience (Section: Brain Imaging Methods).
In this editorial we consider the outcomes of the multidisciplinary work presented in the topic. With the benefit of time elapsed since the original papers were published, we can see that the works are making a substantial impact in the field. At the time of writing, this topic had well over 27,000 full-paper downloads (including over 18,000 for the 15 papers in the Epilepsy section, and over 9,000 for the 8 papers in the Brain Imaging Methods section). Several papers in the topic have climbed the tier in Frontiers and received an associated invited commentary, demonstrating there is substantial interest in this research area.
The topic’s review papers set the scene for the original research papers and synthesise contemporary thinking in epilepsy research and neuroimaging methods. We see that epilepsy, whether of a “generalised” or “focal” origin, is increasingly recognised as a disorder of large-scale brain networks. At one level it is self-evident that otherwise healthy functional networks are recruited during epileptic activity, as this is what generates patient perceptions of their epileptic aura. For example, the epileptic aura of mesial temporal lobe epilepsy can include an intense sensation of familiarity (déjà vu) associated with involvement of the hippocampus, and unpleasant olfactory auras which may reflect involvement of adjacent olfactory cortex. As seizures spread more widely throughout the brain, presumably along pre-existing neural pathways, patients lose control of certain functions; for example, their motor system in the case of generalised convulsions, or aspects of awareness in seizures that remain localised to non-motor brain regions. Yet these functions return when the seizure abates, implying involved brain regions are also responsible for normal brain function. What has been less clear, and difficult to investigate until the advent of functional neuroimaging, is precisely which brain networks are involved (especially in ‘generalised’ epilepsy syndromes), and the extent to which functional networks are perturbed during seizures, inter-ictal activity, and at other times.
Functional imaging evidence of brain abnormalities in temporal lobe epilepsy is explored in (Caciagli et al., 2014), including evidence of dysfunction in limbic and other specific brain networks, as well as global changes in network topography derived from resting-state fMRI. Archer et al systematically review the functional neuroimaging of a particularly severe epilepsy phenotype, Lennox-Gastaut Syndrome (LGS), illustrating well how different forms of brain pathology can manifest in a similar clinical phenotype, simply by the nature of the healthy networks that the underlying pathology perturbs (Archer et al., 2014). Similarly, the mechanisms of absence seizure generation are reviewed by (Carney and Jackson, 2014), revealing that it too has a signature pattern of large-scale functional brain network perturbation. The ability to make such observations has considerable clinical significance, as highlighted in the review by (Pittau et al., 2014).
The tantalising proposition that there may be a common treatment target for all focal epilepsy phenotypes is also explored in a review of the piriform cortex by (Vaughan and Jackson, 2014). The piriform cortex was first implicated as a common brain region associated with spread of interictal discharges in focal epilepsy in an experiment that analysed the spatially normalised functional imaging data of a heterogeneous group of focal epilepsy patients (Laufs et al., 2011). This finding, since replicated (Flanagan et al., 2014), led Vaughan & Jackson to explore in detail what is known of the piriform cortex. Their findings reveal the piriform has several features that likely predispose it to involvement in focal epilepsy, and features that also explain many of the peculiar symptoms experienced by patients, from olfactory auras to the characteristic nose-wiping that many patients perform postictally. This work points to the need for future studies to determine whether the piriform might be an effective target for deep brain stimulation or other targeted therapy to prevent the spread of epileptiform activity.
Original research
Temporal lobe epilepsy is investigated in several papers in this topic. One of these studies also introduces a new exploratory method, shared and specific independent component analysis (SSICA), that builds upon independent component analysis to perform between-group network comparison (Maneshi et al., 2014). In application to mesial temporal lobe epilepsy (MTLE) and healthy controls, three distinct reliable networks were revealed: two that exhibited increased activity in patients (a network including hippocampus and amygdala bilaterally, and a network including postcentral gyri and temporal poles), and a network identified as specific to healthy controls (i.e. effectively decreased in patients, consisting of bilateral precuneus, anterior cingulate, thalamus, and parahippocampal gyrus). These findings give mechanistic clues to the cognitive impairments often reported in patients with MTLE. Further clues are revealed in a study of the dynamics of fMRI and its functional connectivity (Laufs et al., 2014). Compared to healthy controls, temporal variance of fMRI was seen to be most increased in the hippocampi of TLE patients, and variance of functional connectivity to this region was increased mainly in the precuneus, the supplementary and sensorimotor, and the frontal cortices. More severe disruption of connectivity in these networks during seizures may explain patients’ cognitive dysfunction (Laufs et al., 2014). Yang and colleagues also show that it may be possible to use fMRI functional connectivity to lateralise TLE (Yang et al., 2015), which could be a useful clinical tool.
Mechanistic explanations of symptomatology beyond the seizure onset zone can also be revealed with conventional nuclear medicine techniques such as 18F-FDG-PET. This is demonstrated in a study of Occipital Lobe Epilepsy by Wong and colleagues, who observed that patients with automatisms have metabolic changes extending from the epileptogenic occipital lobe into the ipsilateral temporal lobe, whereas in patients without automatisms the 18F-FDG-PET was abnormal only in the occipital lobe (Wong et al., 2014).
The clinical significance of the ability to non-invasively study functional brain networks extends to understanding the impact of surgery on brain networks. This Frontiers research topic includes an investigation by Doucet and colleagues revealing that temporal lobe epilepsy and surgery selectively alter the dorsal, rather than the ventral, default-mode network (Doucet et al., 2014).
Another approach to better understand the mechanisms of seizure onset and broader symptomatology is computational modelling. It can track aspects of neurophysiology that cannot be readily measured: for example, effective connectivity and mean membrane potential dynamics are shown by (Freestone et al., 2014) to be estimable using model inversion. In a proof-of-principle experiment with simulated data, they demonstrate that by tailoring the model to subject-specific data, it may be possible for the framework to identify a seizure onset site and the mechanism for seizure initiation and termination. Also in this topic, Petkov and colleagues utilise a computational model of the transition into seizure dynamics to explore how conditions favourable for seizures relate to changes in functional networks. They find that networks with higher mean node degree are more prone to generating seizure dynamics in the model, thus providing a mathematical mechanistic explanation for increasing node degree causing increased ictogenicity (Petkov et al., 2014).
Seizure prediction is an area of considerable research, and in this topic Cook and colleagues reveal intriguing characteristics in the long-term temporal pattern of seizure onset. They confirmed that human inter-seizure intervals follow a power law, and they found evidence of long-range dependence. Specifically, the dynamics that led to the generation of a seizure in most patients appeared to be affected by events that took place much earlier (as little as 30 minutes prior and up to 40 days prior in some patients) (Cook et al., 2014). The authors rightly note that this information could be valuable for individually-tuned seizure prediction algorithms.
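The power-law behaviour of inter-seizure intervals can be illustrated with the standard continuous maximum-likelihood estimator for a power-law exponent. This is a generic sketch of the technique, not the analysis of Cook et al.; the intervals below are synthetic, not patient recordings:

```python
import numpy as np

def powerlaw_mle_alpha(intervals, xmin):
    """Continuous power-law exponent via maximum likelihood:
    alpha = 1 + n / sum(ln(x / xmin)), using only values >= xmin.

    A heavy-tailed (power-law) interval distribution means very long
    seizure-free gaps are far more common than an exponential
    (memoryless) model would predict.
    """
    x = np.asarray(intervals, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic intervals drawn from a power law with alpha = 2.5, using
# inverse-CDF sampling: x = xmin * (1 - u)^(-1 / (alpha - 1))
rng = np.random.default_rng(1)
u = rng.uniform(size=20000)
samples = 1.0 * (1.0 - u) ** (-1.0 / 1.5)
print(powerlaw_mle_alpha(samples, xmin=1.0))  # recovers ~2.5
```

An estimator like this only characterises the marginal distribution of intervals; the long-range dependence the study reports requires separate analyses of correlations between successive intervals.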
Several methodological papers in this Frontiers Topic prove there remains considerable potential to improve neuroimaging methods as applied to the study of epilepsy. For example, (Mullinger et al., 2014) reveal the critical importance of the accuracy of physical models if one is to optimise lead positioning in functional MRI with simultaneous EEG. Confirming with computer modelling and phantom measurements that lead positioning can have a substantial effect on the amplitude of the MRI gradient artefact present on the EEG, they optimised the positions in a novel cap design. However, whilst this substantially reduced gradient artefact amplitude on the phantom, it made things worse when used on human subjects. Thus, improvement is required in model accuracy if one is to make accurate predictions for the human context.
Reduction of artefact, particularly cardioballistic and non-periodic motion artefact, remains a challenge for off-the-shelf MRI-compatible EEG systems. However, for over a decade, the Jackson group in Melbourne has dealt well with this issue using insulated carbon-fibre artefact detectors, physically but not electrically attached to the scalp (Masterton et al., 2007). In the present topic, they provide detailed instructions for building such detectors and interfacing them with a commercially available MRI-compatible EEG system (Abbott et al., 2015). This team also previously developed event-related ICA (eICA), to map fMRI activity associated with inter-ictal events observed on EEG (Masterton et al., 2013b). The method is capable of distinguishing separate sub-networks characterised by differences in spatio-temporal response (Masterton et al., 2013a). The eICA approach frees one from assumptions regarding the shape of the time-course of the neuronal and haemodynamic response associated with inter-ictal activity (which can vary according to spike type, can vary from conventional models and may include pre-spike activity (Masterton et al., 2010); issues explored further in the present topic by (Faizo et al., 2014) and (Jacobs et al., 2014)). However, the effectiveness of eICA can be affected by fMRI noise or artefact. In the present topic we see that application of a fully automated de-noising algorithm (SOCK) is now recommended, as it can substantially improve the quality of eICA results (Bhaganagarapu et al., 2014).
The ability to detect activity associated with inter-ictal events can also be improved with faster image acquisition. Magnetic Resonance Encephalography (MREG) is a particularly fast fMRI acquisition method (TR=100ms) that achieves its speed using an under-sampled k-space trajectory (Assländer et al., 2013; Zahneisen et al., 2012). This has now been applied in conjunction with simultaneous EEG, to reveal that the negative fMRI response in the default-mode network is larger in temporal compared to extra-temporal epileptic spikes (Jacobs et al., 2014).
The default mode network and its relationship to epileptiform activity is also examined in several other papers in this topic. In a pilot fMRI connectivity study of Genetic Generalised Epilepsy and Temporal Lobe Epilepsy patients, (Lopes et al., 2014) observed that intrinsic connectivity in portions of the default mode network appears to increase several seconds prior to the onset of inter-ictal discharges. The authors suggest that the default mode network connectivity may facilitate IED generation. This is plausible, although causality is difficult to establish and it is possible that something else drives both the connectivity and EEG changes (Abbott, 2015).
Complicating matters further is the question of what connectivity means. There are many ways in which connectivity can be assessed. Jones and colleagues have discovered that some of these do not necessarily correlate well with each other. They examined connectivity between measurements made with intracranial electrodes, connectivity assessed using simultaneous BOLD fMRI and intracranial electrode stimulation, connectivity between low-frequency voxel measures of fMRI activity, and a diffusion MRI measure of connectivity – an integrated diffusivity measure along a connecting pathway (Jones et al., 2014). They found only mild correlation between these four measures, implying they assess quite different features of brain networks. More research in this domain would therefore be valuable.
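As a concrete illustration of just one of these measures, connectivity between low-frequency voxel (or region) time courses of fMRI activity is often summarised as a Pearson correlation matrix. A minimal sketch on synthetic signals (this is illustrative only, not any of the cited pipelines):

```python
import numpy as np

def correlation_connectivity(ts, threshold=None):
    """Functional connectivity as pairwise Pearson correlation.

    ts: array of shape (n_regions, n_timepoints) of BOLD-like signals.
    Returns the (n_regions, n_regions) correlation matrix; if `threshold`
    is given, returns a binary adjacency matrix instead.
    """
    conn = np.corrcoef(ts)          # Pearson r between every pair of regions
    np.fill_diagonal(conn, 0.0)     # ignore self-connections
    if threshold is not None:
        return (np.abs(conn) > threshold).astype(int)
    return conn

# Two strongly coupled synthetic "regions" and one independent region
rng = np.random.default_rng(0)
base = rng.standard_normal(200)
ts = np.vstack([base,
                base + 0.1 * rng.standard_normal(200),   # near-copy of region 0
                rng.standard_normal(200)])                # unrelated signal
adj = correlation_connectivity(ts, threshold=0.5)
```

The point of Jones et al.'s comparison is precisely that a matrix like `adj` need not agree with electrode-based, stimulation-based, or diffusion-based estimates of the same connections.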
Whatever the measure of connectivity utilised, most evidence of alterations in connectivity in epilepsy has been obtained from comparison of a group of patients with a group of healthy controls. However, a new method called Detection of Abnormal Networks in Individuals (DANI) is now proposed by Dansereau et al. (2014). This method is designed to detect the organisation of brain activity in stable networks, which the authors call modularity. The conventional definition of modularity refers to the degree to which networks can be segregated into distinct communities, usually estimated by maximising within-group nodal links and minimising between-group links (Girvan and Newman, 2002; Rubinov and Sporns, 2010). Dansereau et al. take a novel approach to this concept, instead evaluating the stability of each resting-state network across replications of a bootstrapped clustering method (Bellec et al., 2010). In the DANI approach, the degree to which an individual's functional connectivity modular pattern deviates from a population of controls is quantified. Whilst application of the method to epilepsy patients is preliminary, significant changes, likely related to the epileptogenic focus, were reported in 5 of the 6 selected focal epilepsy patients studied. In several patients, modularity changes in regions distant from the focus were also observed, adding further evidence that the pervasive network effects of focal epilepsy can extend well beyond the seizure onset zone.
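The conventional modularity Q referred to above can be computed directly from an adjacency matrix and a candidate partition. A small sketch of Newman's formulation on a toy graph (this illustrates the standard definition only, not the bootstrapped DANI procedure):

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q for a given node partition.

    adj: symmetric adjacency matrix; labels: community label per node.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    where k_i is node degree and 2m the total degree.
    """
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                      # node degrees
    two_m = adj.sum()                        # 2m
    delta = np.equal.outer(labels, labels)   # same-community indicator
    return ((adj - np.outer(k, k) / two_m) * delta).sum() / two_m

# Two 3-node triangles joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
Q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # partition matching the triangles
Q_bad = modularity(A, [0, 1, 0, 1, 0, 1])    # mismatched partition
```

A partition that respects the graph's dense communities yields a higher Q than an arbitrary one, which is the sense in which maximising within-group links estimates community structure.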
When it comes to applying EEG-fMRI to detect the seizure onset zone, there is typically a trade-off between specificity and sensitivity, with the added complication that activity or network changes may also occur in brain regions other than the ictal onset zone. The distant activity may be due to activity propagation from the onset zone, to pervasive changes in functional networks creating a 'permissive state', or in some cases might be the brain's attempt to prevent seizures. The specificity and sensitivity of EEG-fMRI for detecting the ictal onset zone are explored by Tousseyn et al. (2014). They determined how rates of true and false positives and negatives varied with voxel height and cluster size thresholds, both for the full statistical parametric map and for the single cluster containing the voxel of maximum statistical significance. The latter conferred the advantage of reducing false positives remote from the seizure onset zone, and as a result appeared to be more robust to variations in statistical threshold than analysis of the entire map. One needs to be cautious, however, given the small number of patients studied and the fact that the "optimal" settings were determined using receiver operating characteristic curves of the same study data. It remains to be seen how well this might generalise to a different study.
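The threshold sweep behind such a receiver operating characteristic analysis can be sketched as follows, on entirely synthetic data in which "onset zone" voxels carry a raised statistic (purely to illustrate how raising the voxel-height threshold trades sensitivity for specificity):

```python
import numpy as np

def sensitivity_specificity(stat_map, onset_mask, threshold):
    """True/false positive trade-off at one voxel-height threshold.

    stat_map: 1D array of voxel statistics.
    onset_mask: boolean array, True where the (known) onset zone lies.
    """
    detected = stat_map > threshold
    tp = np.sum(detected & onset_mask)       # hits inside the onset zone
    fn = np.sum(~detected & onset_mask)      # misses
    fp = np.sum(detected & ~onset_mask)      # remote false positives
    tn = np.sum(~detected & ~onset_mask)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic map: 50 of 1000 voxels belong to the onset zone (signal +3.0)
rng = np.random.default_rng(1)
onset = np.zeros(1000, dtype=bool)
onset[:50] = True
stats = rng.standard_normal(1000) + 3.0 * onset
# Sweeping the threshold traces out the sensitivity/specificity trade-off
curve = [sensitivity_specificity(stats, onset, t) for t in (0.0, 1.5, 3.0)]
```

As the threshold rises, sensitivity falls while specificity climbs; the caution in the paragraph above is that picking the "best" point on this curve from the same data that evaluates it is circular.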
Perhaps the greatest potential for future advancement in EEG-fMRI lies in methods to make the most of all the information captured by each modality. This is highlighted by the work of Deligianni et al., who demonstrate with a novel analysis framework the potential to obtain more information on the human functional connectome by utilising EEG and fMRI together (Abbott, 2016; Deligianni et al., 2014).
We hope that you enjoy this collection of papers providing a broad snapshot of advances in brain mapping methods and application to better understand epilepsy.

via Frontiers | Editorial: Functional brain mapping of epilepsy networks: methods and applications | Neuroscience


[ARTICLE] A Systematic Review of Usability and Accessibility in Tele-Rehabilitation Systems – Full Text


The appropriate development of tele-rehabilitation platforms requires the involvement and iterative assessment of potential users and usability experts. Usability is the degree to which an interactive system can be used by specified end users to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. Usability studies need to be complemented by an accessibility assessment. Accessibility indicates how easy it is for a person to access content, regardless of their physical, educational, social, psychological, or cultural conditions. This chapter presents a systematic review of the literature on usability and accessibility in tele-rehabilitation platforms, carried out with the PRISMA method. To do so, we searched the ACM, IEEE Xplore, Google Scholar, and Scopus databases for the most relevant papers of the last decade. The main usability finding is that user-experience studies predominate over heuristic studies, and the questionnaire most used in user-experience studies is the SUS. The main accessibility finding is that the topic is only marginally studied; moreover, Web applications do not apply the physical and cognitive accessibility standards defined by WCAG 2.1.

1. Introduction

Innovation and technological advances yield valuable products and services that improve citizens' quality of life. In recent decades, the domain of telemedicine has reported advances in the control, monitoring, and evaluation of various clinical conditions [1]. In the field of rehabilitation, numerous studies and state-of-the-art reviews, from the informatics perspective [2] and across different areas of application [3, 4], show the effectiveness and advantages of remote rehabilitation (or tele-rehabilitation) [5, 6]. Tele-rehabilitation aims to reduce the time and cost of offering rehabilitation services, with the main objective of improving patients' quality of life [7]. Tele-rehabilitation cannot replace traditional neurological rehabilitation [8]; it is considered a partial replacement for face-to-face physical rehabilitation [9]. Tele-rehabilitation mainly uses two groups of technologies: (1) wearable devices and (2) vision-based systems built on depth cameras and intelligent algorithms [10]. In [5], the authors describe and analyze typical characteristics and requirements of tele-rehabilitation systems.

The design and conception of tele-rehabilitation platforms that do not consider guidelines, metrics, patterns, principles, or proven success factors can compromise access to the service as well as its effectiveness, quality, and usefulness, causing confusion, errors, stress, and abandonment of the rehabilitation plan. Guaranteeing the correct use of these applications therefore requires incorporating usability studies throughout the life cycle of the interactive system. For this reason, human-factors aspects of tele-rehabilitation systems have been studied with the aim of providing accessible, efficient, usable, and understandable systems [11, 12].

User-centered agile development approaches allow developers to specify and design the interfaces of an interactive system in a flexible and effective way [13, 14]. The agile development life cycle centered on user experience (UX-ADLC) evaluates system interfaces iteratively, with each iteration informed by the results, errors, and usability problems of the previous one [15]. Usability studies are therefore an essential aspect of technology development [16], and designers need to meet usability and user experience objectives while adhering to agile principles of software development. Formative and summative usability tests are evaluation methods widely adopted in user-centered design (UCD) [15] and the agile UX development life cycle. Formative usability is an iterative test-and-refine method performed in the early steps of the design process to detect and fix usability problems [15]; summative usability assures, in the later phases of design, the quality of the user experience (UX) of the software product. Because the focus is on short work periods (iterations) in which both kinds of test must be accommodated, quick formative usability tests should be carried out to fulfill UX goals while satisfying end users' needs [17].

The ISO 9241-11 standard [18] is a framework for understanding and applying the concept of usability to situations in which people use interactive systems and other types of systems (including built environments), products (including industrial and consumer products) and services (including technical and personal services). Likewise, the usability standard ISO 9241-11 facilitates the measurement of the use of a product with the aim of achieving specific objectives with effectiveness, efficiency and satisfaction in a context of specific use [18].

Usability can be studied through software evaluation methods widely accepted in user-centered design (UCD) [15]. It can be formative or summative [8]. Formative usability consists of a set of iterative tests carried out in the early stages of the design process, aimed at refining and improving the software product and at detecting and solving potential usability problems. As a complement, summative usability provides an evaluation of the user experience (UX) of a software product in development. Formative usability facilitates decision making during design and development, while summative usability is useful for studying the user experience (UX).

Tullis and Stetson [19] evaluated the effectiveness of the questionnaires most used to measure summative usability. They found that the System Usability Scale (SUS) [20] and the IBM Computer System Usability Questionnaire (CSUQ) [21] are the most effective. SUS provides a quick way to measure usability through user experience: it is a 10-item questionnaire with a 5-point Likert scale ranging from "Strongly Agree" to "Strongly Disagree." The CSUQ focuses on three main aspects: (1) utility, which covers users' opinions of ease of use, ease of learning, speed of operation, efficiency in completing tasks, and subjective feeling; (2) quality of the information, which studies the user's subjective view of the system's error handling and the clarity and intelligibility of the information; and (3) quality of the interface, which measures the affective component of the user's attitude when using the system.
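SUS scoring follows a fixed rule (Brooke, 1996): odd-numbered, positively worded items contribute their score minus 1; even-numbered, negatively worded items contribute 5 minus their score; and the sum is scaled to a 0-100 range. A short sketch of that rule:

```python
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996).

    responses: list of ten 1-5 Likert answers, in questionnaire order.
    Odd-numbered items are positively worded (score - 1); even-numbered
    items are negatively worded (5 - score). The sum is scaled to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten answers on a 1-5 scale")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

# The most favourable possible answers score 100; all-neutral scores 50
assert sus_score([5, 1] * 5) == 100.0
assert sus_score([3] * 10) == 50.0
```

Note that a SUS score is not a percentage: 68 is often cited as the empirical average, so raw scores are usually interpreted against that benchmark rather than against 50.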

A large part of the tasks in tele-rehabilitation systems are carried out by patients being treated for a temporary disability. Given the special needs of these users, usability evaluations alone cannot guarantee an appropriate system design; accessibility studies are needed to provide the same means of use to all users of an interactive system. A study combining usability and accessibility was presented in [22]. It analyzes how remote and/or video monitoring technologies affect the accessibility, effectiveness, quality, and usefulness of the services offered by tele-rehabilitation systems. To do this, the authors provide an overview of the fundamentals of usability analysis and examine the strengths and limitations of various tele-rehabilitation technologies, considering how they interact with end users' clinical needs in terms of accessibility, effectiveness, quality, and utility of the service [22].

For many people, the Web is a fundamental part of everyday life, so a key aspect of an inclusive Website is its accessibility. For example, people who cannot use their arms to type can use a mouth stick [23]; someone who cannot hear well can use subtitles to understand a video; and a person with low vision can use a screen reader to listen to what is written on the screen [24]. Web accessibility thus means that people with disabilities can use the Web without barriers [24]. There are several accessibility-related standards that provide guidelines and recommendations [25]. Some of the most important, according to the International Organization for Standardization (ISO), are the following:

  • ISO 9241: covers ergonomics of human-computer interaction.

  • ISO 14915 (software ergonomics for multimedia user interfaces): multimedia controls and navigation structure.

  • ISO CD 9241-151 (software ergonomics for World Wide Web user interfaces): designs of Web user interfaces.

  • ISO TS 16071 (guidance on accessibility for human-computer interface): recommendations for the design of systems and software applications that allows a greater accessibility to computer systems for users with disabilities.

  • ISO CD 9241-20: accessibility guideline for information communication, equipment and services.

The Web Accessibility Initiative (WAI) [26] of the World Wide Web Consortium (W3C) [27] develops the Web Content Accessibility Guidelines (WCAG) [28] 2.0 (currently 2.1), which cover a wide range of recommendations for making Web content more accessible. These guidelines became a standard in 2012, ISO/IEC 40500. Complementary to them are the W3C User Agent Accessibility Guidelines [29] (UAAG) and Authoring Tool Accessibility Guidelines [30] (ATAG), which address the technological capability to modify the presentation according to device capabilities and user preferences.

The World Wide Web Consortium (W3C) provides international standards to make the Web as accessible as possible. It comprises the Web 2.0 Content Accessibility Guidelines (WCAG 2.0) [31], also known as the ISO 40500 [32], which are adapted to the European Standard called EN 301549 [33].

The current version of the accessibility guidelines is the "Web Content Accessibility Guidelines 2.1" (WCAG 2.1) [23]. WCAG 2.1 consists of 4 principles, 13 guidelines, and 78 compliance criteria. The four principles are as follows [34]:

Principle 1—perceptibility: refers to the good practices regarding the presentation of information and user interface components. It consists of 4 guidelines and 29 compliance criteria.

Principle 2—operability: the components of the user interface and navigation must be operable. It includes 5 guidelines and 29 compliance criteria.

Principle 3—comprehensibility: the information and user interface management must be understandable. It has 3 guidelines and 17 compliance criteria.

Principle 4—robustness: the content must be robust enough to rely on the interpretation of a wide variety of user agents, including assistive technologies. It includes 1 guideline and 3 compliance criteria.
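As a toy illustration of one perceptibility criterion (text alternatives for non-text content, WCAG guideline 1.1), a check for `<img>` tags missing an `alt` attribute can be written with Python's standard HTML parser. This covers a single success criterion; a real WCAG 2.1 audit spans all 78:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks an alt attribute,
    a minimal sketch of one WCAG 2.1 perceptibility check."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "<no src>"))

# Hypothetical page fragment: the second image violates guideline 1.1
checker = AltTextChecker()
checker.feed('<img src="exercise.png" alt="arm exercise"/>'
             '<img src="chart.png"/>')
```

Automated checks like this catch only a fraction of accessibility barriers; criteria such as comprehensible wording or meaningful alt text still require human review.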

Usability and accessibility can be combined to achieve the development of more accessible, efficient, equitable, and universal tele-rehabilitation systems. This chapter presents a systematic literature review of summative and formative usability studies, as well as accessibility studies, in the context of tele-rehabilitation systems. The remainder of the manuscript comprises four sections. Section 2 presents the method used to proceed with the systematic review. Section 3 is a description of the most relevant papers in usability applied to tele-rehabilitation. Section 4 describes the results regarding accessibility. Section 5 draws conclusions on the main findings of this literature review.[…]


Continue —> A Systematic Review of Usability and Accessibility in Tele-Rehabilitation Systems | IntechOpen

Figure 1.
PRISMA 2009 flow diagram chart that shows the selection process of the papers included in the literature review for usability.


[Abstract] Model Predictive Control for Upper Limb Rehabilitation Robotic System Under Noisy Condition


Demand for rehabilitation robots is increasing as the number of patients with neural disorders grows. These robots assist patients in therapeutic exercise, performing specific movements that mitigate neural disorders through gradual improvement of the patients' limb performance. Because robots are well suited to repetitive tasks, without the risks of monotony and fatigue, robot-mediated rehabilitation has proven to be a comfortable exercise rather than an exhausting treatment procedure. Rehabilitation robots require precise and efficient control of position and force, ensuring accuracy in exercise movements and patient safety, with an element of enjoyment. Nonlinear controllers are a good option to this end, as they can handle system uncertainties and parametric changes. This paper presents a Model Predictive Control (MPC) scheme to control a rehabilitation robot for the upper-limb extremity under disturbed conditions. The results show maximum overshoots of 1.4 and 1.0 and a steady-state error of 0.99 under disturbed and noisy conditions, respectively. Hence MPC proves to be a robust controller for external-disturbance rejection and noise filtering.
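The abstract does not give the controller's details, but the receding-horizon idea behind MPC can be sketched for a hypothetical first-order joint model under measurement noise. Every parameter below (plant gains, horizon, cost weights) is invented for illustration and is not from the paper:

```python
import numpy as np

def mpc_step(x, target, horizon=10, candidates=np.linspace(-1.0, 1.0, 41),
             a=0.9, b=0.5):
    """One receding-horizon step for the toy model x[k+1] = a*x[k] + b*u[k].

    Brute-force search over constant input sequences: simulate each
    candidate over the horizon, score it with a quadratic tracking cost,
    and return the best first input. A deliberately simple stand-in for
    a real MPC solver, just to show the receding-horizon principle.
    """
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a * xk + b * u
            cost += (xk - target) ** 2 + 0.01 * u ** 2  # tracking + effort
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Drive a joint angle from 0 toward 0.5 rad; the controller only sees a
# noisy measurement, mimicking the "noisy condition" of the paper.
rng = np.random.default_rng(2)
x, target = 0.0, 0.5
for _ in range(50):
    u = mpc_step(x + 0.01 * rng.standard_normal(), target)
    x = 0.9 * x + 0.5 * u          # true plant update
```

Because only the first input of each optimised sequence is applied before re-planning from the latest measurement, the controller continually corrects for disturbances and noise, which is the property the abstract highlights.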

via Model Predictive Control for Upper Limb Rehabilitation Robotic System Under Noisy Condition – IEEE Conference Publication


[ARTICLE] Boosting robot-assisted rehabilitation of stroke hemiparesis by individualized selection of upper limb movements – a pilot study – Full Text



Intensive robot-assisted training of the upper limb after stroke can reduce motor impairment, even at the chronic stage. However, the effectiveness of practice for recovery depends on the selection of the practised movements. We hypothesised that rehabilitation can be optimized by selecting the movements to be practised based on the trainee's performance profile.


We present a novel principle (‘steepest gradients’) for performance-based selection of movements. The principle is based on mapping motor performance across a workspace and then selecting movements located at regions of the steepest transition between better and worse performance.

To assess the benefit of this principle we compared the effect of 15 sessions of robot-assisted reaching training on upper-limb motor impairment, between two groups of people who have moderate-to-severe chronic upper-limb hemiparesis due to stroke. The test group (N = 7) received steepest gradients-based training, iteratively selected according to the steepest gradients principle with weekly remapping, whereas the control group (N = 9) received a standard “centre-out” reaching training. Training intensity was identical.


Both groups showed improvement in Fugl-Meyer upper-extremity scores (the primary outcome measure). Moreover, the test group showed significantly greater improvement (twofold) compared to control. The score remained elevated, on average, for at least 4 weeks, although the additional benefit of the steepest-gradients-based training diminished relative to control.


This study provides a proof of concept for the superior benefit of performance-based selection of practised movements in reducing upper-limb motor impairment due to stroke. This added benefit was most evident in the short term, suggesting that performance-based steepest-gradients training may be effective in increasing the rate of the initial phase of practice-based recovery; we discuss how long-term retention may also be improved.


Upper-limb (UL) motor impairment is a common outcome of stroke that can severely hamper basic daily living activities [1, 2, 3]. Training-based therapy can promote recovery, with the outcome depending on the intensity and duration of the intervention [4, 5, 6]. Robot-assisted training allows intense practice without increasing the individual's dependence on a therapist and can improve clinical scores of UL motor capacity [7, 8, 9]. However, the effects are usually small and provide limited improvement in motor function, especially in more severe hemiparesis [6, 7, 10, 11, 12]. Identifying training methods that can boost outcome is thus vital. Considering the extent of effort and sophistication invested in robot-assisted technology (e.g. [13, 14]), perhaps it is time to focus on how to optimise its utility (in terms of training principles). Recent attempts have focussed on creating training scenarios which are more engaging or which simulate daily living activities. However, the evidence for the added benefit of this approach is mixed [15]. Another approach is to individualize the difficulty of the practised task (e.g. [16, 17]). This is based on the idea that motor improvement depends on the ability to 'make sense' of information related to performance [18], and postulates that matching the challenge (difficulty) level of the training task to the current ability of the trainee would optimise motor learning [19]. Individualizing task difficulty is commonly achieved by adjusting the parameters controlling task demands (e.g. movement speed or distance, or amount of assistance) across a pre-selected standard set of movements, to match the ability of the individual. Yet, so far there is little evidence for the added benefit of this approach for UL motor recovery. Hence, individually adjusting the task difficulty level might, by itself, not suffice for boosting UL rehabilitation outcome.

We hypothesised instead that appropriate selection of the practised movements, in terms of the muscle coordination patterns, is key to improving motor recovery. UL hemiparesis can affect various aspects of control. Thus, different motor impairments may benefit from different training movements. For example, training with movements involving mainly patterns of intact muscle coordination is unlikely to contribute much to improving other, impaired movement patterns, regardless of the task difficulty level. Similarly, training that focuses only on movements involving severely impaired muscle control may contribute little, even if the task can be performed by compensatory movements. Hence, to be optimally effective, individualized training may need to be expressed not only in individually adjusting the level of difficulty of the task, but also in selecting tasks which are relevant for recovery. Little has been done to explore this possibility (for some attempts see [20, 21]). Here we present a novel method for performance-based selection of the set of movement tasks for robot-assisted training. The method depends on the availability of a motor performance "map" that profiles performance across a workspace. Movements are selected within intermediate levels of performance, based on the variation of performance across the map. Specifically, we predicted that optimal reduction of UL hemiparesis would be achieved by training with movements located at points on the map of steep transition (steep gradient) from high to low performance (Fig. 1), thus promoting the cascade of generalisation of motor improvement. Improved performance of movements at these steep-gradient locations on the performance map would steer improvement in neighbouring, but more impaired, regions and encourage recovery. Here, we present evidence supporting this hypothesis.
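In computational terms, the steepest-gradients principle reduces to finding where the derivative of the performance map is largest. A schematic sketch on a hypothetical 2D map (this is a reading of the principle as described, not the authors' implementation):

```python
import numpy as np

def select_steepest(performance, n_select=3):
    """Pick movement-map cells where performance changes most steeply.

    performance: 2D array scoring performance over, e.g., target location
    (rows) x movement direction (columns). Returns the (row, col) indices
    of the n_select cells with the largest local gradient magnitude.
    """
    gy, gx = np.gradient(performance.astype(float))  # finite differences
    magnitude = np.hypot(gy, gx)                     # gradient magnitude
    flat = np.argsort(magnitude, axis=None)[::-1][:n_select]
    rows, cols = np.unravel_index(flat, performance.shape)
    return list(zip(rows, cols))

# Hypothetical map: good performance on the left half, poor on the right,
# with a sharp transition in the middle columns
perf = np.ones((5, 8))
perf[:, 4:] = 0.0
picked = select_steepest(perf, n_select=5)
```

On this toy map every selected cell sits at the high-to-low transition (columns 3-4), which is exactly the boundary region the hypothesis targets for training.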

Fig. 1

Fig. 1Illustrative sketch of the principle of selection of trained movements, based on the steepest gradients in a hypothetical motor performance profile (e.g. reaching aiming; vertical axis) measured across some particular task parameter (e.g. movement direction; horizontal axis); for simplicity, we show here a single dimension. The selected movements (grey horizontal bars) correspond to the regions with the steepest performance gradients, indicated by dashed ellipses. This movement selection principle can be applied where movement tasks can be defined by one or more continuous parameters, i.e. in a 1D, 2D, or higher dimensional map as long as the derivative of performance can be calculated. In this study we applied this principle on two measures of reaching performance (ability to move and ability to aim) each measured across two dimensions of the task (target location and movement direction)

To apply our method we first developed a novel principle for mapping robot-assisted reaching performance across two dimensions, target location and movement direction [22], informing us about postural and movement-related aspects of motor control, respectively—key factors in the planning and execution of reaching movements [23, 24, 25]. The performance maps then served to select movement sets for training, based on our "steepest gradients" principle. To test our hypothesis, namely that training based on this principle would lead to superior recovery, we compared the outcome of 15 sessions of robot-assisted training between two groups of people who have severe-to-moderate chronic UL hemiparesis due to stroke, differing only in the selection of trained movements. In one group the selection was based on the steepest performance gradients principle (updated weekly), whereas the other group was trained with a fixed set of centre-out reaching movements regardless of the participant's performance profile, as commonly used in robot-assisted UL therapy [26].[…]


Continue —> Boosting robot-assisted rehabilitation of stroke hemiparesis by individualized selection of upper limb movements – a pilot study | Journal of NeuroEngineering and Rehabilitation | Full Text

Fig. 2 Experimental design. a. The sessions in each of the 3 participation phases are shown, with different colours indicating different session types. CA: clinical assessment; Map: mapping session. The first CA also served for screening. b. Schematic description of the experimental setting (top view; adapted from [32]). The participant held the robot handle, with grip ensured by a glove (Active Hands Co Ltd) and arm supported against gravity (SaeboMass, Saebo Inc.; not shown), which, at the beginning of each trial, was gently moved by the robot to a start position (white on-screen disc). Next, a target appeared on the horizontal display (blue on-screen disc; here shown black) and the participant tried to reach the target within the allotted time as accurately as possible, with the robot providing assisting and guiding forces as needed at each moment. Hand position was indicated on-screen by a red disc (not shown here). The horizontal display occluded the hand and the manipulandum from vision. Participants wore a harness to restrict trunk movement, keeping their forehead on a padded headrest attached to the workstation frame. The assistive force (Assist) promoted slower-than-allowed movements and also impeded very fast rebound-like movements characterising high elbow flexor muscle tone. The guiding force (Guide) impeded lateral deviation from a straight path towards the target. An animated 'explosion' was presented at the end of each trial with its final radius indicating reach accuracy (not shown). Also, during training sessions a 4-bar histogram summary, shown after each block (84 trials), informed the participant about his or her ability to initiate movements, move, aim and reach the target (adapted from [16]). c. The reaching workspace used for mapping performance. The locations of the 8 targets are indicated by small open circles and are specified by angular coordinates relative to the centre. An example of the hand located at the 90° target is shown.
Participants made 5 cm reaches to each target from 8 start locations (indicated, for the example target, by small black dots and arrows), which were also specified in angular coordinates relative to the particular target. Note that the start coordinates therefore correspond to intended movement direction. The dashed circle indicates the extent of the mapped workspace, centred 24 cm in front of the headrest


[REHABDATA] 20 apps for student success – National Rehabilitation Information Center

NARIC Accession Number: O21594. Download article in Full Text.
Author(s): O’Sullivan, Paige.
Project Number: 90RT5021 (formerly H133B130014).
Publication Year: 2017.
Number of Pages: 5.
Abstract: This list identifies software applications (apps) that may be helpful in key areas in which students with and without mental health conditions may need additional support. Some of these apps are only for use on desktops, while most are available on iPhones or Android products.

Can this document be ordered through NARIC’s document delivery service*?: Y.
Get this Document:

Citation: O’Sullivan, Paige. (2017). 20 apps for student success. Retrieved 4/19/2019, from REHABDATA database.

via Articles, Books, Reports, & Multimedia: Search REHABDATA | National Rehabilitation Information Center

