Background: New technologies, such as telerehabilitation and gaming devices, offer patients the possibility to train at home. This raises the challenge of patient safety, as the patient is asked to exercise with neither a therapist at their side nor a therapist linked remotely to supervise the sessions.
Aim: To study the safety, usability and patient acceptance of an autonomous telerehabilitation system for balance and gait (the REWIRE platform) in the patient's home.
Design: Cohort study.
Setting: Community, in the stroke patients’ home.
Population: 15 participants with first-ever stroke, with a mild to moderate residual deficit of the lower extremities.
Method: Autonomous rehabilitation based on virtual rehabilitation was provided at the participants’ homes for twelve weeks. The primary outcome was compliance (the ratio between days of actual and scheduled training), analysed with the two-tailed Wilcoxon–Mann–Whitney test. Furthermore, safety was assessed by recording adverse events. The secondary endpoint was the acceptance of the system, measured with the Technology Acceptance Model. Additionally, the cumulative duration of weekly training was analysed.
Results: During the study there were no adverse events related to the therapy. Patients performed on average 71% (range 39 to 92%) of the scheduled sessions. The Technology Acceptance Model Questionnaire showed excellent values for stroke patients after the training. The average training duration per week was 99 ± 53 min.
Conclusion: Autonomous telerehabilitation for balance and gait training with the REWIRE system is safe and feasible, and can help to intensify rehabilitative therapy at home.
Clinical Rehabilitation Impact: Telerehabilitation enables safe training in the home environment and supports standard rehabilitation therapy.
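The primary outcome above, compliance, is simply the ratio of actual to scheduled training days. A minimal sketch of that calculation is shown below; the patient identifiers, schedule and day counts are hypothetical illustrations, not data from the study.

```python
# Illustrative compliance calculation for an autonomous home-training
# programme: compliance = actual training days / scheduled training days.
# All numbers below are made up for illustration; they are not study data.

def compliance(actual_days: int, scheduled_days: int) -> float:
    """Return compliance as a percentage of scheduled sessions completed."""
    if scheduled_days <= 0:
        raise ValueError("scheduled_days must be positive")
    return 100.0 * actual_days / scheduled_days

# Twelve weeks of training with, say, 5 scheduled sessions per week.
scheduled = 12 * 5  # 60 scheduled training days

patients = {"P01": 55, "P02": 23, "P03": 48}  # hypothetical actual days

rates = {pid: compliance(days, scheduled) for pid, days in patients.items()}
mean_rate = sum(rates.values()) / len(rates)

for pid, rate in sorted(rates.items()):
    print(f"{pid}: {rate:.0f}%")
print(f"mean compliance: {mean_rate:.0f}%")
```

In a real analysis the per-patient rates, rather than the mean alone, would feed into the Wilcoxon–Mann–Whitney comparison described above.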
Using a form of low-impulse electrical stimulation to the brain, documented by neuroimaging, researchers at the University of California San Diego School of Medicine, Veterans Affairs San Diego Healthcare System (VASDHS) and collaborators elsewhere, report significantly improved neural function in participants with mild traumatic brain injury (TBI).
Their findings are published online in the current issue of the journal Brain Injury.
TBI is a leading cause of sustained physical, cognitive, emotional and behavioral problems in both the civilian population (primarily due to motor vehicle accidents, sports, falls and assaults) and among military personnel (blast injuries). In the majority of cases, injury is deemed mild (75 percent of civilians, 89 percent of military), and typically resolves in days.
But in a significant percentage of cases, mild TBI and related post-concussive symptoms persist for months, even years, resulting in chronic, long-term cognitive and/or behavioral impairment.
Much about the pathology of mild TBI is not well understood, which the authors say has confounded efforts to develop optimal treatments. However, they note the use of passive neuro-feedback, which involves applying low-intensity pulses to the brain through transcranial electrical stimulation (LIP-tES), has shown promise.
In their pilot study, which involved six participants who had suffered mild TBI and experienced persistent post-concussion symptoms, the researchers used a version of LIP-tES called IASIS, combined with concurrent electroencephalography monitoring (EEG). The treatment effects of IASIS were assessed using magnetoencephalography (MEG) before and after treatment. MEG is a form of non-invasive functional imaging that directly measures brain neuronal electromagnetic activity, with high temporal resolution (1 ms) and high spatial accuracy (~3 mm at the cortex).
“Our previous publications have shown that MEG detection of abnormal brain slow-waves is one of the most sensitive biomarkers for mild traumatic brain injury (concussions), with about 85 percent sensitivity in detecting concussions and, essentially, no false-positives in normal patients,” said senior author Roland Lee, MD, professor of radiology and director of Neuroradiology, MRI and MEG at UC San Diego School of Medicine and VASDHS. “This makes it an ideal technique to monitor the effects of concussion treatments such as LIP-tES.”
The researchers found that the brains of all six participants displayed abnormal slow-waves in initial, baseline MEG scans. Following treatment using IASIS, MEG scans indicated measurably reduced abnormal slow-waves. The participants also reported a significant reduction in post-concussion scores.
“For the first time, we’ve been able to document with neuroimaging the effects of LIP-tES treatment on brain functioning in mild TBI,” said first author Ming-Xiong Huang, PhD, professor in the Department of Radiology at UC San Diego School of Medicine and a research scientist at VASDHS. “It’s a small study, which certainly must be expanded, but it suggests new potential for effectively speeding the healing process in mild traumatic brain injuries.”
New research suggests a little magnetic brain stimulation prior to being exposed to your greatest fear in a VR headset could help you “unlearn” your anxiety response (Credit: DanRoss/Depositphotos)
Advances in technology over the last decade have led to a swift rise in the volume of research surrounding transcranial magnetic stimulation (TMS) and its therapeutic effects. A team from the Würzburg University Hospital in Germany has just published a new study demonstrating how TMS, in conjunction with a virtual reality experience, can help alleviate anxiety disorders and essentially help people “unlearn” fears.
Transcranial magnetic stimulation works by directing a targeted magnetic field toward specific areas in the brain. Depending on the frequencies delivered this can either stimulate or inhibit the brain activity of the targeted area. Initially conceived as a research tool allowing scientists to understand exactly what roles certain areas of the brain play, TMS has more recently been explored as a potential new tool for treating an assortment of problems.
This new study looks at how the technology could improve a patient’s response when used in conjunction with a more traditional treatment method. Anxiety disorders are incredibly debilitating for many, from social phobias to more specific problems such as a fear of heights. Classically, the treatment for people with these disorders has been a type of cognitive behavioral therapy where one is exposed to the source of their anxiety under the supervision of a psychologist.
The team at the Würzburg University Hospital decided to examine whether this kind of classic therapy could be improved using TMS. Previous studies have shown that by targeting the frontal lobe with magnetic stimulation an anxiety response can be reduced, but the new research looks at how this could be incorporated into a specific treatment method for a targeted anxiety.
Thirty-nine subjects with an active fear of heights were split into two groups, including a control group which received fake TMS. The groups received 20 minutes of either real or fake TMS directed at the ventral medial prefrontal cortex, followed by virtual reality exposure to a dizzying height. After two sessions the group treated with the TMS prior to VR exposure exhibited reduced anxiety and avoidance symptoms compared to the control group that didn’t receive the TMS.
“The findings demonstrate that all participants benefit considerably from the therapy in virtual reality and the positive effects of the intervention are still clearly visible even after three months,” explains Professor Martin J. Herrmann, one of the researchers working on the study.
The researchers suggest that adding TMS and VR to an already well-proven treatment process increases the overall efficacy and essentially helps the brain “unlearn” its anxiety responses. The next phase for the study is to look at other forms of anxiety and see if the process is equally effective.
And the next fear that is being tackled? Arachnophobia.
Summary of: Stretton CM, Mudge S, Kayes NM, McPherson KM. Interventions to improve real-world walking after stroke: a systematic review and meta-analysis. Clin Rehabil. 2017;31:310-318.
Objective: To examine whether interventions that target walking in the real world are more effective than usual care or no intervention for improving actual walking behaviour in real-world settings in people with stroke.
Data sources: EBSCO Megafile, AMED, Scopus, Cochrane Database of Systematic Reviews, PEDro, OTseeker, and PsycBITE were searched from inception to November 2015. The database search was supplemented by hand searching.
Study selection: Randomised or quasi-randomised, controlled trials examining progressive task-oriented exercise interventions with or without behavioural change techniques. Studies had to have a usual care comparison group or a no-intervention/attention control group and measure the effects of the interventions on real-world walking (activity monitoring and/or self-report questionnaires).
Data extraction: Two reviewers extracted data. Methodological quality was assessed using the Cochrane Risk of Bias tool.
Data synthesis: Of the 4478 studies initially identified by the search, nine studies (10 treatment arms) with a total of 693 participants in the experimental group and 565 in the control group met the selection criteria and were included in the meta-analysis. Overall, the included studies were evaluated to have a low risk of bias. Based on the quantitative pooling of the available data from these trials, at post-intervention assessment there was a statistically significant difference in real-world walking in favour of the intervention group, by a standardised mean difference (SMD) of 0.29 (95% CI 0.17 to 0.41). Quantitative pooling of five studies with 3 to 6 month follow-up data found a SMD of 0.32 (95% CI 0.16 to 0.48) in favour of the intervention group. Pre-planned subgroup analysis found that interventions that incorporated at least one behaviour change technique were effective (SMD 0.27, 95% CI 0.12 to 0.43), whereas those without any behaviour change strategies were not effective (SMD 0.19, 95% CI –0.11 to 0.49).
Conclusion: Task-oriented exercise interventions alone appeared to be insufficient for improving real-world walking habits in people with stroke. Exercise and gait-oriented interventions that employed behaviour change techniques were more likely to be effective in changing real-world walking behaviour, but the estimated treatment effect was small.
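The pooled SMDs reported above come from inverse-variance weighting of each study's standardised mean difference. The sketch below shows a minimal fixed-effect pooling; the per-study SMDs and standard errors are illustrative placeholders, not the review's actual data.

```python
import math

def pool_fixed_effect(smds, ses):
    """Inverse-variance fixed-effect pooling of standardised mean differences.

    smds: per-study SMDs; ses: their standard errors.
    Returns (pooled SMD, 95% CI lower bound, 95% CI upper bound).
    """
    weights = [1.0 / se ** 2 for se in ses]            # w_i = 1 / SE_i^2
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))          # SE of the pooled SMD
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-study values, not taken from the review.
smds = [0.35, 0.20, 0.35]
ses = [0.10, 0.12, 0.15]

pooled, lo, hi = pool_fixed_effect(smds, ses)
print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A significant pooled effect is one whose confidence interval excludes zero, which is why the subgroup without behaviour change strategies (CI crossing zero) was judged not effective.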
Genetic generalized epilepsy (GGE) consists of several syndromes diagnosed and classified on the basis of clinical features and electroencephalographic (EEG) abnormalities. The main EEG feature of GGE is bilateral, synchronous, symmetric, and generalized spike-wave complex. Other classic EEG abnormalities are polyspikes, epileptiform K-complexes and sleep spindles, polyspike-wave discharges, occipital intermittent rhythmic delta activity, eye-closure sensitivity, fixation-off sensitivity, and photoparoxysmal response. However, admixed with typical changes, atypical epileptiform discharges are also commonly seen in GGE. There are circadian variations of generalized epileptiform discharges. Sleep, sleep deprivation, hyperventilation, intermittent photic stimulation, eye closure, and fixation-off are often used as activation techniques to increase the diagnostic yield of EEG recordings. Reflex seizure-related EEG abnormalities can be elicited by the use of triggers such as cognitive tasks and pattern stimulation during the EEG recording in selected patients. Distinct electrographic abnormalities to help classification can be identified among different electroclinical syndromes.
Genetic generalized epilepsy (GGE) encompasses several electroclinical syndromes diagnosed and classified according to clinical features and electroencephalographic (EEG) characteristics (1–3). The EEG hallmark of GGE is bilateral synchronous, symmetrical, and generalized spike-wave (GSW) discharges. Polyspikes and polyspike-wave discharges are also commonly seen in GGE. Fixation-off sensitivity (FOS), eye-closure sensitivity, photoparoxysmal response (PPR), epileptiform K-complexes/sleep spindles, and occipital intermittent rhythmic delta activity (OIRDA) are among the spectrum of abnormalities described in GGE (4).
In this review, we will be discussing the ictal and the interictal EEG abnormalities in GGE. We will also focus on the electrographic differences among different GGE syndromes, factors affecting the yield of EEG, and diagnostic pitfalls.[…]
The controlled environment of virtual reality is proving ideal for diagnosing and treating traumatic brain injuries. Learn why the Department of Defense is funding trials.
For some people, virtual reality is anything but a game. For some traumatic brain injury patients, it’s a means to living a normal life.
Meet Dr. Denise Krch, a research scientist and one of the leaders in virtual reality (VR) applications for sufferers of traumatic brain injuries (TBI). Krch has won grants from the National Institute on Disability, Independent Living, and Rehabilitation Research and from the Department of Defense (DOD) for her promising research, and works with the DOD to help impaired soldiers.
Krch doesn’t ask a patient to strap on an Oculus Rift or an HTC Vive. In fact, her VR doesn’t use headsets at all. That surround experience is called immersive VR. What Krch uses is non-immersive. Her VR is shown on a computer monitor and is more like a video game in which a player uses a joystick and mouse to manage real-world situations.
For TBI sufferers, distractions and the need to juggle multiple tasks can make the typical workplace impossible to navigate. They find their thoughts batted around by each new interruption, and are unable to focus on one task for long. It’s frustrating and frightening. Krch’s VR applications don’t transport patients to far-off worlds; they put patients in the middle of an office, one that grows more distracting as they progress.
Krch is based in East Hanover, N.J., at a division of the Kessler Foundation, a nonprofit that assists people with physical disabilities, and she is affiliated with Rutgers University’s New Jersey Medical School, but she owes her interest in VR to a guest from the West. Seven years back, Albert “Skip” Rizzo, the director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies, visited the Kessler Foundation to share his research in VR as a treatment. Krch was impressed with the role VR can play in rehabilitation and in rebuilding cognitive functions that are difficult to improve.
She was so impressed, in fact, that she began working with Sebastian Koenig, then a postdoctoral researcher in Rizzo’s lab, on a new trial.
Krch’s work deals with the cognitive area called executive function, which includes our ability to organize, plan, and shift attention from one task to another or keep two things in mind at once. Impairments in this area are difficult to measure in neuropsychological assessments. The first challenge Krch and Koenig tried to solve was measuring executive function performance. They wanted to use VR to determine which patients were having trouble multitasking or switching attention in real world situations.
From left to right: Sebastian Koenig, Ph.D., CEO of Katana Simulations; Albert “Skip” Rizzo, Ph.D., director of medical virtual reality, Institute for Creative Technologies; Denise Krch, Ph.D., research scientist in the Traumatic Brain Injury Laboratory at Kessler Foundation Research Center; Nancy Chiaravalloti, Ph.D., director of the Neuropsychology and Neuroscience Laboratory and Traumatic Brain Injury Laboratory at Kessler Foundation Research Center
To do that, they created software that put the test subjects in a virtual environment where they were challenged to perform tasks while distracted or where they were forced to shift focus. The researchers ran their tests with a healthy control population and with patients they suspected of having impairments. Some of these patients had TBI, while others had multiple sclerosis (MS).
The VR environment put subjects in an office where they were seated at a desk and charged to pay attention to different messages coming through their computer. Some messages were spam, which they had to learn to ignore. Other emails required a response. Many included real estate offers, and the subjects had to decide whether to accept offers or decline them.
Besides making financial decisions, subjects had to keep watch on an office projector. That projector wasn’t visible from where they were seated, but was in a nearby conference room. Told that the projector’s light was on the fritz, they needed to turn and check on it frequently while managing their other tasks.
“We found that indeed our patient populations were actually seemingly intact or normal on our traditional neuropsych measures, but they were performing in the very impaired range when we looked at them using VR,” Krch says.
To understand why this video-game-like experience is called VR, it’s necessary to understand “presence”—the feeling of how much believability an immersive situation offers. For a test to be effective, patients need to feel like they’re in a believable scenario. Krch and Koenig’s simulation proved to be extremely believable, stressing out TBI patients in no time with competing stimuli. While creating actual physical spaces could test the same functions, that isn’t practical, and bringing patients into stores or similar real-world locations can lead to safety issues. Using VR better helps researchers control the experience: They can precisely monitor stimuli and responses while generating clinical data.
“In a virtual environment, you have complete control over whether the environment was fairly sterile and limited in distraction. As they were able to build to tolerating more distraction, you could add. So really that’s the biggest advantage of having a virtual environment,” Krch explains.
That trial successfully tested for a variety of impairments in attention and executive function. It showed impairments in the ability “to remember to remember,” called prospective memory, in turning to check the projector. Responding to emails tested selective attention where the subject chooses to focus on one thing and not another. Determining whether or not to accept the real estate offers tested problem solving. TBI and MS patients who didn’t show problems on standard neuropsych tests showed problems across the board when using VR.
Testing showed subjects had the biggest problem with interruptions. The biggest stressor in the experience was a phone ringing in the background. Hearing an unanswered phone ring over and over really derailed people’s thoughts. Krch and Koenig used their data to come up with rehabilitation programs that also use VR, and then to write a grant proposal to fund new software that can improve problems with divided attention (multitasking) and set shifting (switching between tasks). Funding by the National Institute of Disability, Independent Living, and Rehabilitation Research led to 3 years of development work with clinicians and TBI patients, and the recent start of randomized clinical trials. Testing involves eight treatments conducted over 4 weeks. The 15 subjects are being tested before and after treatments to monitor progress. Krch began the trial the week before this interview, so she didn’t yet have data.
In this testing, Krch and Koenig’s VR office software has gotten an upgrade. Now the subject works at a corporation that makes a toy animal called the Wonderkin. While there’s plenty of usual office chores, such as sitting at a desk and making decisions about emails coming in, the toy animals add a little fun. One treatment module is set in a laboratory where subjects have to check whether or not toys are broken. Toy horses, goats, and pigs jump around, while subjects make sure they aren’t breaking. The idea in this and other modules is to create a game-like treatment where patients have fun while improving attention skills. As subjects improve, the difficulty rises.
Since this is a clinical treatment and not an evaluation tool, a clinician works with each subject to keep him or her on course. If the subject starts to feel overwhelmed by distractions, the clinician starts the patient on something simpler and helps the person build up.
Krch has a second grant-funded trial going, also continuing work started in Skip Rizzo’s USC–ICT lab. Funded through multiple Department of Defense studies, this treatment seeks to improve balance.
“The DOD has tremendous interest in finding treatments that help rehabilitate individuals with traumatic brain injuries,” Krch says. “As a matter of fact, now having been DOD funded and involved within the DOD system and learning about the DOD system, they actually fund a wide range of things from cerebral palsy research to cancer research to things that seemingly you wouldn’t think the military would care about. But the military serves not just the people who are serving directly, but their families.”
With many soldiers coming back from combat with concussions and TBIs, the DOD has funded a good deal of research in the area, especially studies that use tech simulations. Rizzo is also developing military-funded treatments for veterans, using VR to treat post-traumatic stress disorder.
Sebastian Koenig, Ph.D., and Denise Krch, Ph.D., with the VR simulation they use to help patients with traumatic brain injury overcome distractions typical in workplace environments
Krch’s randomized clinical trials on balance use VR software displayed on a large wall-mounted monitor. An infrared beam detects the subject’s body and the screen shows an avatar in a virtual environment. Testing with active duty personnel is done at Fort Belvoir Community Hospital in Fort Belvoir, Va. Krch’s VR balance treatments don’t currently use headsets, but that’s changing. Patients are more prone to fall when wearing a headset, and the immersive experience can lead to “simulator sickness,” which is becoming less of a problem as VR hardware improves. Krch’s team is adapting a VR program called the Fruit Toss to headsets. In it, fruits fly at the test subject, who has to either catch or kick them. Krch hopes to have it completed by the end of the year. Her team is currently collecting data and feedback on immersive VR tests, which they’ll use as pilot data for an immersive technology grant proposal.
“Our lives are full of distractions. You’ve got a kid in one hand and you’re closing the door. You’re helping a child with homework and you’re cooking, or you are on the phone with your health insurance company and you’re balancing your checkbook. The demands of our lives constantly require us to switch our attention a lot and to do more than one thing at a time,” Krch says. And that’s true in the office, as well: “Most job scenarios nowadays expect you to be able to multitask and to do it in the presence of many, many distractors.” With help from treatments like those Krch is creating, TBI patients are able to return to the workforce sooner and MS patients are able to stay at their jobs longer. That means a better social environment and better quality of life.
The virtual reality environment might still feel like a game, but the results of Krch’s work are a better payoff than any high score.
[This article appears in the September 2017 issue of Streaming Media Magazine as “Virtual Reality, Real Medical Care.”]
In the space of a few short years, virtual reality has gone from being a technology of the future to part of the mainstream. Devices ranging from the humble Google Cardboard to the Oculus Rift have invaded our living rooms, and VR is set to transform everything from education to the sex industry.
But if VR is to achieve the mass appeal many are predicting, then it needs to feel, as well as look, as real as possible, and not just like we’re passively watching a TV set strapped to our faces; the rest of our body needs to be as fully engaged as our eyes.
Let’s get physical
Enter haptic technology, which allows us to literally feel what we’re experiencing in VR. You’ve likely come across haptic tech, sometimes referred to as just ‘haptics’, before, for example when you’ve played a video game and felt a rumble in the handset.
Now companies are bringing that experience to your whole body, with suits that can move and shake and vibrate in specific areas as you explore virtual worlds.
Design sketches for the Teslasuit, which enables the wearer to experience touch, heat and cold. Image credit: Teslasuit
But let’s slow down for a second, because this haptic tech is far from becoming mainstream and, crucially, you can’t just put a haptic suit on someone and expect a VR experience to feel real.
That’s why there’s a lot of research going on into what’s known as ‘virtual embodiment’.
This is a complex and fairly new area of study, but it’s concerned with using technology, virtual representation, avatars, storytelling, haptics and all kinds of other subtle visual, auditory and sensory cues to make you feel like you’re inhabiting another body. Whether that’s an avatar of yourself, someone else or even something else.
Exploring the body/mind connection
Virtual embodiment might be a new area of study, but it’s built on research about the connection between our minds and our bodies that goes back more than a decade.
One example is what’s known as the ‘rubber hand illusion’. This was an experiment that essentially proved that, with the right stimuli, people took full ownership of a rubber hand as their own.
Fast-forward to the present day and similar studies have put the rubber hand illusion to the test in a VR setting.
In a 2010 study, researchers found that synchrony between touch, visual information and movement could induce a believable illusion that people actually had full ownership of a virtual arm.
To find out where the research is right now, we spoke to Dr Abraham Campbell, Chief Research Officer at MeetingRoom and Head of the VR Lab at University College Dublin.
“Virtual embodiment is a difficult thing to define as it can mean a lot to different people in different fields,” Campbell explained. He proposes that we look at virtual embodiment in three categories, all of which are a modified version of Tom Ziemke’s work on embodiment.
“Firstly, structural coupling is the most basic and classic definition of embodiment,” Campbell told us. “You’re connected to some form of structure. For example, a body. You move your limb in real life, and a virtual limb moves mimicking your actions…you are embodied within the VR world.”
Campbell offers the example of moving the HTC Vive controller in the real world, and that becoming a hand that’s moving in the virtual world.
Next up is historical embodiment, which is when the VR world you enter ‘remembers’ what’s happened in the past. Campbell uses the example of drawing on a white board, when what you’ve drawn stays there when you return in a day, a week or year from now.
“Finally, social embodiment is when you interact with real or artificial entities within the VR world,” Campbell says. “These interactions have to be behaviorally realistic, so you feel that your body is able to interact with them in the environment.”
And why is studying embodiment important? Campbell explains: “The more embodied the agent or human is within the environment, the more capable they are of interacting and sensing that world.”
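Campbell's first category, structural coupling, amounts to mapping your tracked real-world movement onto a virtual limb every frame. The sketch below is a deliberately minimal, hypothetical version of that mapping; real VR SDKs use full transformation matrices and their own APIs, none of which are shown here.

```python
# Minimal, hypothetical sketch of structural coupling: a tracked
# controller pose drives an avatar's hand pose each frame.
from dataclasses import dataclass

@dataclass
class Pose:
    """Position in metres and yaw in degrees of a tracked object."""
    x: float
    y: float
    z: float
    yaw: float

def couple(controller: Pose, offset: Pose) -> Pose:
    """Map a real-world controller pose onto the avatar's hand.

    A constant calibration offset aligns the tracking-space origin
    with the avatar's skeleton; production systems would use full
    4x4 transforms and orientation quaternions instead.
    """
    return Pose(
        controller.x + offset.x,
        controller.y + offset.y,
        controller.z + offset.z,
        (controller.yaw + offset.yaw) % 360.0,
    )

# Each frame: read the tracker, update the virtual hand.
offset = Pose(0.0, -1.2, 0.0, 0.0)        # hypothetical calibration offset
controller = Pose(0.3, 1.5, -0.4, 350.0)  # sample tracker reading
hand = couple(controller, offset)
print(hand)
```

The point of the sketch is the one-to-one, per-frame correspondence: move the real limb and the virtual limb moves with it, which is what makes the user feel embodied in the VR world.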
Social interaction and education
Campbell’s main focus is on social collaboration in a recreational and educational setting.
“I’m examining the use of telepresence and VR in education, and exploring how I can remotely teach from Ireland to China using technology like Kinect [Microsoft’s motion-sensing input devices for its Xbox game consoles] to scan me in real time, while at the same time view the classroom in China using a set of projectors,” he said.
Bringing a teacher into a distant, virtual classroom will certainly be useful. But the next challenge is working on the intricacies of social interaction, such as facial expressions.
And although interacting with people may not sound like the most interesting use of virtual embodiment, it’s one that’s bound to get attention.
“It is also clearly an industry goal,” Dr Gary McKeown, Senior Lecturer at the School of Psychology at Queen’s University Belfast, tells us. “It is not a coincidence that the company with the most to gain from making the social interaction aspects of virtual reality function well – Facebook – is the one that bought Oculus.”
Imagine being able to remotely control machinery, or just help out family fully from thousands of miles away – it would change so much about work, commuting and social interaction.
One area that’s particularly interesting to Campbell is using embodiment research to aid telepresence or telerobotics, which is the use of virtual reality or augmented reality (AR) to do just that.
“I’m fascinated by Remote Expert, which is being pioneered by DAQRI,” he told us. “This allows an expert in a field to be remotely placed in augmented reality beside a non-expert to perform a complex task. DAQRI are looking at medical and industry fields to apply this technology, but you can imagine lots more applications.”
Campbell explained that one of the many uses for this kind of tech could be if an oil pipeline bursts, and the engineer who designed it is in another country. A local engineer could go out to fix the pipeline, with the designer advising them in real time using VR or AR and a stereo 360-degree camera.
As we learned above, the tech enables the presence element of this. But where embodiment research comes in is making it more engaging, more realistic… ultimately more real, and with it the power to really offer help unhindered by technology.
Campbell explained: “The remote expert needs to be able to use hand gestures to demonstrate what the non-expert should do.
“The expert should be scanned in 3D in real time along with the remote world they are being placed into. This embodiment will allow them to truly be able to assist in whatever complex task they’re asked to perform.”
The implications of this are massive, and could radically change a number of industries.
NASA already has a telerobotics research arm that’s looking at using this technology for space exploration, and it’s being introduced into other fields, from engineering to medicine.
Campbell believes this kind of telepresence will have a big impact on the medical industry as technology advances too.
“One solution I hope to explore in future is to use a full hologram projector pyramid,” he told us. “This approach has been suggested to me by medical professionals who want to meet patients remotely by using a full size projector pyramid [i.e. one that’s about two metres tall]. With this kind of tech, the doctor will be better able to diagnose a patient.”
Therapy and rehabilitation
Virtual embodiment doesn’t just have huge implications for exploring physical presence, but mental and emotional presence too.
In a 2016 study, researchers discovered that virtual avatars that look like our physical selves can help people feel a sense of embodiment and immersion that, it’s believed, could enable them to better work through mental health challenges, as well as real-world trauma.
VR software developer ProReal uses virtual environments containing avatars to play out scenarios that help people deal with a range of challenges, from bullying to PTSD and rehabilitation.
As the tech advances, it could provide a whole new area of therapy for those who aren’t getting the results they need from talking therapies or medication.
But it’s not just more serious mental health challenges, like PTSD, that can be explored; avatars can be used to increase confidence or change our perception of ourselves. Campbell told us about the time he noticed that those with bigger avatars felt more powerful.
“One accidental discovery I had, when I looked at games in VR, was that when the avatar is on average one foot taller than the other characters, it makes the player feel more powerful than the computer-controlled characters,” he explained.
That observation mirrors a study in which researchers found that those given taller, bigger avatars behaved differently and more aggressively in interactions with others.
So aside from therapy and mental health use cases, it’s possible to imagine VR being used in corporate settings, to make people feel more confident before presenting to a boardroom.
The challenges of embodiment
If it’s easy for researchers to think of creative ways in which embodiment could have a positive impact on our lives, it’s not much of a leap to consider the negatives too.
Some tech commentators believe social isolation could be an issue as the use of VR headsets becomes more widespread and experiences become more immersive, a concern that’s likely to become more prevalent in gaming.
But many within the industry believe the focus on social isolation is just scaremongering.
“I haven’t witnessed people feeling isolation,” Campbell explained. “Even students who are interested in VR for pure escapism want to share it with others afterwards, and have become evangelists for VR in its ability to be an empathy machine, as with embodiment you can truly get a sense of seeing things from someone else’s perspective.”
Another important talking point is around dissociation or detachment from your own body after exploring virtual worlds. There’s been very little research in this area, but one study from 2006 found that VR can increase dissociative experiences, especially if you’ve been immersed in a virtual world for a long time.
More than a decade on, and with better VR technology and content, it’s no surprise that lots of anecdotal evidence points to a similar ‘post-virtual-reality sadness’, in which the real world doesn’t quite compare.
One potentially problematic side-effect Campbell thinks we do need to consider right away is addiction. But he explains that, unlike with traditional gaming addiction, VR can be designed differently.
“In VR, the user needs to replicate the real-world actions and thus shortcuts the traditional dopamine-hit reward cycle that people often become addicted to,” he told us.
So, for example, if you win a gaming level within VR you’ve likely put a lot of physical effort and exertion in, perhaps by killing the big bad boss at the end of the level. You’re likely to be physically tired. That’s the difference.
Of course you can still get addicted to that feeling – people get addicted to working out – but Campbell tells us: “It’s the responsibility of game designers to make sure that VR games reward a player for real effort and not make a game that’s hard at first to complete, but then actually gets easier.”
We spoke to Sol Rogers, the Founder and CEO of a creative agency and VR production studio, about some of these concerns.
“We’ve studied how humans interact in and with the real world for hundreds of years, and we’ll probably need the same amount of time to study how humans behave in VR and what the implications are,” he told us.
But he urged people to be excited about the prospect of what VR can do, not scared. “While we can only speculate about the impact VR will have, we need to progress with watchful caution rather than hysteria,” he added.
“Self-regulation from content creators is key, but a governing body also needs to take on some responsibility. Ultimately we need more research, and more time, to fully understand the implications.”
The recipe for greater embodiment
But embodiment is only convincing if everything else in the experience is up to scratch. We asked Rogers about how his team works with tech to create the most realistic experiences.
“Achieving a lifelike user experience in VR is now possible because of tremendous advancements in computer processing power, graphics, video and display technologies,” he told us.
“But the tech needs to stay out of the way; it needs to be entirely inconsequential to the experience, otherwise the spell is broken.”
And he adds that the tech is only half of the equation; his job is to ensure the content is telling the best possible story, every step of the way. “Content is also key to creating presence. While the tech is no doubt important, no user is going to suspend disbelief if the experience is awful.”
From entertainment and social interaction to engineering and performing medical procedures, the more we understand, test and implement embodiment experiments, the more we can engineer experiences to feel real – and in turn be more effective.
With advances in research from the likes of Campbell and his team, along with advances in tech to make headsets slimmer, sensory feedback easier to implement and full-body holograms a reality, the sky is only virtually the limit.
This article is brought to you in association with Vodafone.
This video explains the components of the Blood Brain Barrier (BBB), and the importance of the BBB to the brain’s health. It also explains the complications the BBB causes when treating illnesses of the brain.
What is the Blood Brain Barrier? The neurons in your brain require a very specialized environment to function. Because your brain is the central processor for all of the body’s functions, any kind of contaminant or pathogen could be disastrous, so it needs to take extra precautions. The blood brain barrier is an extra layer of protection surrounding most of the blood vessels in your brain that keeps most toxins out. There are three main components of the blood brain barrier.
The first main component is the endothelial cells, which line the walls of your blood vessels. Regular blood vessels also have an endothelial cell lining; however, in regular blood vessels there are spaces between those cells that allow particles in the bloodstream to pass through into surrounding tissue.
The endothelial cells in the blood brain barrier contain the second component, tight junctions, made up of proteins, which fill in the spaces between endothelial cells and block most substances from entering the brain.
The third component of the blood brain barrier is the astrocytic end feet, formed by astrocyte cells. The astrocytes provide nourishment to your neurons and transport some of the substances that pass through the blood brain barrier, such as glucose, to neurons.
So, a quick review: endothelial cells with tight junctions block substances from passing through blood vessel walls, and astrocytes provide nourishment to those cells. The blood brain barrier keeps out most toxins and bacteria, preventing potential harm to neurons. When the barrier breaks down, your brain is vulnerable to all sorts of threats; a breakdown can cause or speed up neurodegenerative diseases like multiple sclerosis. So what does the blood brain barrier let in? Generally, smaller molecules that are nonpolar, meaning they don’t have a charge, are let in, along with substances like glucose and oxygen, which your brain requires to function.
While the blood brain barrier is great at keeping out harmful substances, it is also great at keeping out beneficial substances like medicine. This is what makes infections and conditions like brain cancer so hard to treat: the blood brain barrier is so selective that it blocks out most medicine. However, scientists have been coming up with novel solutions to open the blood brain barrier and allow medicine into the brain. French scientists found that sound waves can be used to break down the blood brain barrier, and other scientists have tried to use bubbles to force open the barrier and allow medicine through. While the blood brain barrier can be an obstacle to drug delivery, it is a vital part of our survival and protects our brain from harm. Thanks for watching.
[Purpose] Homonymous hemianopia is one of the most common symptoms following neurologic damage, leading to impairments of functional abilities and activities of daily living. There are two main types of restorative rehabilitation in hemianopia: “border training,” which involves exercising vision at the edge of the damaged visual field, and “blindsight training,” which is based on exercising the unconscious perceptual functions deep inside the blind hemifield. Only the effects of border training have been shown to be facilitated by transcranial direct current stimulation (tDCS). This pilot study represents the first attempt to combine the modulatory effects of tDCS over the parieto-occipital cortex with blindsight treatment in the rehabilitation of homonymous hemianopia.
[Subjects and Methods] Patients TA and MR both had chronic hemianopia. TA underwent blindsight treatment combined with tDCS, followed by blindsight training alone. MR underwent the two training rounds in reverse order.
[Results] The patients showed better scores in clinical-instrumental, functional, and ecological assessments after tDCS combined with blindsight rehabilitation than after rehabilitation alone. [Conclusion] In this two-case report, parieto-occipital tDCS modulated the effects induced by blindsight treatment on hemianopia.
You’re bombarded with sensory information every day — sights, sounds, smells, touches and tastes. A constant barrage that your brain has to manage, deciding which information to trust or which sense to use as a backup when another fails. Understanding how the brain evaluates and juggles all this input could be the key to designing better therapies for patients recovering from stroke, nerve injuries, or other conditions. It could also help engineers build more realistic virtual experiences for everyone from gamers to fighter pilots to medical patients.
Now, some researchers are using virtual reality (VR) and even robots to learn how the brain pulls off this juggling act.
Do You Believe Your Eyes?
At the University of Reading in the U.K., psychologist Peter Scarfe and his team are currently exploring how the brain combines information from touch, vision, and proprioception – our sense of where our body is positioned – to form a clear idea of where objects are in space.
Generally, the brain goes with whichever sense is more reliable at the time. For instance, in a dark room, touch and proprioception trump vision. But when there’s plenty of light, you’re more likely to believe your eyes. Part of what Scarfe’s crew hopes to eventually unravel is how the brain combines information from both senses and whether that combination is more accurate than touch or sight alone. Does the brain trust input from one sense and ignore the other, does it split the difference between the two, or does it do something more complex?
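One standard candidate from the cue-combination literature is maximum-likelihood integration, in which each sense's estimate is weighted by its reliability (the inverse of its variance). Here's a minimal sketch of that idea with made-up numbers; it illustrates the general model, not anything measured in Scarfe's experiments:

```python
# Maximum-likelihood cue combination: fuse two noisy sensory estimates
# by weighting each with its reliability (inverse variance).
# All numbers below are illustrative, not data from the Reading study.

def combine_cues(estimates, variances):
    """Fuse independent sensory estimates by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # the fused estimate is more reliable than either cue alone
    return fused, fused_variance

# Vision says the ball is 2.0 cm above the plane (noisy, variance 4.0);
# touch says 1.0 cm (reliable, variance 1.0).
fused, var = combine_cues([2.0, 1.0], [4.0, 1.0])
print(fused, var)  # fused estimate lands closer to the more reliable cue: 1.2, 0.8
```

Under this model the brain doesn't simply pick one sense or split the difference: it leans toward whichever cue is currently more reliable, and the combined judgement is more precise than either sense on its own.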
To find out, the team is using a VR headset and a robot called Haptic Master.
While volunteers wear the VR headset, they see four virtual balls – three in a triangle formation and one in the center. They can also reach out and touch four real spheres that appear in the same place as the ones they see in VR: the three in the triangle formation are just plastic and never move, but the fourth is actually a ball bearing at the end of Haptic Master’s robot arm. Researchers use the robot to move this fourth ball between repetitions of the test. Think of the three-ball-triangle as a flat plane in space. The participant has to decide whether the fourth ball is higher or lower than the level of that triangle.
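The geometric judgement the task asks for (is the fourth ball above or below the plane through the other three?) can be sketched in a few lines of code. The coordinates below are illustrative, not study data:

```python
# Signed height of a point relative to the plane through three other points.
# Positive means "above", negative means "below"; the sign convention
# depends on the winding order of the three triangle points.

def above_plane(p1, p2, p3, q):
    """Return the signed distance of point q from the plane through p1, p2, p3."""
    # Two edges of the triangle
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # Plane normal = u x v (cross product)
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    w = [b - a for a, b in zip(p1, q)]
    norm = sum(c * c for c in n) ** 0.5
    return sum(c * d for c, d in zip(n, w)) / norm

# Triangle lying flat at height 0; fourth ball 1.5 units higher
h = above_plane((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.3, 0.3, 1.5))
print(h)  # 1.5
```

A participant's brain has to make this same above-or-below call, but from noisy visual and haptic estimates of the four positions rather than exact coordinates.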
It’s a task that requires the brain to weigh and combine information from multiple senses to decide where the fourth ball is in relation to the other three. Participants get visual cues about the ball’s location through the VR headset, but they also use their haptic sense – the combination of touch and proprioception – to feel where the ball is in space.
The VR setup makes it easier to control the visual input and make sure volunteers aren’t using other cues, like the location of the robot arm or other objects in the room, to make their decisions.
Collectively, volunteers have performed this task hundreds of times. The researchers are looking at how accurate participants are when they use only their eyes, only their haptic sense, or both senses at once. The team is then comparing those results to several computer models, each predicting how a person would estimate the ball’s position if their brain combined the sensory information in different ways.
So far, the team needs more data to learn which model best describes how the brain combines sensory cues. But they say that their results, and those of others working in the field, could one day help design more accurate haptic feedback, which could make interacting with objects in virtual reality feel more realistic.
On Shaky Footing
Anat Lubetzky, a physical therapy researcher at New York University, is also turning to VR. She uses the burgeoning technology to study how our brains weigh different sensory input to help us when things get shaky — specifically, if people rely on their sense of proprioception or their vision to keep their balance.
Conventional wisdom in sports medicine says that standing on an uneven surface is a good proprioception workout for patients in rehabilitation after an injury. That’s because it forces your somatosensory system, the nerves involved in proprioception, to work harder. So if your balance is suffering because of nerve damage, trying to stabilize yourself while standing on an uneven surface, like a bosu ball, should help.
But Lubetzky’s results tell a different story.
In the lab, Lubetzky’s subjects strap on VR headsets and stand on either a solid floor or an unsteady surface, like a wobble board. She projects some very subtly moving dots onto the VR display and uses a pressure pad on the floor to measure how participants’ bodies sway.
It turns out, when people stand on an unstable surface, they’re more likely to sway in time with the moving dots. But on a stable surface, they seem to pay less attention to the dots.
So rather than working their somatosensory systems harder, it seems people use their vision to look for a fixed reference point to help keep them balanced. In other words, the brain switches from a less reliable sense to a more reliable one, a process called sensory weighting.
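One simple way to quantify how strongly sway "follows" the dots is to correlate the stimulus motion with the sway trace. This is a hypothetical sketch using synthetic signals, not Lubetzky's actual analysis pipeline:

```python
import math

# Hypothetical sketch: quantify visual dependence as the correlation between
# the dot-motion signal and the body-sway trace from the pressure pad.
# The signals below are synthetic stand-ins for real recordings.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

dots = [math.sin(0.1 * t) for t in range(200)]  # slow stimulus motion
# On an unstable surface, sway mostly follows the dots; on a stable one it mostly doesn't.
sway_unstable = [0.8 * d + 0.1 * math.sin(0.7 * t) for t, d in enumerate(dots)]
sway_stable = [0.1 * d + 0.8 * math.sin(0.7 * t) for t, d in enumerate(dots)]

print(pearson(dots, sway_unstable))  # high correlation: vision dominates
print(pearson(dots, sway_stable))    # low correlation: vision largely ignored
```

A score like this could serve as the kind of visual-dependence measure Lubetzky describes: the higher the correlation with the stimulus, the more a patient is leaning on vision to stay upright.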
Ultimately, Lubetzky hopes her VR setup could help measure how much a patient with somatosensory system damage relies on their vision. This knowledge, in turn, could help measure the severity of the problem so doctors can design a better treatment plan.
As VR gets more realistic and more immersive – partly thanks to experiments like these – it could offer researchers an even more refined tool for picking apart what’s going on in the brain.
Says Lubetzky, “It’s been a pretty amazing revolution.”