Posts Tagged avatar
[Abstract] A brain–computer interface based stroke rehabilitation system, controlling an avatar and functional electrical stimulation, to improve motor functions
Brain–computer interfaces (BCIs) can detect the neuronal activity underlying a patient's motor intention and use it to control external devices. With feedback from the device, the neuronal network in the brain reorganizes due to neuroplasticity.
Material and method
The BCI controls an avatar and functional electrical stimulation (FES) to provide feedback. The patient's task is to imagine either left or right wrist dorsiflexion according to the instructions. The training was designed as 25 BCI feedback sessions (240 trials of either left or right motor imagery each) over 13 weeks. Clinical measures were taken two days before and two days after the training to observe motor improvement. The primary measure was the upper extremity Fugl–Meyer assessment (UE-FMA), which evaluates motor impairment. Four secondary measures were also performed to examine spasticity (modified Ashworth scale, MAS), tremor (Fahn tremor rating scale, FTRS), level of daily activity (Barthel index, BI), and finger dexterity (9-hole peg test, 9HPT).
One male stroke patient (53 years old, 11 months since stroke, right upper limb paralyzed) participated in the training. He quickly learned to use the BCI, and the maximal classification accuracy exceeded 90% after the 5th session. The UE-FMA increased from 25 to 46 points. The BI increased from 90 to 95 points. MAS and FTRS decreased from 2 to 1 and from 4 to 3 points, respectively. Although he could not perform the 9HPT until the 18th training session, he was able to complete the test from the 19th session onward in 10 min 22 s, and the time was reduced to 2 min 53 s after the 25th session.
The patient became more independent in his daily activities, with less spasticity and tremor. He was also able to perform the 9HPT, which had not been possible before. The system is currently being validated in a study of 50 patients.
[ARTICLE] BCI-Based Strategies on Stroke Rehabilitation with Avatar and FES Feedback – Full Text PDF
Stroke is the leading cause of serious long-term disability worldwide. Some studies have shown that motor imagery (MI) based BCI has a positive effect in post-stroke rehabilitation, helping patients promote reorganization processes in the damaged brain regions. However, offline motor imagery, and conventional online motor imagery with feedback such as rewarding sounds and movements of an avatar, could not reflect the true intention of the patients. In this study, both virtual limbs and functional electrical stimulation (FES) were used as feedback to provide patients with a closed-loop sensorimotor integration for motor rehabilitation. The FES system activated only if the user was imagining hand movement of the instructed side.
Ten stroke patients (7 male, aged 22–70 years, mean 49.5 ± 15.1) were involved in this study. All of them participated in BCI-FES rehabilitation training for 4 weeks. The average motor imagery accuracy of the ten patients in the last week was 71.3%, a 3% improvement over the first week. Five patients' Fugl-Meyer Assessment (FMA) scores improved. Patient 6, who had suffered a stroke over two years earlier, achieved the greatest improvement after rehabilitation training (pre FMA: 20, post FMA: 35). In terms of brain patterns, the active patterns of the five patients gradually became centralized and shifted to the sensorimotor areas (channels C3 and C4) and the premotor area (channels FC3 and FC4).
In this study, a motor imagery based BCI and an FES system were combined to provide stroke patients with a closed-loop sensorimotor integration for motor rehabilitation. The results provide evidence that the BCI-FES system is effective in restoring upper-extremity motor function after stroke. In future work, more cases are needed to demonstrate its superiority over conventional therapy and to explore the potential role of MI in post-stroke rehabilitation.
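The abstract doesn't describe how left- vs. right-hand imagery is classified; as a rough illustration, a common approach compares mu-band (8–12 Hz) power over electrodes C3 and C4, since motor imagery desynchronizes the mu rhythm over the hemisphere contralateral to the imagined hand. The sketch below is a minimal illustration of that idea only; the function names and toy numbers are my own assumptions, not the study's actual pipeline.

```python
# Illustrative sketch of one classification step in an MI-BCI:
# compare mu-band power at C3 (left hemisphere) vs. C4 (right).
# Event-related desynchronization (ERD) lowers mu power over the
# hemisphere contralateral to the imagined hand.

def band_power(samples):
    """Mean squared amplitude of a (pre-filtered) signal window."""
    return sum(s * s for s in samples) / len(samples)

def classify_motor_imagery(c3_window, c4_window):
    """'right' if mu power is suppressed at C3, else 'left'."""
    p_c3, p_c4 = band_power(c3_window), band_power(c4_window)
    # Lateralization index in [-1, 1]: negative -> C3 suppressed.
    li = (p_c3 - p_c4) / (p_c3 + p_c4)
    return "right" if li < 0 else "left"

# Toy windows: low amplitude at C3 suggests right-hand imagery.
c3 = [0.2, -0.1, 0.15, -0.2]   # suppressed (ERD) window
c4 = [1.0, -0.9, 1.1, -1.0]    # normal-amplitude window
print(classify_motor_imagery(c3, c4))  # -> right
```

In a real system the windows would first be band-pass filtered around the mu rhythm and the decision smoothed over several windows before driving feedback.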
Download: PDF only
Virtual reality to fight opioids? Yes, researcher says
NASHVILLE — Video games were once Noah Robinson’s only way to cope.
When he couldn’t bear the challenges of growing up as an outsider, he fell into immersive worlds that eased his tensions and helped him feel less alone.
Now, as a graduate student at Vanderbilt University, Robinson is applying the same premise to an unconventional, high-tech therapy that might help addicts get a firmer grip on recovery.
By immersing them in a virtual world of swirling colors and abstract shapes, and then layering psychological principles over that experience, Robinson hopes to help patients separate themselves from the negative emotions and cravings that fuel addiction.
If he succeeds, his mentors believe he could be at the forefront of a groundbreaking new treatment for addicts, one that could prove to be especially significant as the nation battles the deadly opioid crisis.
“The only thing I know for sure is that most of the stuff that we’ve been doing thus far to get our arms around this crisis has not been working,” said Brian Wind, chief of clinical operations at a Murfreesboro, Tenn., location of JourneyPure, an inpatient rehabilitation center where Robinson tests his virtual reality therapy.
“We’ve got to get more proactive, and I believe that trying to find new and innovative solutions that may be of benefit to people is the way to go,” Wind said. “This seems to be just that.”
Goggles and joysticks: Tools to fight addiction?
The treatment itself can seem strange to the uninitiated — strange enough that mentors initially warned Robinson not to mention his interest in virtual reality in his application to Vanderbilt.
Patients strap bulky goggles over their heads and grab onto two joysticks.
From the outside, they look like a mash-up of a cross-country skier and a hardcore gamer.
But the screens on the inside of the goggles transport them, and everywhere they look reveals a new corner of a bright and surreal landscape of sunbursts and technicolor swirls.
A headset allows them to communicate with their therapist, who appears in this world in the form of a cartoon avatar.
Different “rooms” in this virtual reality serve different purposes. A therapist might walk a patient through talk therapy in one, while another one designed like a bar gives recovering alcoholics the chance to practice turning down a drink in a low-pressure setting.
Robinson is quick to make one thing clear: This virtual reality, which he calls VR, is different from what people have experienced on their smartphones. It truly floods your senses, and almost completely separates you from your actual surroundings.
His theory is that the distraction of the virtual reality also will separate people from their anxieties and fears, making it easier for them to absorb messages from therapy.
‘Almost innovative beyond its time’
That thinking was driven by his teenage years, when he used role-playing video games like “RuneScape” and online forums to escape the anxiety of realizing he is gay.
As he grew up and came out, he no longer craved the escape. But the impact of technology on his life lingered in the back of his mind.
While working as a research fellow at the National Institutes of Health in 2014, he began experimenting with virtual reality.
He bought virtual reality equipment of his own and quickly realized he could pair the same kinds of technologies he once enjoyed with innovative therapy to provide a healthy way to confront tough issues.
“I was just escaping, but what I saw with the VR is that its power could be used for a therapeutic purpose, not just escape,” said Robinson, who is now 26. “I realized the potential.”
Robinson was convinced the idea had legs. And when he applied to get his Ph.D. in clinical psychology at Vanderbilt, he wanted to weave it into his work.
In 2017, a campus innovation hub known as the Wond’ry gave him space and funding to buy more equipment and pursue his goal.
“It is a big, hairy, audacious goal that he’s trying to achieve,” said Robert Grajewski, executive director of the Wond’ry. “It’s almost innovative beyond its time.”
When VR appears in therapy, ‘smiles start to emerge’
Robinson wasn’t initially sure how and where to apply the technology. It had barely even been discussed as a tool in psychology.
He came to JourneyPure for his work as a clinical psychology student and started testing the VR during sessions in 2017. Then something clicked.
“When I saw that patient who had so much pain put on the VR and start smiling and laughing, I felt chills and thought, ‘This is it,’” he said.
Reflecting on about 60 patients who have used it since, Wind was similarly optimistic.
“It’s rewarding to observe it when from underneath the big bulky mass you see smiles start to emerge,” he said. “They come out on the other side with an increase in positive emotions and a decrease in negative emotions.”
That’s a combination that can help prevent relapse, Wind said.
Now Robinson is committed to testing virtual reality in a scientific study that will attempt to quantify these anecdotal observations. He and nine undergraduate Vanderbilt students with the university's Hollon Research Group are working on the project, which should continue into the summer.
The hope is that patients will eventually be able to take VR equipment home, where they could have instant access to help when they need it.
As the project has moved further along, Robinson noticed a shift toward acceptance for his unorthodox idea, perhaps driven by the urgent need to find new treatments for opioid addicts.
When he presented information on the project at Harvard University earlier this month, people peppered him with questions.
“It’s a new direction. People are very excited about it,” he said. “Because it’s unusual, I guess.”
Robinson’s ambitions for the technology and its applications seem boundless — he plans to devote his career as a psychologist to refining its use.
“It feels like a calling, honestly.”
Two inertial measurement units (IMU) are stuck to my wrists and forearms, tracking the orientation of my arms, while the EMG monitors my electrical impulses and peripheral nerve activity.
Dr. Sook-Lei Liew, Director of USC’s Neural Plasticity and Neurorehabilitation Laboratory, and Julia Anglin, Research Lab Supervisor and Technician, wait to record my baseline activity and observe a monitor with a representation of my real arm and a virtual limb. I see the same image from inside the Rift.
“Ready?” asks Dr. Liew. “Don’t move—or think.”
I stay still, close my eyes, and let my mind go blank. Anglin records my baseline activity, allowing the brain-machine interface to take signals from the EEG and EMG, alongside the IMU, and use that data to inform an algorithm that drives the virtual avatar hand.
“Now just think about moving your arm to the avatar’s position,” says Dr. Liew.
I don’t move a muscle, but think about movement while looking at the two arms on the screen. Suddenly, my virtual arm moves toward the avatar appendage inside the VR world.
Something happened just because I thought about it! I’ve read tons of data on how this works, even seen other people do it, especially inside gaming environments, but it’s something else to experience it for yourself.
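The article doesn't say how the baseline recording is actually used, but in brain-machine interfaces a rest-state recording like this typically sets a per-user detection threshold for later "movement attempt" windows. Here is a minimal sketch under that assumption; the function name and the 3-sigma rule are illustrative, not the lab's actual algorithm.

```python
# Hypothetical calibration step: statistics of a rest-state EEG/EMG
# feature define the threshold that later separates "rest" from
# "attempted movement". The k = 3 (three-sigma) choice is an assumption.

from statistics import mean, stdev

def calibrate_threshold(rest_samples, k=3.0):
    """Threshold = rest mean + k * rest standard deviation."""
    return mean(rest_samples) + k * stdev(rest_samples)

rest = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0]   # simulated rest-state feature
threshold = calibrate_threshold(rest)
print(round(threshold, 3))  # -> 1.212
```

Any later feature value above this threshold would be treated as a detected movement attempt, which is why the experimenters ask the subject not to move, or even think, during baseline recording.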
“Very weird isn’t it?” says David Karchem, one of Dr. Liew’s trial patients. Karchem suffered a stroke while driving his car eight years ago, and has shown remarkable recovery using her system.
“My stroke came out of the blue and it was terrifying, because I suddenly couldn’t function. I managed to get my car through an intersection and call the paramedics. I don’t know how,” Karchem says.
He gets around with a walking stick today, and has relatively normal function on the right side of his body. However, his left side is clearly damaged from the stroke. While talking, he unwraps surgical bandages and a splint from his left hand, crooked into his chest, to show Dr. Liew the progress since his last VR session.
As a former software engineer, Karchem isn’t fazed by using advanced technology to aid the clinical process. “I quickly learned, in fact, that the more intellectual and physical stimulation you get, the faster you can recover, as the brain starts to fire. I’m something of a lab rat now and I love it,” he says.
Karchem is participating in Dr. Liew’s REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) project, funded by the American Heart Association, under a National Innovative Research Grant. It’s designed to help patients who have suffered strokes reconnect their brains to their bodies.
“My PhD in Occupational Science, with a concentration in Cognitive Neuroscience, focused on how experience changes brain networks,” explains Dr. Liew. “I continued this work as a Postdoctoral Fellow at the National Institute of Neurological Disorders and Stroke at the National Institutes of Health, before joining USC, in my current role, in 2015.
“Our main goal here is to enhance neural plasticity or neural recovery in individuals using noninvasive brain stimulation, brain-computer interfaces and novel learning paradigms to improve patients’ quality of life and engagement in meaningful activities,” she says.
Here’s the science bit: the human putative mirror neuron system (MNS) is a key motor network in the brain that is active both when you perform an action, like moving your arm, and when you simply watch someone else—like a virtual avatar—perform that same action. Dr. Liew hypothesizes that, for stroke patients who can’t move their arm, simply watching a virtual avatar that moves in response to their brain commands will activate the MNS and retrain damaged or neighboring motor regions of the brain to take over the role of motor performance. This should lead to improved motor function.
“In previous occupational therapy sessions, we found many people with severe strokes got frustrated because they didn’t know if they were activating the right neural networks when we asked them to ‘think about moving’ while we physically helped them to do so,” Dr. Liew says. “If they can’t move at all, even if the right neurological signals are happening, they have no biological feedback to reinforce the learning and help them continue the physical therapy to recover.”
For many people, the knowledge that there is "intent before movement" (that the brain has to "think" about moving before the body will do so) is news. We also carry a "body map" inside our heads that predicts our presence in space and time, so we don't bash into things all the time and know when something is wrong. Both of these brain-body elements face massive disruption after a stroke. The brain literally doesn't know how to help the body move.
What Dr. Liew’s VR platform has done is show patients how this causal link works and aid speedier, and less frustrating, recovery in real life.
From the Conference Hall to the Lab
She got the idea while geeking out in Northern California one day.
“I went to the Experiential Technology Conference in San Francisco in 2015, and saw demos of intersections of neuroscience and technology, including EEG-based experiments, wearables, and so on. I could see the potential to help our clinical population by building a sensory-visual motor contingency between your own body and an avatar that you’re told is ‘you,’ which provides rewarding sensory feedback to reestablish brain-body signals.
“Inside VR you start to map the two together, it’s astonishing. It becomes an automatic process. We have seen that people who have had a stroke are able to ’embody’ an avatar that does move, even though their own body, right now, cannot,” she says.
Dr. Liew’s system is somewhat hacked together, in the best possible Maker Movement style; she built what didn’t exist and modified what did to her requirements.
“We wanted to keep costs low and build a working device that patients could actually afford to buy. We use Oculus for the [head-mounted display]. Then, while most EEG systems are $10,000 or more, we used an OpenBCI system to build our own, with EMG, for under $1,000.
“We needed an EEG cap, but most EEG manufacturers wanted to charge us $200 or more. So, we decided to hack the rest of the system together, ordering a swim cap from Amazon, taking a mallet and bashing holes in it to match up where the 12 positions on the head electrodes needed to be placed (within the 10-10 international EEG system). We also 3D print the EEG clips and IMU holders here at the lab.
“For the EMG, we use off-the-shelf disposable sensors. This allows us to track the electromyography, if they do have trace muscular activity. In terms of the software platform, we coded custom elements in C# and implemented them in the Unity3D game engine.”
Dr. Liew is very keen to bridge the gap between academia and the tech industry; she just submitted a new academic paper with the latest successful trial results from her work for publication. Last year, she spoke at SXSW 2017 about how VR affects the brain, and debuted REINVENT at the conference’s VR Film Festival. It received a “Special Jury Recognition for Innovative Use of Virtual Reality in the Field of Health.”
Going forward, Dr. Liew would like to bring her research to a wider audience.
“I feel the future of brain-computer interfaces splits into adaptive, as with implanted electrodes, and rehabilitative, which is what we work on. What we hope to do with REINVENT is allow patients to use our system to re-train their neural pathways, [so they] eventually won’t need it, as they’ll have recovered.
“We’re talking now about a commercial spin-off potential. We’re able to license the technology right now, but, as researchers, our focus, for the moment, is in furthering this field and delivering more trial results in published peer-reviewed papers. Once we have enough data we can use machine learning to tailor the system precisely for each patient and share our results around the world.”
If you’re in L.A., Dr. Liew and her team will be participating in the Creating Reality VR Hackathon from March 12-15 at USC. Details here.
Could virtual reality help stroke survivors regain motor function?
That’s a question Sook-Lei Liew is looking to answer.
Liew, an assistant professor at the University of Southern California and an affiliate of the Stevens Neuroimaging and Informatics Institute at the Keck School of Medicine, was inspired by research from Mel Slater and Jeremy Bailenson on embodiment in VR. If someone’s given a child’s body in VR, for example, they might start exhibiting more childlike behavior.
She wondered if giving stroke survivors with motor impairments a virtual avatar that moves properly could help promote brain plasticity (or the ability to change) and recovery. Maybe it would eventually lead to them to moving an impaired limb again.
“So, kind of like tricking the brain through visual input,” said Liew, who is also director of the Neural Plasticity and Neurorehabilitation Laboratory. “There’s a lot of emerging evidence from neuroscience and psychology that was showing that you can really identify [with the avatar], and it changes your behavior based on the avatar you’re given in VR.”
Virtual reality is a computer-generated simulation of a 3D environment. Using a VR headset with lenses that feed images to the eyes, a person can be virtually transported to another location, or interact with a setting in a seemingly realistic way. It’s commonly been used in gaming, but it’s being tested in other environments, too — like rehab.
Implementing VR in health care and patient treatment isn’t new. It’s been used to help people overcome phobias and anxiety disorders. But the application is starting to take off now that the technology is more developed and commercially available. Some medical schools are looking to train students with virtual simulations, and it’s even helping midwives learn how to deliver babies.
Liew’s research team has been working on a study for about two years called REINVENT, an acronym for Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training. The researchers also collaborated with the USC Institute for Creative Technologies to develop the prototype.
The process works by using a brain-computer interface, which takes a signal from the brain and uses it to control another device: a computer, a robot or, in REINVENT’s case, an avatar in VR.
Next, researchers read electrical signatures of brain activity from the surface of the scalp using electroencephalography, or EEG, for short. The team also uses electromyography, which studies the electrical activity of the muscles. That can tell them whether somebody’s moving or if they’re trying to move.
Those signals are then fed into a program on a laptop. The program has thresholds so that when specific signals in the brain or muscle activity that correspond to an attempt to move are detected, they drive the movement of a virtual arm. The resulting visual feedback through a VR headset could help strengthen neural pathways from the damaged motor cortex to the impaired arm or limb.
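The threshold-based control logic described above can be sketched in a few lines. The real REINVENT software is custom C# running in Unity; the Python below is only an illustration of the idea, and the names, step size, and threshold value are assumptions.

```python
# Minimal sketch of threshold-driven avatar feedback: when the EEG/EMG
# feature for a time window crosses its calibrated threshold, the
# virtual arm advances one step toward the target pose; otherwise it
# holds still. All parameters are illustrative.

def update_arm(position, target, feature, threshold, step=0.1):
    """Advance the virtual arm one step when intent is detected."""
    if feature >= threshold and position < target:
        return min(position + step, target)
    return position  # no detected attempt: arm holds its pose

pos, target = 0.0, 0.3
for feat in [0.2, 1.5, 1.4, 0.1, 1.6]:   # simulated per-window features
    pos = update_arm(pos, target, feat, threshold=1.0)
print(round(pos, 1))  # three supra-threshold windows -> 0.3
```

Running this loop once per analysis window is what turns a detected movement attempt into the visual feedback of the arm moving, closing the sensorimotor loop the researchers describe.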
While the researchers could theoretically extend this process to a patient’s lower limbs, Liew said it can be dangerous for someone with a motor impairment in the lower extremities to try to move with VR, so seated studies are much safer.
The research group recently finished testing the prototype using an Oculus DK2 with 22 healthy older adults, who provided a sample of what the brain and muscle signals look like when they move. They’re now starting to test with stroke patients in a controlled lab setting, aiming to work with 10 in the short term and hundreds in the long term, in both clinical and home environments.
The team also found that giving people neurofeedback of the virtual arm moving in a VR headset was more effective than simply showing it on a screen.
“Their brain activity in the motor regions that we’re trying to target is higher, and they’re able to control the brain-computer interface a little bit better and faster,” Liew said. “It makes the case that there is an added benefit from doing this in virtual reality, which is one of the first things we wanted to know.”
An unclear future
Because VR is still a relatively new technology, there are many unanswered questions on the best ways to use it in the medical profession.
“For the most part, nobody knows how to make great VR experiences, for business or consumer,” Gartner analyst Brian Blau said. “Over time, those issues will get resolved. But for the medical industry, they have the extra added bonus of having even more types of physical behaviors that they have to either mimic or want to measure.”
And while the possibilities for VR in health care are exciting, Liew is careful not to get ahead of herself.
“We think that VR is a promising medium, but we’re moving ahead cautiously,” she said. “A lot of the work that we’re trying to do is to test assumptions, because there’s a lot of excitement about VR, but there’s not that much that’s scientifically known.”
Only time — and plenty of research — will tell.
[ARTICLE] Hemiparetic Stroke Rehabilitation Using Avatar and Electrical Stimulation Based on Non-invasive Brain Computer Interface – Full Text
Brain computer interfaces (BCIs) have been employed in rehabilitation training for post-stroke patients. Patients in the chronic stage, and/or with severe paresis, are particularly challenging for conventional rehabilitation. We present results from two such patients who participated in BCI training with first-person avatar feedback. Five assessments were conducted to assess any behavioural changes after the intervention, including the upper extremity Fugl-Meyer assessment (UE-FMA) and 9 hole-peg test (9HPT). Patient 1 (P1) increased his UE-FMA score from 25 to 46 points after the intervention. He could not perform the 9HPT in the first session. After the 18th session, he was able to perform the 9HPT and reduced the time from 10 min 22 sec to 2 min 53 sec. Patient 2 (P2) increased her UE-FMA from 17 to 28 points after the intervention. She could not perform the 9HPT throughout the training session. However, she managed to complete the test in 17 min 17 sec during the post-assessment session.
These results show the feasibility of this BCI approach with chronic patients with severe paresis, and further support the growing consensus that these types of tools might develop into a new rehabilitation paradigm for stroke patients. However, the results are from only two chronic stroke patients. This approach should be further validated in broader randomized controlled studies involving more patients.
In the space of a few short years, virtual reality has gone from being a technology of the future to part of the mainstream. Devices ranging from the humble Google Cardboard to the Oculus Rift have invaded our living rooms, and VR is set to transform everything from education to the sex industry.
But if VR is to achieve the mass appeal many are predicting, then it needs to feel, as well as look, as real as possible, and not just like we're passively watching a TV set strapped to our faces; the rest of our body needs to be as fully engaged as our eyes.
Let’s get physical
Enter haptic technology, which allows us to literally feel what we’re experiencing in VR. You’ve likely come across haptic tech, sometimes referred to as just ‘haptics’, before, for example when you’ve played a video game and felt a rumble in the handset.
But let’s slow down for a second. Haptic tech is far from mainstream and, crucially, you can’t just put a haptic suit on someone and expect a VR experience to feel real.
That’s why there’s a lot of research going on into what’s known as ‘virtual embodiment’.
This is a complex and fairly new area of study, but it’s concerned with using technology, virtual representation, avatars, storytelling, haptics and all kinds of other subtle visual, auditory and sensory cues to make you feel like you’re inhabiting another body. Whether that’s an avatar of yourself, someone else or even something else.
Exploring the body/mind connection
Virtual embodiment might be a new area of study, but it’s built on research about the connection between our minds and our bodies that goes back more than a decade.
One example is what’s known as the ‘rubber hand illusion’. This was an experiment that essentially proved that, with the right stimuli, people can come to feel ownership of a rubber hand as if it were their own.
Fast-forward to the present day and similar studies have put the rubber hand illusion to the test in a VR setting.
In a 2010 study, researchers found that synchrony between touch, visual information and movement could induce a believable illusion that people actually had full ownership of a virtual arm.
Similar studies have looked at the efficacy of using avatars for rehabilitation and visual therapy, with research suggesting that, in most cases, our virtual bodies can feel as real as our physical bodies.
Defining virtual embodiment
To find out where the research is right now, we spoke to Dr Abraham Campbell, Chief Research Officer at MeetingRoom and Head of the VR Lab at University College Dublin.
“Virtual embodiment is a difficult thing to define as it can mean a lot to different people in different fields,” Campbell explained. He proposes that we look at virtual embodiment in three categories, all of which are a modified version of Tom Ziemke’s work on embodiment.
“Firstly, structural coupling is the most basic and classic definition of embodiment,” Campbell told us. “You’re connected to some form of structure. For example, a body. You move your limb in real life, and a virtual limb moves mimicking your actions…you are embodied within the VR world.”
Campbell offers the example of moving the HTC Vive controller in the real world, and that becoming a hand that’s moving in the virtual world.
Next up is historical embodiment, which is when the VR world you enter ‘remembers’ what’s happened in the past. Campbell uses the example of drawing on a white board, when what you’ve drawn stays there when you return in a day, a week or year from now.
“Finally, social embodiment is when you interact with real or artificial entities within the VR world,” Campbell says. “These interactions have to be behaviorally realistic, so you feel that your body is able to interact with them in the environment.”
And why is studying embodiment important? Campbell explains: “The more embodied the agent or human is within the environment, the more capable they are of interacting and sensing that world.”
Social interaction and education
Campbell’s main focus is on social collaboration in a recreational and educational setting.
“I’m examining the use of telepresence and VR in education, and exploring how I can remotely teach from Ireland to China using technology like Kinect [Microsoft’s motion-sensing input devices for its Xbox game consoles] to scan me in real time, while at the same time view the classroom in China using a set of projectors,” he said.
Bringing a teacher into a distant, virtual classroom will certainly be useful. But the next challenge is working on the intricacies of social interaction, such as facial expressions.
And although interacting with people may not sound like the most interesting use of virtual embodiment, it’s one that’s bound to get attention.
“It is also clearly an industry goal,” Dr Gary McKeown, Senior Lecturer at the School of Psychology at Queen’s University Belfast, tells us. “It is not a coincidence that the company with the most to gain from making the social interaction aspects of virtual reality function well – Facebook – is the one that bought Oculus.”
Imagine being able to remotely control machinery, or just help out family from thousands of miles away – it would change so much about work, commuting and social interaction.
One area that’s particularly interesting to Campbell is using embodiment research to aid telepresence or telerobotics, which is the use of virtual reality or augmented reality (AR) to do just that.
“I’m fascinated by Remote Expert, which is being pioneered by DAQRI,” he told us. “This allows an expert in a field to be remotely placed in augmented reality beside a non-expert to perform a complex task. DAQRI are looking at medical and industry fields to apply this technology, but you can imagine lots more applications.”
Campbell explained that one of the many uses for this kind of tech could be if an oil pipeline bursts and the engineer who designed it is in another country. A local engineer could go out to fix the pipeline, with the designer advising them in real time using VR or AR and a stereo 360-degree camera.
As we learned above, the tech enables the presence element of this. But where embodiment research comes in is making it more engaging, more realistic… ultimately more real, and with it the power to really offer help unhindered by technology.
Campbell explained: “The remote expert needs to be able to use hand gestures to demonstrate what the non-expert should do.
“The expert should be scanned in 3D in real time along with the remote world they are being placed into. This embodiment will allow them to truly be able to assist in whatever complex task they’re asked to perform.”
The implications of this are massive, and could radically change a number of industries.
NASA already has a telerobotics research arm that’s looking at using this technology for space exploration, and it’s being introduced into other fields, from engineering to medicine.
Campbell believes this kind of telepresence will have a big impact on the medical industry as technology advances too.
“One solution I hope to explore in future is to use a full hologram projector pyramid,” he told us. “This approach has been suggested to me by medical professionals who want to meet patients remotely by using a full size projector pyramid [i.e. one that’s about two metres tall]. With this kind of tech, the doctor will be better able to diagnose a patient.”
Therapy and rehabilitation
Virtual embodiment doesn’t just have huge implications for exploring physical presence, but mental and emotional presence too.
In a 2016 study, researchers discovered that virtual avatars that look like our physical selves can foster a sense of embodiment and immersion that, it's believed, could enable people to better work through mental health challenges, as well as real-world trauma.
VR software developer ProReal uses virtual environments containing avatars to play out scenarios that help people deal with a range of challenges, from bullying to PTSD and rehabilitation.
As the tech advances, it could provide a whole new area of therapy for those who aren’t getting the results they need from talking therapies or medication.
But it’s not just more serious mental health challenges, like PTSD, that can be explored; avatars can be used to increase confidence or change our perception of ourselves. Campbell told us about the time he noticed that those with bigger avatars felt more powerful.
“One accidental discovery I had, when I looked at games in VR, was that when the avatar is on average one foot taller than the other characters, it makes the player feel more powerful than the computer-controlled characters,” he explained.
So aside from therapy and mental health use cases, it’s possible to imagine VR being used in corporate settings, to make people feel more confident before presenting to a boardroom.
The challenges of embodiment
If it’s easy for researchers to think of creative ways in which embodiment could have a positive impact on our lives, it’s not much of a leap to consider the negatives too.
Some tech commentators believe social isolation could be an issue as the use of VR headsets becomes more widespread and experiences become more immersive, a concern that’s likely to become more prevalent in gaming.
But many within the industry believe the focus on social isolation is just scaremongering.
“I haven’t witnessed people feeling isolation,” Campbell explained. “Even students who are interested in VR for pure escapism want to share it with others afterwards, and have become evangelists for VR in its ability to be an empathy machine, as with embodiment you can truly get a sense of seeing things from someone else’s perspective.”
Another important talking point is around dissociation or detachment from your own body after exploring virtual worlds. There’s been very little research in this area, but one study from 2006 found that VR can increase dissociative experiences, especially if you’ve been immersed in a virtual world for a long time.
More than a decade on, and with better VR technology and content, it's no surprise that lots of anecdotal evidence points to a similar ‘post virtual reality sadness’, in which the real world doesn't quite compare.
One potentially problematic side-effect Campbell thinks we do need to consider right away is addiction. But he explains that, unlike with traditional gaming addiction, VR can be designed differently.
“In VR, the user needs to replicate the real-world actions, and this short-circuits the traditional dopamine-hit reward cycle that people often become addicted to,” he told us.
So, for example, if you win a gaming level within VR you've likely put in a lot of physical effort and exertion, perhaps by killing the big bad boss at the end of the level. You're likely to be physically tired. That's the difference.
Of course you can still get addicted to that feeling – people get addicted to working out – but Campbell tells us: “It’s the responsibility of game designers to make sure that VR games reward a player for real effort and not make a game that’s hard at first to complete, but then actually gets easier.”
“We’ve studied how humans interact in and with the real world for hundreds of years, and we’ll probably need the same amount of time to study how humans behave in VR and what the implications are,” he told us.
But he urged people to be excited about the prospect of what VR can do, not scared. “While we can only speculate about the impact VR will have, we need to progress with watchful caution rather than hysteria,” he added.
“Self-regulation from content creators is key, but a governing body also needs to take on some responsibility. Ultimately we need more research, and more time, to fully understand the implications.”
The recipe for greater embodiment
But embodiment is only convincing if everything else in the experience is up to scratch. We asked Rogers about how his team works with tech to create the most realistic experiences.
“Achieving a lifelike user experience in VR is now possible because of tremendous advancements in computer processing power, graphics, video and display technologies,” he told us.
“But the tech needs to stay out of the way; it needs to be entirely inconsequential to the experience, otherwise the spell is broken.”
And he adds that the tech is only half of the equation; his job is to ensure the content is telling the best possible story, every step of the way. “Content is also key to creating presence. While the tech is no doubt important, no user is going to suspend disbelief if the experience is awful.”
From entertainment and social interaction to engineering and performing medical procedures, the more we understand, test and implement embodiment experiments, the more we can engineer experiences to feel real – and in turn be more effective.
With advances in research from the likes of Campbell and his team, along with advances in tech to make headsets slimmer, sensory feedback easier to implement and full-body holograms a reality, the sky is only virtually the limit.
This article is brought to you in association with Vodafone.
Cluster of Excellence CITEC is developing a system to support athletes and patients in physical rehabilitation
A new system in a virtual training room is helping users practice and improve sports exercises and other motor activities: six research groups from the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University are working on the ICSPACE project to develop this virtual coaching space. CITEC is funding this large-scale research project with 1.6 million Euro and it will run until 2017. During a project presentation the researchers showcased what their system is capable of. Using the example of a squat exercise, they presented the new technology, which will help not only athletes, but also physical rehabilitation patients learn movement exercises and correct their mistakes. In a new “research_TV” report from Bielefeld University, the coordinators of the project also explain how their new system works.
When the user enters the system, the first thing she sees is a virtual image of herself in the mirror of the virtual coaching space. “With the virtual image in the mirror, users can visually watch themselves and check how they are performing the exercises,” explains Professor Dr. Mario Botsch. The computer scientist heads the project together with computer scientist Professor Dr. Stefan Kopp and sport and cognitive scientist Professor Dr. Thomas Schack. Participating research groups on this large-scale project come from a wide range of disciplinary backgrounds, including biology, psychology, sports science, linguistics, and computer science.
To generate the image in the virtual mirror, the participant’s appearance is 3D-scanned in advance and transferred onto an artificial figure, an avatar. “In the virtual mirror, the user doesn’t just see herself from the front. The mirror can be turned, on demand, in order to see oneself from another side, which allows the user to better judge if the exercise is being performed correctly,” says Botsch, an expert in computer graphics. “With the help of virtual technology, things can be visualized that normally could not be seen,” says Stefan Kopp. Because of this, what is depicted in the virtual mirror – much in contrast to a real mirror – can be modified. “We can give the user visual training cues, such as highlighting individual body parts in color in the mirror,” says the cognitive scientist Kopp. When a user goes down while doing a squat, for instance, the thighs on her avatar appear red until she moves into the correct end position. The system also points out mistakes: “Certain mistakes made during movement exercises, such as bending one’s neck too far during a squat, are depicted in the mirror in an exaggerated way so as to draw attention to the error.” Users can also see a demonstration of the exercise: An additional half-transparent figure is overlaid on the user’s avatar in the mirror and performs the exercise together with the respective user. “The user can then simply follow the movements made by this second figure, which enables her to learn the correct sequence of movements,” says Kopp.
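The article doesn't describe ICSPACE's internals, but the colour-cue idea above can be sketched in a few lines. In this minimal, hypothetical version (the function names, target angle, and tolerance are all assumptions, not CITEC's actual code), the mirror computes the knee flexion angle from three tracked joint positions and keeps the thighs highlighted red until the squat reaches the target end position:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees), given 3D positions of joints a, b, c."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def thigh_highlight(hip, knee, ankle, target_angle=90.0, tolerance=10.0):
    """Mirror-feedback colour for a squat: red until the knee flexion
    angle is within tolerance of the target end position."""
    angle = joint_angle(hip, knee, ankle)
    return "neutral" if abs(angle - target_angle) <= tolerance else "red"
```

With a straight leg (hip directly above knee above ankle) the knee angle is 180°, so the cue stays red; once the measured flexion falls inside the tolerance band around the target, the highlight clears. The same comparison, applied to the neck joint with an exaggerated error term, could drive the error-amplification feedback the researchers describe.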
According to sport and cognitive scientist Thomas Schack, this training environment is the first of its kind in the world. “It is the only system I know of that, in comparison to other virtual systems, simulates and implements the technology of the entire training process, while at the same time adapting flexibly to the user’s actions,” says Schack. “Demonstrating exercises through modeling is part of this process. These models allow the individual exercises to be described and understood, but also serve as objectives for the coach and feedback for the user.” Looking ahead, the researchers would like to use this virtual coaching space to investigate how technical systems can best assist in practicing and performing sports exercises and other motor skills.
In the future, the new system is expected to teach much more than just how to do a squat. “The planned range of activities will include gymnastics exercises, tai chi, yoga, or, for example, how to swing a golf club,” says Schack. In addition to athletes, this system is also appropriate for patients in physical rehabilitation. As Mario Botsch explains, “The system is suitable for coaching high-performance athletics, as well as for therapeutic approaches – for example, treating mobility problems due to an illness.”
The researchers believe that ICSPACE complements existing offerings in sports coaching. “We don’t want to put any coaches out of business,” says Mario Botsch. “But there is considerably more demand for motor learning than one might think. The technologies we are developing are also suitable for motivating elderly people to get active, and to this end the system could be scaled down to even work at home on a smart TV.”
For more information online, please see:
• Video report on ICSPACE from research_tv [English subtitles]: www.youtube.com/watch?v=WDZ4Zgv_wzQ
• Overview of the major CITEC project ICSPACE: graphics.uni-bielefeld.de/research/icspace
• Overview of research projects at CITEC: cit-ec.de/en/content/projects
Prof. Dr. Mario Botsch, Bielefeld University
Faculty of Technology
Telephone: 0521 106-12146
Take the latest Kinect sensor, a PC, a high-definition monitor, and an onscreen avatar—and what do you have? The newest first-person shooter game for Xbox? Nope. What you have is one of the most carefully designed physical therapy (PT) systems available: Kinapsys.
Created by RM ingénierie, a French company that designs and develops software for the healthcare industry, Kinapsys uses game-based exercises to provide comprehensive functional rehabilitation of PT patients. Patients simply stand or sit in front of the Kinect sensor and the monitor while they play games that entail movements that are tailored to each patient’s therapeutic needs. For example, a patient who has undergone knee ligament repair can play a game on a virtual walking trail. As the onscreen avatar strolls along, the patient must squat and move laterally to help the avatar avoid objects that hang from above or protrude from the side. Those movements are beneficial for restoring knee function.
The Kinect sensor captures the patient’s movements and transfers them to the avatar. More importantly, the sensor precisely tracks the position of the patient’s joints and compares his or her range of movement against prescribed goals that the therapist has entered into the system. At the end of the game, the patient receives a score that allows both patient and therapist to measure therapeutic progress accurately.
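The post doesn't document how Kinapsys computes its end-of-game score. As a minimal, hypothetical sketch (the function and parameter names here are assumptions, not RM ingénierie's API), one simple scoring rule would be the percentage of tracked repetitions whose peak joint angle lands inside the range the therapist has prescribed:

```python
def session_score(peak_angles, goal_min, goal_max):
    """Percentage of repetitions whose peak joint angle (degrees)
    falls within the therapist's prescribed [goal_min, goal_max] range."""
    if not peak_angles:
        return 0.0
    in_range = sum(1 for a in peak_angles if goal_min <= a <= goal_max)
    return round(100.0 * in_range / len(peak_angles), 1)

# Example: four squat repetitions against a prescribed 80–100° knee-flexion goal
score = session_score([85, 92, 110, 70], goal_min=80, goal_max=100)  # 50.0
```

Tracking the same score across sessions is one plausible way a therapist could chart progress and decide when to tighten the goal range, in the spirit of what the article describes.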
While the games are at the heart of the therapy, Kinapsys offers other interactive modalities that ensure that patients perform the exercises correctly. Here, too, Kinect-enabled interaction is an essential component. In every case, patients see their own image on screen, with their tracked joints superimposed. Depending on the therapist’s choice, patients might see a mirror image of themselves, which allows them to practice the moves and receive immediate feedback on their performance. Alternatively, they might see an avatar that provides feedback on the speed and rhythm of their exercise movements, or they might interact directly with a game interface.
Programmed with more than 400 specific exercises, Kinapsys allows the therapist to create a regimen customized for each patient at every stage of their therapy—from the earliest stages of rehabilitation through the reinforcement of reacquired skills. The system provides exercises and games that facilitate such goals as improved joint movement, muscle toning, gesture reprogramming, flexibility, and cardiovascular fitness. It also features programs that address the specific PT needs of patients with back problems and neurological damage from strokes. In addition, Kinapsys provides group therapy modules, and the system can be purchased in a mobile configuration, complete with a cart that lets the physical therapist transport Kinapsys to assisted living facilities, community centers, or anywhere that it’s needed.
But what really sets Kinapsys apart from traditional physical therapy is the Kinect sensor’s ability to track body movements precisely. This enables the system to measure and chart patient progress with far greater accuracy than ever, and allows the therapist to modify the regimen for maximum patient benefit.
The Kinect for Windows Team