New developments, based on the concept of wearable soft-robotic devices, make it possible to support impaired hand function during the performance of daily activities and intensive task-specific training. The wearable soft-robotic ironHand glove is such a system that supports grip strength during the performance of daily activities and hand training exercises at home.
This pilot randomized controlled clinical study explored the effect of prolonged use of the assistive ironHand glove during daily activities at home, in comparison to its use as a training tool at home, on functional performance of the hand.
In total, 91 older adults with self-perceived decline of hand function participated in this study. They were randomly assigned to a 4-week intervention of either assistive or therapeutic ironHand use, or to a control group (which received no additional exercise or treatment). All participants performed a maximal pinch grip test, the Box and Blocks Test (BBT) and the Jebsen-Taylor Hand Function Test (JTHFT) at baseline and after the 4-week intervention. Only participants in the assistive and therapeutic groups completed the System Usability Scale (SUS) after the intervention period.
Participants in the assistive and therapeutic groups reported high scores on the SUS (mean = 73, SEM = 2). The therapeutic group showed improvements in unsupported handgrip strength (mean Δ = 3) and pinch strength (mean Δ = 0.5) after 4 weeks of ironHand use (p ≤ 0.039). Scores on the BBT and JTHFT improved not only after 4 weeks of ironHand use (assistive and therapeutic) but also in the control group. Only handgrip strength improved more in the therapeutic group than in the assistive and control groups. No significant correlations were found between changes in performance and assistive or therapeutic ironHand use (p ≥ 0.062).
This study showed that support from the wearable soft-robotic ironHand system, either as an assistive device or as a training tool, may be a promising way to counter the decline in hand function associated with ageing.
Hand function predominantly determines the quality of performance in activities of daily living (ADL) and work-related functioning. Older adults with age-related loss of muscle mass (i.e. sarcopenia) and/or age-related diseases (e.g. stroke, arthritis) [2, 3] suffer from loss of hand function. As a consequence, they experience functional limitations, which affect their independence in performing ADL [3–5].
An effective intervention for improving hand function of (stroke) patients should incorporate key aspects of motor learning, such as high intensity and task specificity in repetitive and functional exercises that are actively initiated by the patient themselves [6, 7]. In a traditional rehabilitation setting, such interventions are delivered with one-on-one attention from a healthcare professional for each patient. This may become problematic in the near future as the population of older adults with age-related diseases (e.g. stroke, rheumatoid arthritis) and hand function decline grows, resulting in an increased need for healthcare professionals and rising healthcare costs. Therefore, new alternatives for providing intensive therapy to all patients are needed.
New technological developments, such as robot-assisted hand training, have the potential to provide such intensive, repetitive and task-specific therapy. Several reviews [9–11] have already shown positive effects on motor function after robot-assisted training of the upper extremity. However, limiting factors of robot-assisted therapy are the need for supervision by a healthcare professional, the high cost of the devices and the limited availability of wearable devices for training at home. Furthermore, it is often inefficient in transferring the trained movements into daily situations. Therefore, the next generation of robotic training approaches should pay substantial attention to home-based rehabilitation and the functional nature of the exercises involved.
A new way of providing functional, intensive and task-specific hand training involves using new technological innovations that support the affected hand directly during the performance of ADL, based on the concept of a wearable robotic glove [13–18]. In this way, the affected hand can be used repeatedly and for prolonged periods of time during functional daily activities. These robotic gloves can use different human-robot interfaces to assist the affected hand, such as EMG control, tendon drives or force sensors [13, 14, 16, 18, 19]. All these robotic gloves use soft and flexible materials to make the devices lightweight and easy to use, accommodating wearable applications. This concept of a wearable soft-robotic glove allows persons with reduced hand function to use their hand(s) during a large variety of functional activities and may even turn the performance of daily activities into extensive training, independent of the availability of healthcare professionals. This is thought to improve hand function and patients’ independence in performing ADL.
Therefore, an easy-to-use and wearable soft-robotic glove (the ironHand system), supporting grip strength and hand training exercises at home, was developed within the ironHand project. Previous studies have examined the feasibility and the orthotic effect of the ironHand system. In a first randomized controlled clinical study, the effect of prolonged use of such an assistive glove during ADL at home on functional performance of the hand was explored, in comparison to its use as a training tool at home.[…]
Fig 2. Overview of the ironHand system with assistive functionality (left panel) and therapeutic functionality (right panel). * Reprinted from Bioservo Technologies under a CC BY license, with permission from Bioservo Technologies, original copyright 2017. https://doi.org/10.1371/journal.pone.0220544.g002
The main purpose of this paper is the development, implementation and testing of a low-cost portable system to assist partially paralyzed patients in their hand rehabilitation after strokes or other injuries. Rehabilitation involves time-consuming and repetitive exercises that are costly and demotivating, and normally requires clinic attendance and direct supervision by a physiotherapist. In this work, the system consists of a graphical user interface (GUI) on a smartphone screen that instructs and motivates patients to do their exercises by themselves. Through the GUI, patients are guided through a sequence of exercises step by step, while the system measures the electrical activity (electromyographic, EMG, signals) of the user’s forearm muscles with a Myo armband. By comparing against a database, the system can tell whether the patient has performed the correct movement. If a correct movement is detected, the system informs the user through the GUI and moves to the next exercise. For preliminary results, the system was extensively tested on a healthy person.
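The paper does not give implementation details, but the pipeline it describes — windowed EMG features matched against a database of reference movements — can be sketched roughly as follows. The feature choices (mean absolute value and RMS, both standard in EMG work), the nearest-centroid matcher, the 2-electrode templates and all names are illustrative assumptions, not the authors' code:

```python
import math

def emg_features(window):
    """Per-channel features from a window of multi-channel EMG samples.

    `window` is a list of samples, each a tuple with one value per
    electrode (the Myo armband has 8; this toy example uses 2).
    Returns mean absolute value (MAV) and root-mean-square (RMS)
    per channel, two common time-domain EMG features.
    """
    n_channels = len(window[0])
    feats = []
    for ch in range(n_channels):
        values = [s[ch] for s in window]
        mav = sum(abs(v) for v in values) / len(values)
        rms = math.sqrt(sum(v * v for v in values) / len(values))
        feats.extend([mav, rms])
    return feats

def classify(feats, templates):
    """Nearest-centroid match against a database of labelled templates."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(feats, templates[label]))

# Hypothetical database: a "fist" template with strong activity on
# channel 0 and an "open hand" template with strong activity on channel 1.
templates = {"fist": [0.8, 0.9, 0.1, 0.15], "open": [0.1, 0.15, 0.8, 0.9]}
window = [(0.9, -0.1), (-0.7, 0.2), (0.85, -0.15), (-0.8, 0.1)]
print(classify(emg_features(window), templates))  # → fist
```

In a real system the window would cover tens of milliseconds of armband samples and the decision would gate progression to the next exercise in the GUI.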
Jarrassé N., Proietti T., Crocher V., Robertson J., Sahbani A., Morel G. and Roby-Brami A. 2014 Robotic exoskeletons: a perspective for the rehabilitation of arm coordination in stroke patients Frontiers in Human Neuroscience 8
Ho N., Tong K., Hu X., Fung K., Wei X., Rong W. and Susanto E. 2011 An EMG-driven exoskeleton hand robotic training device on chronic stroke subjects: task training system for stroke rehabilitation Rehabilitation Robotics (ICORR), 2011 IEEE International Conference on. IEEE 1–5
Stein J., Narendran K., McBean J., Krebs K. and Hughes R. 2007 Electromyography-controlled exoskeletal upper-limb-powered orthosis for exercise training after stroke American Journal of Physical Medicine & Rehabilitation 86 255–261
Bae J.-H., Kim Y.-M. and Moon I. 2012 Wearable hand rehabilitation robot capable of hand function assistance in stroke survivors Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS International Conference on. IEEE 1482–1487
Hasegawa Y., Mikami Y., Watanabe K., Firouzimehr Z. and Sankai Y. 2008 Wearable handling support system for paralyzed patient 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems 741–746 Sept
Sugar T. G., He J., Koeneman E. J., Koeneman J. B., Herman R., Huang H., Schultz R. S., Herring D. E., Wanberg J., Balasubramanian S., Swenson P. and Ward J. A. 2007 Design and control of RUPERT: a device for robotic upper extremity repetitive therapy IEEE Transactions on Neural Systems and Rehabilitation Engineering 15 336–346 Sept
Sathiyanarayanan M. and Mulling T. 2015 Map navigation using hand gesture recognition: a case study using Myo connector on Apple Maps Procedia Computer Science 58 50–57 Second International Symposium on Computer Vision and the Internet (VisionNet’15). [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1877050915021195
Wearable robots that can anticipate and react to users’ movement in real time could dramatically improve mobility assistance and rehabilitation tools.
Wearable robots are programmable body-worn devices, or exoskeletons, that are designed to mechanically interact with the user. Their purpose is to assist or even substitute human motor function for people who have severe difficulty moving or walking.
The BIOMOT project, completed in September 2016, has helped to advance this emerging field by demonstrating that personalised computational models of the human body can effectively be used to control wearable exoskeletons. The project has identified ways of achieving improved flexibility and autonomous performance, which could assist in the use of wearable robots as mobility assistance and rehabilitation tools.
‘An increasing number of researchers in the field of neurorehabilitation are interested in the potential of these robotic technologies for clinical rehabilitation following neurological diseases,’ explains BIOMOT project coordinator Dr. Juan Moreno from the Spanish Council for Scientific Research (CSIC). ‘One reason is that these systems can be optimised to deliver diverse therapeutic interventions at specific points of recuperation or care.’
However, a number of factors have limited the widespread market adoption of wearable robots. Moreno and his team identified a need for wearable equipment to be more compact and lightweight, and better able to anticipate and detect the intended movements of the wearer. In addition, robots needed to become more versatile and adaptable in order to aid people in a variety of different situations: walking on uneven ground, for example, or approaching an obstacle.
In order to address these challenges, the project developed robots with real-time adaptability and flexibility by increasing the symbiosis between the robot and the user through dynamic sensorimotor interactions. A hierarchical approach to these interactions was taken, allowing the project team to apply different layers for different purposes. This means in effect that an exoskeleton can be personalised to an individual user.
‘Thanks to this framework, the BIOMOT exoskeleton can rely on mechanical and bioelectric measurements to adapt to a changing user or task condition,’ says Moreno. ‘This leads to improved robotic interventions.’
Following theoretical and practical work, the project team then tested these prototype exoskeletons with volunteers. A key technical challenge was how to combine a robust and open architecture with a novel wearable robotic system that can gather signals from human activity. ‘Nonetheless, we succeeded in investigating for the first time the potential of automatically controlling human-robot interactions in order to enhance user compliance to a motor task,’ says Moreno. ‘Our research with healthy humans showed such positive and promising results that we are keen to continue validation with both stroke and spinal cord injury patients.’
Indeed, Moreno is confident that the success of the project will open up potential new research avenues. For example, the results will help scientists to develop computational models for rehabilitation therapies, and better understand human movement in more detail.
‘In the project we also defined novel techniques to evaluate and benchmark performances of wearable exoskeletons,’ says Moreno. ‘Further innovation projects are planned by consortium members to follow up on this research, and to exploit developments in the field of human motion capture, human-machine interaction and adaptive control.’
Flint Rehab announces the launch of MiGo, a wearable activity tracker specifically designed for stroke survivors. The device makes its official debut at the 2019 Consumer Electronics Show in Las Vegas.
MiGo is designed to track upper extremity activity — in addition to walking — and is optimized for the movement patterns performed by individuals with stroke. The device is accompanied by a smartphone app that provides motivational support through digital coaching, progressive goal setting, and social networking with other stroke survivors, according to the company in a media release.
“Most wearable fitness trackers are designed to help people get into shape. MiGo is a new type of wearable that helps people regain their independence after a stroke,” says Dr Nizan Friedman, co-founder and CEO of Irvine, Calif-based Flint Rehab, in the release.
“Traditionally, innovation in medical technology has been limited by what insurance companies are willing to cover. As a consumer-level digital health technology, MiGo avoids these constraints, empowering stroke survivors to take their recovery into their own hands.”
A common outcome of stroke is hemiparesis, or impaired movement on one side of the body. One of the leading causes of this lifelong disability is a phenomenon called “learned non-use,” where stroke survivors neglect to use their impaired arm or leg, causing their brain to lose the ability to control those limbs altogether.
MiGo directly addresses the problem of learned non-use by motivating stroke survivors to use their impaired side as much as possible. Using deep-learning algorithms, MiGo accurately tracks how much the wearer is using their impaired side, providing them with an easy-to-understand rep count throughout the day.
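Flint Rehab has not published MiGo's deep-learning model, so as an illustrative stand-in, here is the simplest form such a rep counter could take: threshold crossing with hysteresis over the accelerometer magnitude of the impaired-side wrist. All thresholds and the signal are invented for the sketch:

```python
def count_reps(magnitudes, high=1.2, low=0.8):
    """Count repetitions in a stream of accelerometer magnitudes (in g).

    A rep is counted each time the signal rises above `high` and then
    falls back below `low`; the hysteresis gap between the two
    thresholds avoids double-counting sensor jitter near a single
    threshold. Values are illustrative, not MiGo's actual parameters.
    """
    reps = 0
    active = False
    for m in magnitudes:
        if not active and m > high:
            active = True          # movement started
        elif active and m < low:
            active = False         # movement completed: one rep
            reps += 1
    return reps

# Two bursts of arm movement separated by rest → two reps.
signal = [1.0, 1.3, 1.4, 0.7, 1.0, 1.25, 0.75, 1.0]
print(count_reps(signal))  # → 2
```

A learned model replaces the fixed thresholds with features robust to the atypical, lower-amplitude movement patterns seen after stroke, which is presumably where the deep learning earns its keep.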
MiGo also provides an intelligent activity goal that updates every day based on the wearer’s actual movement ability, ensuring every user stays continuously challenged at the level appropriate for them. Then, the device acts as the wearer’s personal cheerleader, giving them rewards and positive feedback right on their wrist as they work to hit their daily goal, the release explains.
“Suffering a stroke is a traumatic, life-changing event. Many survivors do not have the proper support network to deal with the event, and they may find it difficult to relate with friends and family who don’t understand what they are going through,” states Dan Zondervan, co-founder and vice president of Flint Rehab.
“Using the MiGo app, users can join groups to share their activity data and collaborate with other stroke survivors to achieve group goals. Group members can also share their experiences and offer encouraging support to each other — right in the app,” he adds.
Brain-computer interfaces (BCIs) combined with assistive robots have been developed as a promising method for stroke rehabilitation. However, most current studies are based on complex system setups and expensive, bulky devices. In this work, we designed a wearable electroencephalography (EEG)-based BCI system for hand function rehabilitation after stroke. The system consists of a customized EEG cap, a small commercial amplifier and a lightweight hand exoskeleton. In addition, a visual interface was designed for ease of use. Six healthy subjects and two stroke patients were recruited to validate the safety and effectiveness of the proposed system. An averaged online BCI classification accuracy of up to 79.38% was achieved. This study is a proof of concept, suggesting potential clinical applications in outpatient environments.
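The abstract does not detail its decoding pipeline, but a minimal motor-imagery decoder of the kind such systems build on compares mu-band (8–12 Hz) power over the left and right sensorimotor cortex: imagining a hand movement suppresses mu power over the opposite hemisphere. The following toy sketch assumes this approach; the channel names, sampling rate and synthetic signals are not from the paper:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Mean power of `samples` in [f_lo, f_hi] Hz via a naive DFT."""
    n = len(samples)
    power = []
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power.append((re * re + im * im) / n)
    return sum(power) / len(power)

def classify_hand(c3, c4, fs=128):
    """Toy left/right decision from mu power over electrodes C3 and C4.

    Lower mu power over C3 (left hemisphere) suggests right-hand
    imagery (event-related desynchronization), and vice versa.
    """
    p3 = band_power(c3, fs, 8, 12)
    p4 = band_power(c4, fs, 8, 12)
    return "right" if p3 < p4 else "left"

fs = 128
t = [i / fs for i in range(fs)]                    # one second of data
mu = [math.sin(2 * math.pi * 10 * x) for x in t]   # strong 10 Hz mu rhythm
flat = [0.1 * m for m in mu]                       # suppressed mu rhythm
print(classify_hand(flat, mu))  # → right
```

A real online system adds spatial filtering, artifact rejection and a trained classifier, and the decision drives the hand exoskeleton's open/close command.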
1. G. Pfurtscheller and C. Neuper , “Motor imagery and direct brain-computer communication”, Proceedings of the IEEE, vol. 89, no. 7, pp. 1123-1134, 2001.
2. E. Donchin , K. Spencer and R. Wijesinghe , “The mental prosthesis: assessing the speed of a P300-based brain-computer interface”, IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 174-179, 2000.
4. Xiaorong Gao , Dingfeng Xu , Ming Cheng and Shangkai Gao , “A bci-based environmental controller for the motion-disabled”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 137-140, 2003.
7. M. A. Cervera , S. R. Soekadar , J. Ushiba et al “Brain-computer interfaces for post-stroke motor rehabilitation: a meta-analysis”, Annals of Clinical and Translational Neurology, vol. 5, no. 5, pp. 651-663, 2018.
8. K. Ang , K. Chua , K. Phua et al “A Randomized Controlled Trial of EEG-Based Motor Imagery Brain-Computer Interface Robotic Rehabilitation for Stroke”, Clinical EEG and Neuroscience, vol. 46, no. 4, pp. 310-320, 2014.
9. N. Bhagat , A. Venkatakrishnan , B. Abibullaev et al “Design and Optimization of an EEG-Based Brain Machine Interface (BMI) to an Upper-Limb Exoskeleton for Stroke Survivors”, Frontiers in Neuroscience, vol. 10, pp. 122, 2016.
10. J. Webb , Z. G. Xiao , K. P. Aschenbrenner , G. Herrnstadt , and C. Menon , “Towards a portable assistive arm exoskeleton for stroke patient rehabilitation controlled through a brain computer interface”, in Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS International Conference, pp. 1299-1304, 2012.
11. A. L. Coffey , D. J. Leamy , and T. E. Ward , “A novel BCI-controlled pneumatic glove system for home-based neurorehabilitation”, in Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pp. 3622-3625, 2014.
12. D. Bundy , L. Souders , K. Baranyai et al “Contralesional Brain-Computer Interface Control of a Powered Exoskeleton for Motor Recovery in Chronic Stroke Survivors”, Stroke, vol. 48, no. 7, pp. 1908-1915, 2017.
13. X. Shu , S. Chen , L. Yao et al “Fast Recognition of BCI-Inefficient Users Using Physiological Features from EEG Signals: A Screening Study of Stroke Patients”, Frontiers in Neuroscience, vol. 12, pp. 93, 2018.
15. G. Schalk , D. McFarland , T. Hinterberger , N. Birbaumer and J. Wolpaw , “BCI2000: A General-Purpose Brain-Computer Interface (BCI) System”, IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034-1043, 2004.
16. M. H. B. Azhar , A. Casey , and M. Sakel , “A cost-effective BCI assisted technology framework for neurorehabilitation”, The Seventh International Conference on Global Health Challenges, 18th-22nd November, 2018. (In Press)
17. C. M. McCrimmon , M. Wang , L. S. Lopes et al “A small, portable, battery-powered brain-computer interface system for motor rehabilitation”, Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2776-2779, 2016.
18. J. Meng , B. Edelman , J. Olsoe et al “A Study of the Effects of Electrode Number and Decoding Algorithm on Online EEG-Based BCI Behavioral Performance”, Frontiers in Neuroscience, vol. 12, pp. 227, 2018.
Many were shocked when “Game of Thrones” actress Emilia Clarke recently disclosed in The New Yorker that she suffered two brain aneurysms, in 2011 and 2013. Owing to her bravery and resilient spirit, she recovered quickly enough to resume normal work life within weeks of the operation.
Clarke, however, belongs to the fortunate 10 percent of survivors who recover completely after a stroke. By contrast, 40 percent of stroke patients have moderate to severe impairments, according to statistics published by Healthline.
Strokes are common in the U.S., causing 1 in 19 deaths. The American Stroke Association’s data revealed that 795,000 Americans suffer strokes every year, of which 185,000 are recurrent attacks. Reintegrating into society with motor skills intact poses a serious challenge.
Strokes are a leading cause of disability in the U.S. One of the main disabilities stroke survivors deal with is apraxia, a neurological disorder that impairs the ability to perform purposeful movements. Other disabilities include one-sided paralysis of the body, known as hemiplegia, and dysphagia, difficulty swallowing caused by damage to the part of the brain that controls swallowing.
Generally, rehabilitation and therapy help, but the field requires more research and innovation. A few scientists are trying to come up with more novel methods of therapy using the power of technology. For instance, the U.S. Food and Drug Administration (FDA) approved the H200 Wireless Hand Rehabilitation System, which is available to purchase commercially.
It is a wireless device that stimulates muscles in the forearm and hand. As of now, this is the only commercially available product for the hand muscles, according to a research paper reviewing devices for stroke rehabilitation. In addition, Biomove 3000, Hand Mentor PRO, mPower 1000 and NeuroMove are other devices developed in the past that are still available to purchase.
There is more hope and promise for survivors losing feeling in their arms, as extensive research is in progress. Another new device, meant to be worn like a glove, was developed as a prototype by researchers at Stanford University and Georgia Tech.
What is the latest prototype?
The innovation was the brainchild of a graduate student from Georgia Tech, Caitlyn Seim. She invented the glove to gently stimulate nerves for several hours a day to improve sensation in the arms and hands. The vibrating glove can be worn during normal day-to-day activities like shopping or listening to music.
Once the prototype was made, Seim showed it to her Stanford University professors to get help with further research and to eventually push the device to clinical testing. Maarten Lansberg, an associate professor of neurology, and Allison Okamura, a professor of mechanical engineering at Stanford University, are on board with the project. After the glove showed positive results in pilot studies, the trio received a grant from the prestigious Wu Tsai Neurosciences Institute to expand their research.
Currently, the researchers are improving the device for more comfort and accessibility before starting clinical tests again. The long-term vision is to build a device capable of helping stroke survivors restore lost function in their arms and hands.
The trio are united by a strong curiosity and passion to help stroke survivors. Lansberg had been treating stroke patients as a medical doctor, and Okamura has done research on touch-based devices with the intention to help such patients.
Seim’s interest stems from building wearable computing devices like virtual goggles and smartwatches, but she now intends to use her expertise to benefit health care and accessibility. She will be joining Stanford as a postdoctoral fellow in the fall, and she’ll continue working on the glove.
The most obvious sign someone has survived a stroke is usually some trouble speaking or walking. But another challenge may have an even greater impact on someone’s daily life: Often, stroke survivors lose sensation and muscle control in one arm and hand, making it difficult to dress and feed themselves or handle everyday objects such as a toothbrush or door handle.
Now, doctors and engineers at Stanford and Georgia Tech are working on a novel therapy that could help more stroke survivors regain the ability to control their arms and hands – a vibrating glove that gently stimulates the wearer’s hand for several hours a day.
Caitlyn Seim, a graduate student at Georgia Tech, started the project in the hope that the glove’s stimulation could have some of the same impact as more traditional exercise programs. After developing a prototype, she approached Stanford colleagues Maarten Lansberg, an associate professor of neurology, and Allison Okamura, a professor of mechanical engineering, in order to expand her efforts. With help from a Wu Tsai Neurosciences Institute Neuroscience seed grant, the trio are working to improve on their prototype glove and bring the device closer to clinical testing.
“The concept behind it is that users wear the glove for a few hours each day during normal daily life – going to the supermarket or reading a book at home,” said Seim. “We are hoping that we can discover something that really helps stroke survivors.”
Reaching for new stroke treatments
Seim, Lansberg and Okamura’s goal is a tall order. Despite some individual success stories, the reality is that most stroke patients struggle to regain the ability to speak, move around and take good care of themselves.
“Stroke can affect patients in many ways, including causing problems with arm function, gait, vision, speech and cognition,” Lansberg said, yet despite decades of research, “there are essentially no treatments that have been proven to help stroke patients recover these functions.”
It was in that context that all three researchers independently started thinking about what they could do to improve the lives of people who’ve survived strokes. As the medical doctor in the bunch, Lansberg had already been treating stroke patients for years and has helped lead the Stanford Stroke Collaborative Action Network, or SCAN, another project of the Wu Tsai Neurosciences Institute. Okamura, meanwhile, has focused much of her research on haptic or touch-based devices, and in the last few years her lab has spent more and more time thinking about how to use those devices to help stroke survivors.
“Rehabilitation engineering provides a unique opportunity for me to work directly with the patients who are affected by our research,” Okamura said. “The potential to translate the kind of technology relatively quickly to a commercial product that can reach a large number of stroke patients in need of therapy is also very exciting.”
For her part, Seim’s interest in stroke stems from an interest in wearable computing devices – but rather than build more virtual reality goggles and smartwatches, Seim said she wants to apply wearable computing to the areas of health and accessibility, “areas which have some of the most compelling problems to me,” she said.
Growing a new idea
With that ambition in mind, Seim built a prototype vibrating glove that she hoped would stimulate nerves and improve both sensation and function in stroke survivors’ hands and arms. After collecting some promising initial data, Seim reached out to the Stanford team.
“Stanford has SCAN and StrokeNet, along with a community of interdisciplinary engineering and computing research, so I reached out to Maarten, and he was very supportive,” Seim said.
Now, Seim, Lansberg and Okamura are revising the glove’s design to improve its function and to add elements for comfort and accessibility. Then, they’ll begin a new round of clinical tests at Stanford.
Long term, the hope is to build something that helps stroke survivors recover some of the functions they have lost in their hands and arms. And if initial tests work out, Lansberg said, it’s possible the same basic idea could be applied to treat other complications associated with stroke.
“The glove is an innovative idea that has shown some promise in pilot studies,” Lansberg said. “If proven beneficial for patients with impaired arm function, it is conceivable that variations of this type of therapy could be developed to treat, for example, patients with impaired gait.”
This paper presents the design and development of a highly articulated, continuum, wearable, fabric-based Soft Poly-Limb (fSPL). This fabric soft arm acts as an additional limb that provides the wearer with mobile manipulation assistance through the use of soft actuators made with high-strength inflatable fabrics. In this work, a set of systematic design rules is presented for the creation of highly compliant soft robotic limbs through an understanding of the fabric-based components’ behavior as a function of input pressure. These design rules are generated by investigating a range of parameters through computational finite-element method (FEM) models focusing on the fSPL’s articulation capabilities and payload capacity in 3D space. The theoretical motion and payload outputs of the fSPL and its components are experimentally validated, and additional evaluations verify its capability to safely carry loads 10.1× its body weight by wrapping around the object. Finally, we demonstrate how the fully collapsible fSPL can comfortably be stored in a soft waist belt and interact with the wearer through spatial mobility and preliminary pick-and-place control experiments.
Recovering after a stroke isn’t easy, but Neofect is here to help patients track their rehabilitation progress with an innovative wearable solution.
At CES 2019, the company exhibited its Rapael Smart Glove, a high-tech rehab device that helps stroke patients improve their hand movements. The device also syncs with an app, where patients can play rehabilitation games and track milestones.
Neofect didn’t disclose a price for the Rapael Smart Glove, but customers can go on the company’s website to buy it. The Rapael Smart Glove is also available for clinics that need stroke rehabilitation equipment.
Using the Rapael Smart Glove is very easy: Gently slide on the device, connect to the Rapael App with a smartphone or tablet, and play a variety of rehabilitation games. The app’s fun games include virtual tennis matches and house painting, and they’re available in different levels to balance challenge and motivation. Plus, the Rapael App collects practice data for patients, so they can track their hand recovery progress.
With the Rapael Smart Glove, patients can practice hand exercises and improve dexterity over time. An advantage of the Rapael Smart Glove is that it can help stroke patients who might not have immediate access to hospitals or physical therapy facilities, so they can work on their hand movements without leaving home.
“We aim to help patients all around the world including, but not limited to, those unable to receive appropriate treatment due to economic or geographic reasons,” says Neofect’s website. “By providing rehab training products and services that are available anytime and anywhere, we are committed to improving patient’s rehab experiences and quality of life.”
New technologies mean we won’t just see and hear digital information. We’ll also feel it.
By Matthew Hutson 12.20.2018
In Steven Spielberg’s 2018 film Ready Player One, based on the 2011 book by Ernest Cline, people enter an immersive world of virtual reality called the OASIS. What was most gripping about the futuristic tech in this sci-fi movie was not the VR goggles, which don’t seem so far off from the headsets currently sold by Oculus, HTC and others. It was the engagement of a sense beyond sight and sound: touch.
Characters wore gloves with feedback that let them feel the imaginary objects in their hands. They could upgrade to full body suits that reproduced the force of a punch to the chest or the stroking of a caress. And yet these capabilities, too, might not be as far off as we imagine.
We rely on touch — or “haptic” — information continuously, in ways we don’t even consciously recognize. Nerves in our skin, joints, muscles and organs tell us how our bodies are positioned, how tightly we’re holding something, what the weather is like, or that a loved one is showing affection through a hug. Around the world, engineers are now working to recreate realistic touch sensations, for video games and more. Engaging touch in human-computer interactions would enhance robotic control, physical rehabilitation, education, navigation, communication and even online shopping.
“In the past, haptics has been good at making things noticeable, with vibration in your phone or the rumble packs in gaming controllers,” says Heather Culbertson, a computer scientist at the University of Southern California. “But now there’s been a shift toward making things that feel more natural, that more mimic the feel of natural materials and natural interactions.”
Take surgical robots, which allow doctors to operate from the other side of the world, or to manipulate tools too small or in spaces too tight for their hands. Numerous studies have shown that adding haptic feedback to the control of these robots increases accuracy and reduces tissue damage and operation time. Robots with haptic feedback also let doctors train on patients who exist only in virtual reality while feeling the sensations of actual cutting and suturing. One of Culbertson’s students is currently developing dental simulators so that a dental student’s first mistaken drilling is not on a real tooth.
Getting a feel for what the robot under your command is doing would also be helpful for defusing bombs or extracting people from collapsed buildings. Or for repairing a satellite without suiting up for a spacewalk. Even Disney has looked into haptic telepresence robots, for safe human-robot interactions. They developed a system that has pneumatic tubes connecting a humanoid’s robotic arms with a mirror set of arms for a human to grasp. The person can manipulate the mirror bot to cause the first bot to hold a balloon, pick up an egg or pat a child on the cheeks.
On a smaller scale, the lab of roboticist Jamie Paik at the Swiss Federal Institute of Technology in Lausanne (EPFL) has developed a portable haptic interface called Foldaway. Devices about the size and shape of a square drink coaster have three hinged arms that pop up, meeting in the middle. (Stefano Mintchev, a postdoc in the lab, calls them “miniaturized origami robots.”) A small plastic handle can be stuck on top where the arms meet, creating a joystick that acts in three dimensions — and the arms push back, to give the user a sense of the objects they’re pushing against. In demos, the team has used the devices to control an aerial drone, squeeze virtual objects and feel the shape of virtual human anatomy.
There are certain challenges in grasping haptics that might seem insurmountable — for instance, how do you provide a sense of weight when grabbing and lifting weightless digital objects? But by studying neuroscience, engineers have managed to find a few workarounds. Culbertson and colleagues developed a device called Grabity for the gravity problem. It’s a kind of vise that one grips and squeezes to pick up virtual objects. Simply by vibrating in certain ways, it can produce the illusion of weight and inertia.
But “fooling the brain only goes so far,” says Ed Colgate, a mechanical engineer at Northwestern University who works in haptics. It’s sometimes easy to break haptic illusions. To his mind, in the long run engineers will need to recreate the physics of the real world — weight and all — as faithfully as possible. “That’s a really hard problem.”
Graspable devices often take advantage of kinesthetic sensations: feelings of movement, position and force mediated by nerves not just in our skin but also in our muscles, tendons and joints. Wearable devices, on the other hand, usually rely on tactile sensations — pressure, friction or temperature — mediated by nerves in the skin.
A variety of experimental devices are worn on the finger, pressing against the finger pad with different degrees of force as one touches objects in virtual reality. But a recent device provides the same sort of feedback without covering the finger pad. Instead, it’s worn where one might wear a ring and contains motors that stretch the skin underneath. That keeps the fingers free to interact with real-world objects while still sensing “virtual” ones — a useful feature for both games and serious applications.
In one test, a person could hold a real piece of chalk and feel pressure as they “wrote” on a virtual chalkboard by virtue of a haptic illusion: As they simultaneously saw the chalk contact the board and felt their skin stretched, they were fooled into feeling pressure in their fingertips.
More commonly, wearable haptic devices communicate through vibration. Culbertson’s lab, for example, is working on a wristband that guides the wearer by vibrating in the direction he or she needs to turn. And NeoSensory, a company founded by Stanford neuroscientist David Eagleman, is developing a vest with 32 vibratory motors that was showcased in an episode of HBO’s sci-fi series Westworld where it ostensibly helped characters identify the direction of approaching enemies.
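The idea behind a direction-guiding wristband can be reduced to a simple mapping from a desired heading to one of several motors spaced around the wrist. Below is a minimal, hypothetical sketch of that mapping; the motor count and layout are assumptions for illustration, not details of Culbertson's actual device.

```python
# Hypothetical sketch of directional guidance on a vibrating wristband:
# map a desired turn direction (degrees clockwise from "straight ahead")
# to whichever of N evenly spaced motors points that way.

def motor_for_heading(heading_deg, n_motors=8):
    """Return the index of the motor closest to the given heading."""
    step = 360 / n_motors
    return round((heading_deg % 360) / step) % n_motors

print(motor_for_heading(0))    # 0: motor facing straight ahead
print(motor_for_heading(90))   # 2: motor on the wearer's right
print(motor_for_heading(-90))  # 6: motor on the wearer's left
```

The same lookup generalizes to any ring of actuators, such as the 32 motors in a vest, by changing `n_motors`.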
One of the vest’s first real applications will be to translate sound into tactile sensation to make spoken language more intelligible to people with profound or complete hearing loss. Eagleman is also working on translating aspects of the visual world into vibrations for people who are blind. Other efforts involve more abstract information such as market and environmental data — instead of a grid indicating where things are spatially, a complex pattern of vibrations might indicate the prices of a dozen stocks.
Vibrating motors can be bulky, so some labs are developing more comfortable solutions. Paik’s lab at EPFL is working on a soft pneumatic actuator (SPA) skin — a sheet of flexible silicone less than 2 millimeters thick that’s dotted with tiny air pockets. They can be inflated and deflated independently dozens of times per second and thereby act as pixels — or “taxels,” for tactile elements — creating a grid of sensation. They might provide feelings of the kind the suits offer in Ready Player One, or feedback on the positioning of robots or prosthetic limbs. SPA skin is also embedded with sensors made of a new, corrosion-resistant metal alloy that allows the same skin to be used for computer input when the user squeezes it.
An even thinner haptic film — less than half a millimeter thick — is also in the offing, created by Novasentis and made of a new form of polyvinylidene fluoride plastic that balances strength, flexibility and electrical responsiveness. When the film is layered on one side of a sheet of flexible material and an electrical charge is applied, the film contracts and flexes the sheet, applying pressure against the skin. Novasentis is now providing the material to device manufacturers who are putting it into gloves for virtual reality and gaming.
“You can distinguish between water and sand and rock,” says Sri Peruvemba, the company’s vice president of marketing. VR designers could also create more abstract representations, such as sensation-delivered messages about the state of a game. “We can create a whole haptic language with our technology,” Peruvemba says.
Vibrations can produce another kind of haptic illusion: the sensation of pulling. If a device that vibrates back and forth parallel to the skin’s surface moves quickly in one direction and slowly back the other way, many times a second, it feels as if it’s tugging the skin in the first direction.
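The pulling illusion comes from asymmetry: the actuator delivers a short, strong acceleration in the "tug" direction and a longer, weaker one back, so the net force over each cycle is zero, but the skin's nonlinear sensitivity registers mainly the sharp phase. A minimal sketch of one such cycle, with made-up amplitude and timing values purely for illustration:

```python
# Hypothetical asymmetric vibration profile for a pseudo-force ("tugging")
# illusion. One cycle: a brief, strong pull phase followed by a long, weak
# return phase. Net impulse per cycle is zero by construction, yet the
# asymmetric peaks make it feel like a steady tug in the pull direction.

def asymmetric_cycle(peak=8.0, ratio=4, samples=100):
    """Return acceleration samples for one cycle.

    `ratio` is how many times longer (and proportionally weaker)
    the return phase is compared to the pull phase.
    """
    pull_len = samples // (ratio + 1)
    back_len = samples - pull_len
    pull = [peak] * pull_len                         # short, strong pull
    back = [-peak * pull_len / back_len] * back_len  # long, weak return
    return pull + back

cycle = asymmetric_cycle()
print(round(sum(cycle), 9))    # 0.0 -- no net force over a cycle
print(max(cycle), min(cycle))  # 8.0 -2.0 -- but sharply asymmetric peaks
```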
[Video: When users don FlyJacket, a wearable haptic device, they can control the flight path of aerial drones with their arms and torso and feel the pushback of gusts of wind. Credit: EPFL LIS via YouTube]
While most wearables use tactile sensation, they can also use the muscle-joint-tendon input of kinesthetic sensation. Engineers have developed robotic exoskeletons, a kind of scaffolding strapped to the body with sensors and motors, that can help paralyzed people walk, give soldiers super strength, and let people control robots at a distance. A lab at EPFL has developed the FlyJacket, which one wears with arms straight out to the sides, connected by pistons to the waist. It doesn’t look especially fly, but it allows people to control the flight of aerial drones by moving their arms and twisting their torsos. When the drone feels a gust of wind, you do too.
The final category of devices is touchable interfaces, such as smartphone screens that give a little bump when you click on an app. Culbertson’s work pushes beyond simple bumps and buzzes. She simulates texture on surfaces using what she calls “data-driven haptics.” Instead of writing complicated algorithms or physics models to generate vibrations that simulate real ones, she records what happens when something is dragged over different fabrics or other materials at different speeds and pressures. Then she has a surface play back the vibrations when a pen is dragged across it. Applications could include online shopping and virtual museums.
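In spirit, data-driven playback replaces a physics model with a lookup: recordings are indexed by the conditions under which they were captured, and at render time the system plays the recording captured under the most similar conditions. The sketch below illustrates the idea with a nearest-neighbor lookup; the table entries are invented placeholders, not real measurements or Culbertson's actual pipeline.

```python
# Hypothetical sketch of "data-driven haptics": store vibration recordings
# indexed by the speed and force at which a real material was stroked, then
# play back the recording captured under the closest matching conditions.

recordings = {
    # (speed mm/s, force N) -> recorded vibration snippet (amplitude samples)
    (50, 0.5): [0.1, -0.1, 0.2, -0.2],
    (100, 0.5): [0.3, -0.3, 0.4, -0.4],
    (100, 1.0): [0.5, -0.5, 0.6, -0.6],
}

def playback(speed, force):
    """Pick the recording captured under the most similar conditions."""
    key = min(recordings,
              key=lambda k: (k[0] - speed) ** 2 + (k[1] - force) ** 2)
    return recordings[key]

# A pen dragged at 90 mm/s with 0.9 N lands nearest the (100, 1.0) recording.
print(playback(90, 0.9))
```

Real systems interpolate between recordings rather than snapping to one, so the texture varies smoothly as the pen speeds up or presses harder.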
Touchable surfaces also allow types of illusions. For instance, Culbertson says, playing the sound of a button clicking as one taps a picture of a button makes it feel as if the button is actually clicking. Or making the screen appear to deform under one’s finger can make it feel softer. People construct perception by tying together sight, sound, touch, taste and smell — and, as Culbertson says, “It’s really easy to fool your brain if you have a mismatch between your senses.”
Realistic haptics for VR may forever be clunky and expensive. Or technology may eventually make Ready Player One look quaint. In either case, as we can see with baby steps such as the pervasive rumbling of video game controllers and endlessly vibrating phones and watches, haptic devices are here to stay, adding a new dimension to our digital lives.
Matthew Hutson is a freelance science writer in New York City who writes for Science, the Atlantic, Scientific American and other publications. He’s the author of The 7 Laws of Magical Thinking. He tweets at @SilverJacket.