Archive for category Robotics

[BLOG POST] The Brain’s Sensational Juggling Act


You’re bombarded with sensory information every day: sights, sounds, smells, touches and tastes. It’s a constant barrage that your brain has to manage, deciding which information to trust and which sense to use as a backup when another fails. Understanding how the brain evaluates and juggles all this input could be the key to designing better therapies for patients recovering from stroke, nerve injuries, or other conditions. It could also help engineers build more realistic virtual experiences for everyone from gamers to fighter pilots to medical patients.

Now, some researchers are using virtual reality (VR) and even robots to learn how the brain pulls off this juggling act.

Do You Believe Your Eyes?

At the University of Reading in the U.K., psychologist Peter Scarfe and his team are currently exploring how the brain combines information from touch, vision, and proprioception – our sense of where our body is positioned – to form a clear idea of where objects are in space.

Generally, the brain goes with whichever sense is more reliable at the time. For instance, in a dark room, touch and proprioception trump vision. But when there’s plenty of light, you’re more likely to believe your eyes. Part of what Scarfe’s crew hopes to eventually unravel is how the brain combines information from both senses and whether that combination is more accurate than touch or sight alone. Does the brain trust input from one sense and ignore the other, does it split the difference between the two, or does it do something more complex?
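One leading account, often called maximum-likelihood cue integration, holds that the brain weights each sense by its reliability, the inverse of its variance, and that the combined estimate ends up less variable than either sense alone. Here is a minimal sketch of that model, with made-up noise levels rather than measured ones:

```python
import numpy as np

rng = np.random.default_rng(0)

true_height = 2.0       # actual height of a target (arbitrary units)
sigma_vision = 0.5      # assumed visual noise (standard deviation)
sigma_haptic = 1.0      # assumed haptic noise (standard deviation)

# Single-trial noisy estimates from each sense
vision = true_height + rng.normal(0, sigma_vision)
haptic = true_height + rng.normal(0, sigma_haptic)

# Reliability = inverse variance; weight each cue by its relative reliability
w_vision = (1 / sigma_vision**2) / (1 / sigma_vision**2 + 1 / sigma_haptic**2)
combined = w_vision * vision + (1 - w_vision) * haptic

# The model predicts the combined estimate is less variable than either cue
sigma_combined = (1 / (1 / sigma_vision**2 + 1 / sigma_haptic**2)) ** 0.5
print(f"vision sd {sigma_vision}, haptic sd {sigma_haptic}, "
      f"combined sd {sigma_combined:.3f}")
```

Comparing participants’ actual precision against predictions like this one, versus, say, a winner-take-all model that simply picks the better cue, is what lets researchers tell the combination strategies apart.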

To find out, the team is using a VR headset and a robot called Haptic Master.

While volunteers wear the VR headset, they see four virtual balls – three in a triangle formation and one in the center. They can also reach out and touch four real spheres that appear in the same place as the ones they see in VR: the three in the triangle formation are just plastic and never move, but the fourth is actually a ball bearing at the end of Haptic Master’s robot arm. Researchers use the robot to move this fourth ball between repetitions of the test. Think of the three-ball-triangle as a flat plane in space. The participant has to decide whether the fourth ball is higher or lower than the level of that triangle.
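Stripped of the apparatus, the judgment itself is a geometry problem: is the fourth ball above or below the plane defined by the other three? A sketch of that ideal computation, with hypothetical coordinates:

```python
import numpy as np

# Three reference balls defining the plane (hypothetical coordinates, meters)
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([0.3, 0.0, 0.0])
p3 = np.array([0.0, 0.3, 0.0])
probe = np.array([0.1, 0.1, 0.02])   # the robot-positioned fourth ball

# Unit normal of the triangle's plane
normal = np.cross(p2 - p1, p3 - p1)
normal = normal / np.linalg.norm(normal)

# Signed distance from the plane: positive = above, negative = below
signed_dist = np.dot(probe - p1, normal)
print("higher" if signed_dist > 0 else "lower",
      f"by {abs(signed_dist) * 100:.1f} cm")
```

The participant, of course, never sees coordinates; the interesting question is how well their noisy senses approximate this computation.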

It’s a task that requires the brain to weigh and combine information from multiple senses to decide where the fourth ball is in relation to the other three. Participants get visual cues about the ball’s location through the VR headset, but they also use their haptic sense – the combination of touch and proprioception – to feel where the ball is in space.

The VR setup makes it easier to control the visual input and make sure volunteers aren’t using other cues, like the location of the robot arm or other objects in the room, to make their decisions.

Collectively, volunteers have performed this task hundreds of times. Scarfe and his colleagues are looking at how accurately participants judge the ball’s position when they use only their eyes, only their haptic sense, or both senses at once. The team is then comparing those results to several computer models, each predicting how a person would estimate the ball’s position if their brain combined the sensory information in a different way.

So far, the team needs more data to learn which model best describes how the brain combines sensory cues. But they say that their results, and those of others working in the field, could one day help design more accurate haptic feedback, which could make interacting with objects in virtual reality feel more realistic.

On Shaky Footing

Anat Lubetzky, a physical therapy researcher at New York University, is also turning to VR. She uses the burgeoning technology to study how our brains weigh different sensory input to help us when things get shaky — specifically, if people rely on their sense of proprioception or their vision to keep their balance.

Conventional wisdom in sports medicine says that standing on an uneven surface is a good proprioception workout for patients in rehabilitation after an injury. That’s because it forces your somatosensory system, the nerves involved in proprioception, to work harder. So if your balance is suffering because of nerve damage, trying to stabilize yourself while standing on an uneven surface, like a bosu ball, should help.

But Lubetzky’s results tell a different story.

In the lab, Lubetzky’s subjects strap on VR headsets and stand on either a solid floor or an unsteady surface, like a wobble board. She projects subtly moving dots onto the VR display and uses a pressure pad on the floor to measure how participants’ bodies sway.

It turns out, when people stand on an unstable surface, they’re more likely to sway in time with the moving dots. But on a stable surface, they seem to pay less attention to the dots.
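One way to quantify that dependence, sketched here with synthetic signals (the sampling rate, coupling gain, and noise level are all assumptions, not Lubetzky’s parameters), is to correlate the center-of-pressure trace from the pressure pad with the dot motion:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # pressure-pad sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)   # one 60-second trial

dots = 0.5 * np.sin(2 * np.pi * 0.2 * t)        # slow dot oscillation
gain = 0.3                                       # visual coupling strength
cop = gain * dots + rng.normal(0, 0.2, t.size)   # center of pressure (cm)

# Stimulus-sway correlation: higher values mean stronger visual dependence
r = np.corrcoef(dots, cop)[0, 1]
print(f"stimulus-sway correlation: {r:.2f}")
```

A higher correlation on the wobble board than on solid ground would be the signature of the visual dependence described above.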

So rather than working their somatosensory systems harder, it seems people use their vision to look for a fixed reference point to help keep them balanced. In other words, the brain switches from a less reliable sense to a more reliable one, a process called sensory weighting.

Ultimately, Lubetzky hopes her VR setup could help measure how much a patient with somatosensory system damage relies on their vision. This knowledge, in turn, could help measure the severity of the problem so doctors can design a better treatment plan.

As VR gets more realistic and more immersive – partly thanks to experiments like these – it could offer researchers an even more refined tool for picking apart what’s going on in the brain.

Says Lubetzky, “It’s been a pretty amazing revolution.”

Source: The Brain’s Sensational Juggling Act – D-brief


[ARTICLE] Effect of Robot-Assisted Game Training on Upper Extremity Function in Stroke Patients – Full Text

Objective: To determine the effects of combining robot-assisted game training with conventional upper extremity rehabilitation training (RCT) on motor and daily functions in comparison with conventional upper extremity rehabilitation training (OCT) in stroke patients.

Methods: Subjects were eligible if they were able to perform the robot-assisted game training and were divided randomly into an RCT and an OCT group. The RCT group performed one daily session of 30 minutes of robot-assisted game training with a rehabilitation robot, plus one daily session of 30 minutes of conventional rehabilitation training, 5 days a week for 2 weeks. The OCT group performed two daily sessions of 30 minutes of conventional rehabilitation training. The effects of training were measured by the Manual Function Test (MFT), Manual Muscle Test (MMT), Korean version of the Modified Barthel Index (K-MBI), and a questionnaire about satisfaction with training. These measurements were taken before and after the 2-week training.

Results: Both groups contained 25 subjects. After training, both groups showed significant improvements in motor and daily functions measured by the MFT, MMT, and K-MBI compared to baseline. The two groups demonstrated similar training effects, except for the motor power of wrist flexion. Patients in the RCT group were more satisfied than those in the OCT group.

Conclusion: There were no significant differences in changes in most of the motor and daily functions between the two types of training. However, patients in the RCT group were more satisfied than those in the OCT group. Therefore, RCT could be a useful upper extremity rehabilitation training method.
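For readers who want to see the shape of such an analysis, here is a hedged sketch on synthetic scores (not the trial’s data; the specific tests are illustrative assumptions, since the abstract does not name them):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic MFT-like scores for 25 subjects per group, before and after training
rct_pre, rct_post = rng.normal(14, 4, 25), rng.normal(17, 4, 25)
oct_pre, oct_post = rng.normal(14, 4, 25), rng.normal(16.5, 4, 25)

# Within-group improvement (paired comparison of pre vs. post)
print("RCT pre vs post p =", stats.wilcoxon(rct_pre, rct_post).pvalue)
print("OCT pre vs post p =", stats.wilcoxon(oct_pre, oct_post).pvalue)

# Between-group comparison of change scores
print("RCT vs OCT change p =",
      stats.mannwhitneyu(rct_post - rct_pre, oct_post - oct_pre).pvalue)
```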

INTRODUCTION

Stroke is a central nervous system disease caused by cerebrovascular problems such as infarction or hemorrhage. Stroke may lead to impairment of various physical functions, including hemiplegia, language disorder, swallowing disorder or cognitive disorder, according to the location and degree of morbidity [1]. Among these, hemiplegia is a common symptom, occurring in 85% of stroke patients. In particular, upper extremity paralysis is more frequent and requires a longer recovery time than lower extremity paralysis [2,3]. To maintain the basic functions of ordinary life, the use of the upper extremities is essential; therefore, upper extremity paralysis commonly causes problems in performing the activities of daily living [2].

Robot-assisted rehabilitation treatment has recently been widely investigated as an effective neurorehabilitation approach that may augment the effects of physical therapy and facilitate motor recovery [4]. Robot-assisted rehabilitation treatments have been developed in recent decades to reduce the expenditure of therapists’ effort and time, to reproduce accurate repetitive motions and to interact with force feedback [5,6]. The most important advantage of using robot-assisted rehabilitation treatment is the ability to deliver high-dosage and high-intensity training [7].

In rehabilitation, patients may find such exercises monotonous and boring, and may lose motivation over time [8]. Upper extremity rehabilitation training using video games, such as Nintendo Wii and PlayStation EyeToy games, enhanced upper extremity functions and resulted in greater patient satisfaction than conventional rehabilitation treatment [9,10,11,12,13].

The objective of this study was to determine the effects of combining robot-assisted game training with conventional upper extremity rehabilitation training (RCT) on motor and daily functions in comparison to conventional upper extremity rehabilitation training (OCT) in stroke patients. This study was a randomized controlled trial and we evaluated motor power, upper extremity motor function, daily function and satisfaction. […]

Continue —> KoreaMed Synapse

Fig. 1. (A) Neuro-X, an upper extremity rehabilitation robot, consisting of a video monitor, a robot arm and a computer. (B) The patient performing robot-assisted game training with the upper extremity rehabilitation robot.


[Abstract] Robot-assisted mirroring exercise as a physical therapy for hemiparesis rehabilitation

Abstract:

The paper suggests a therapeutic device for hemiparesis that combines robot-assisted rehabilitation and mirror therapy. The robot, which consists of a motor, a position sensor, and a torque sensor, is applied not only to the paralyzed wrist but also to the unaffected wrist to induce symmetric movement between the joints. As the user rotates their healthy wrist in the direction of either flexion or extension, the motor on the damaged side rotates and reflects the motion of the normal side to the symmetric angular position. To verify the performance of the device, five stroke patients joined a clinical experiment practicing a 10-minute mirroring exercise. Subjects at Brunnstrom stage 3 showed relatively high repulsive torques toward their neutral wrist positions due to severe spasticity, with a maximum magnitude of 0.300 kgf·m, which was reduced to 0.161 kgf·m after the exercise. Subjects at stage 5 practiced active bilateral exercises using both wrists with a small repulsive torque of 0.052 kgf·m only at the extreme extension angle. The range of motion of the affected wrist increased as a result of the decrease in spasticity. The therapeutic device not only guided a voluntary exercise to loosen spasticity and increase the ROM of the affected wrist, but also helped distinguish patients at different Brunnstrom stages according to the size of the repulsive torque and the phase difference between the torque and the wrist position.
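The core control idea, reflecting the healthy wrist’s angle onto the paretic side, can be sketched as a simple loop. The sensor and motor functions below are hypothetical stand-ins, not the authors’ hardware interface:

```python
import time

def read_healthy_wrist_angle():
    """Stand-in for the position sensor on the unaffected wrist (hypothetical)."""
    return 10.0  # degrees of flexion, stubbed for illustration

def command_motor_angle(angle_deg):
    """Stand-in for the motor driving the paretic wrist (hypothetical)."""
    print(f"motor target: {angle_deg:+.1f} deg")

# Mirroring loop: drive the paretic wrist to the symmetric angular position
# of the healthy wrist, so flexion mirrors flexion and extension mirrors extension.
for _ in range(3):                 # a few cycles for illustration
    target = read_healthy_wrist_angle()
    command_motor_angle(target)
    time.sleep(0.01)               # roughly a 100 Hz control period
```

In the actual device, the torque sensor additionally measures the repulsive torque from the spastic wrist, which is what the authors use to grade severity.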

Source: Robot-assisted mirroring exercise as a physical therapy for hemiparesis rehabilitation – IEEE Conference Publication


[Abstract] EEG-guided robotic mirror therapy system for lower limb rehabilitation – IEEE Conference Publication

Abstract:

Lower extremity function recovery is one of the most important goals in stroke rehabilitation. Many paradigms and technologies have been introduced for lower limb rehabilitation over the past decades, but their outcomes indicate a need for a complementary approach. One attempt to accomplish better functional recovery is to combine bottom-up and top-down approaches by means of brain-computer interfaces (BCIs). In this study, a BCI-controlled robotic mirror therapy system is proposed for lower limb recovery following stroke. An experimental paradigm including four states is introduced to combine robotic training (bottom-up) and mirror therapy (top-down) approaches. A BCI system is presented to classify the electroencephalography (EEG) evidence. In addition, a probabilistic model is presented to assist patients in transitioning across the experiment states based on their intent. To demonstrate the feasibility of the system, both offline and online analyses were performed for five healthy subjects. The experimental results show a promising performance for the system, with an average accuracy of 94% in offline sessions and 75% in online sessions.
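As a rough illustration of what the EEG classification stage of such a system might look like, here is a generic band-power-plus-linear-classifier sketch on synthetic data; it is not the authors’ pipeline, and every parameter is an assumption:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
fs = 250                                   # sampling rate (Hz), assumed
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)

def features(trials):
    """Log-variance of mu/beta band-filtered EEG, one feature per channel."""
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(filtered.var(axis=-1))

# Synthetic data: 40 trials x 8 channels x 2 s, two classes (rest vs. intent)
X = rng.normal(0, 1, (40, 8, 2 * fs))
X[20:] *= 0.7                              # crude power drop for the "intent" class
y = np.repeat([0, 1], 20)

clf = LinearDiscriminantAnalysis().fit(features(X), y)
print("training accuracy:", clf.score(features(X), y))
```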

Source: EEG-guided robotic mirror therapy system for lower limb rehabilitation – IEEE Conference Publication


[Abstract] The Combined Effects of Adaptive Control and Virtual Reality on Robot-Assisted Fine Hand Motion Rehabilitation in Chronic Stroke Patients: A Case Study

Robot-assisted therapy is regarded as an effective and reliable method for the delivery of the highly repetitive training that is needed to trigger neuroplasticity following a stroke. However, the lack of fully adaptive assist-as-needed control of the robotic devices and the lack of an immersive virtual environment that can promote active participation during training are obstacles to achieving better training results with fewer training sessions. This study therefore focuses on these research gaps by combining these two key components into a rehabilitation system, with special attention to the rehabilitation of fine hand motion skills. The effectiveness of the proposed system is tested by conducting clinical trials on a chronic stroke patient and verified through clinical evaluation methods measuring key kinematic features such as active range of motion (ROM), finger strength, and velocity. By comparing pretraining and post-training results, the study demonstrates that the proposed method can further enhance the effectiveness of fine hand motion rehabilitation training by improving finger ROM, strength, and coordination.
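The assist-as-needed idea the abstract highlights can be stated in a few lines: raise the robot’s assistance when tracking error is large, and withdraw it when the patient performs well, so the patient stays actively engaged. A minimal sketch with illustrative parameters (the paper’s actual adaptive law is not given here):

```python
# Assist-as-needed sketch: gain grows with large tracking error and
# decays ("forgets") when the user tracks well. All constants are illustrative.
def update_assistance(gain, tracking_error, err_tol=0.05,
                      ramp=0.5, forgetting=0.9):
    if abs(tracking_error) > err_tol:
        gain += ramp * (abs(tracking_error) - err_tol)  # help a struggling user
    else:
        gain *= forgetting                              # withdraw help otherwise
    return min(max(gain, 0.0), 1.0)                     # keep gain in [0, 1]

gain = 0.5
for err in [0.20, 0.15, 0.04, 0.03, 0.02]:
    gain = update_assistance(gain, err)
    print(f"error={err:.2f} -> assistance gain={gain:.2f}")
```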

Source: The Combined Effects of Adaptive Control and Virtual Reality on Robot-Assisted Fine Hand Motion Rehabilitation in Chronic Stroke Patients: A Case Study


[ARTICLE] Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial – Full Text

Repeated use of brain-computer interfaces (BCIs) providing contingent sensory feedback of brain activity was recently proposed as a rehabilitation approach to restore motor function after stroke or spinal cord lesions. However, only a few clinical studies have investigated the feasibility and effectiveness of such an approach. Here we report on a placebo-controlled, multicenter clinical trial that investigated whether stroke survivors with severe upper limb (UL) paralysis benefit from 10 BCI training sessions each lasting up to 40 min. A total of 74 patients participated: median time since stroke was 8 months, 25 and 75% quartiles [3.0; 13.0]; median severity of UL paralysis was 4.5 points [0.0; 30.0] as measured by the Action Research Arm Test (ARAT) and 19.5 points [11.0; 40.0] as measured by the Fugl-Meyer Motor Assessment (FMMA). Patients in the BCI group (n = 55) performed motor imagery of opening their affected hand. Motor imagery-related brain electroencephalographic activity was translated into contingent hand exoskeleton-driven opening movements of the affected hand. In a control group (n = 19), hand exoskeleton-driven opening movements of the affected hand were independent of brain electroencephalographic activity. Evaluation of the UL clinical assessments indicated that both groups improved, but only the BCI group showed an improvement in the ARAT’s grasp score from 0 [0.0; 14.0] to 3.0 [0.0; 15.0] points (p < 0.01) and pinch score from 0.0 [0.0; 7.0] to 1.0 [0.0; 12.0] points (p < 0.01). Upon training completion, 21.8% and 36.4% of the patients in the BCI group improved their ARAT and FMMA scores, respectively. The corresponding numbers for the control group were 5.1% (ARAT) and 15.8% (FMMA). These results suggest that adding BCI control to exoskeleton-assisted physical therapy can improve post-stroke rehabilitation outcomes. Both the maximum and mean values of the percentage of successfully decoded imagery-related EEG activity were higher than chance level. A correlation between classification accuracy and improvement in upper extremity function was found. An improvement in motor function was found for patients with different durations, severities, and locations of the stroke.

Introduction

Motor imagery (Page et al., 2001), or mental practice, has attracted considerable interest as a potential neurorehabilitation technique for improving motor recovery following stroke (Jackson et al., 2001). According to the Guidelines for adult stroke rehabilitation and recovery, mental practice may prove beneficial as an adjunct to upper extremity rehabilitation services (Winstein et al., 2016). Several studies suggest that motor imagery can trigger neuroplasticity in ipsilesional motor cortical areas despite severe paralysis after stroke (Grosse-Wentrup et al., 2011; Shih et al., 2012; Mokienko et al., 2013b; Soekadar et al., 2015).

The effect of motor imagery on motor function and neuroplasticity has been demonstrated in numerous neurophysiological studies in healthy subjects. Motor imagery has been shown to activate the primary motor cortex (M1) and brain structures involved in planning and control of voluntary movements (Shih et al., 2012; Mokienko et al., 2013a,b; Frolov et al., 2014). For example, it was shown that motor imagery of fist clenching reduces the excitation threshold of motor evoked potentials (MEP) elicited by transcranial magnetic stimulation (TMS) delivered to M1 (Mokienko et al., 2013b).

As motor imagery results in specific modulations of brain electroencephalographic (EEG) signals, e.g., sensorimotor rhythms (SMR) (Pfurtscheller and Aranibar, 1979), it can be used to voluntarily control an external device, e.g., a robot or exoskeleton, via a brain-computer interface (BCI) (Nicolas-Alonso and Gomez-Gil, 2012). Such a system, allowing voluntary control of an exoskeleton that moves a paralyzed limb, can be used as an assistive device to restore lost function (Maciejasz et al., 2014). Besides receiving visual feedback, the user receives haptic and kinesthetic feedback which is contingent upon the imagination of a specific movement.
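A common concrete realization of this link is event-related desynchronization (ERD): motor imagery attenuates mu-band (8-13 Hz) SMR power relative to rest, and crossing a threshold can trigger the exoskeleton. A hedged sketch on synthetic data, with an illustrative band and threshold:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs = 250                                     # sampling rate (Hz), assumed

def mu_power(segment):
    f, pxx = welch(segment, fs=fs, nperseg=fs)
    return pxx[(f >= 8) & (f <= 13)].mean()  # mean mu-band power

baseline = rng.normal(0, 1.0, 2 * fs)        # rest EEG (synthetic)
imagery = rng.normal(0, 0.7, 2 * fs)         # attenuated activity during imagery (synthetic)

# ERD as the ratio of imagery-period power to resting power
erd = mu_power(imagery) / mu_power(baseline)
if erd < 0.8:                                # illustrative threshold
    print(f"ERD detected (ratio {erd:.2f}): drive the exoskeleton to open the hand")
else:
    print(f"no ERD (ratio {erd:.2f}): exoskeleton stays idle")
```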

Several BCI studies involving this type of haptic and kinesthetic feedback have demonstrated improvements in clinical parameters of post-stroke motor recovery (Ramos-Murguialday et al., 2013; Ang et al., 2014, 2015; Ono et al., 2014). The number of subjects with post-stroke upper extremity paresis included in these studies was, however, relatively low [from 12 (Ono et al., 2014) to 32 (Ramos-Murguialday et al., 2013) patients]. As BCI-driven external devices, a haptic knob (Ang et al., 2014), MIT-Manus (Ang et al., 2015), or a custom-made orthotic device (Ramos-Murguialday et al., 2013; Ono et al., 2014) were used. Furthermore, several other studies reported on using BCI-driven exoskeletons in patients with post-stroke hand paresis (Biryukova et al., 2016; Kotov et al., 2016; Mokienko et al., 2016), but these reports did not test for clinical efficacy and did not include a control group. While very promising, it remains unclear whether BCI training is an effective tool to facilitate motor recovery after stroke or other lesions of the central nervous system (CNS) (Teo and Chew, 2014).

Here we report a randomized and controlled multicenter study investigating whether 10 sessions of BCI-controlled hand-exoskeleton active training after subacute and chronic stroke yield a better clinical outcome than 10 sessions in which hand-exoskeleton-induced passive movements were not controlled by motor imagery-related modulations of brain activity. Besides assessing the effect of BCI training on clinical scores such as the ARAT and FMMA, we tested whether improvements in upper extremity function correlate with the patient’s ability to generate motor imagery-related modulations of EEG activity.[…]

Continue —> Frontiers | Post-stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial | Neuroscience

 

Figure 1. The subject flow diagram from recruitment through analysis (Consolidated Standards of Reporting Trials flow diagram).


[VIDEO] Pablo Product Film – YouTube

 

Published on Jul 18, 2017

The PABLO is the latest in a long line of clinically tried and tested robot- and computer-assisted therapy devices for arms and hands. The new design and the specially developed tyroS software make the PABLO more flexible and offer an expanded spectrum of therapy options.

 


[Editorial] Robotics in Biomedical and Healthcare Engineering – Journal of Healthcare Engineering

The rapid progress of robotic techniques provides new opportunities for biomedical and healthcare engineering. For instance, a micro-nano robot allows us to study fundamental problems at the cellular scale owing to its precise positioning and manipulation ability; the medical robot paves a new way for minimally invasive and highly efficient clinical operations; and the rehabilitation robot is able to improve the rehabilitative efficacy of patients. This special issue aims at exhibiting the latest research achievements, findings, and ideas in the field of robotics in biomedical and healthcare engineering, especially focusing on upper/lower limb rehabilitation, walking assistive robots, telerobotic surgery, and radiosurgery.

Currently, there is an increasing population of patients suffering from limb motor dysfunction, which can be caused by nerve injuries associated with stroke, traumatic brain injury, or multiple sclerosis. Past studies have demonstrated that highly repetitive movement training can result in improved recovery. The robot-assisted technique is a novel and rapidly expanding technology in upper/lower limb rehabilitation that can enhance the recovery process and facilitate the restoration of physical function by delivering high-dose and high-intensity training. This special issue covers several interesting papers addressing these challenges. X. Tu and coworkers introduced an upper limb rehabilitation robot powered by pneumatic artificial muscles which cooperates with functional electrical stimulation arrays to realize active reach-to-grasp training for stroke patients. The dynamic models of a pneumatic muscle and functional electrical stimulation-induced muscle are built for reaching training. By using surface electromyography, the subject’s active intent can be identified, and grasping and releasing behaviors can be realized by functional electrical stimulation array electrodes. C. Guo and coworkers proposed an impedance-based iterative learning control method to analyze the squatting training of stroke patients in the iterative domain and the time domain: the patient’s training trajectory is corrected by integrating the iterative learning control scheme with the value of impedance, and the method gradually improves trajectory-tracking performance by learning from past tracking information and obtaining the specific training conditions of different individuals (a minimal sketch of the update law appears after this paragraph). The paper demonstrated an effective control methodology for dealing with repeated tracking control problems or periodic disturbance rejection problems. Apart from these works, J. Li and coworkers designed an open-structured treadmill gait trainer for lower limb rehabilitation, and T. Sun and coworkers proposed a method for detecting the motion of human lower limbs, including all degrees of freedom, via inertial sensors, which permits analyzing motion ability according to rehabilitation needs.
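A minimal sketch of the iterative learning control update described above, with a toy first-order plant standing in for the patient-robot dynamics (the gain, dynamics, and trajectory are illustrative, not taken from the paper):

```python
import numpy as np

T = 100                                   # samples per squat trial
ref = np.sin(np.linspace(0, np.pi, T))    # desired trajectory
u = np.zeros(T)                           # feedforward command, trial 0
L_gain = 0.6                              # learning gain

def plant(u):
    """Toy lagged response standing in for the coupled patient-robot system."""
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = 0.8 * y[t - 1] + 0.2 * u[t]
    return y

# ILC update across trials: u_{k+1} = u_k + L * e_k
for k in range(10):
    e = ref - plant(u)
    u = u + L_gain * e
print("final RMS tracking error:", np.sqrt(np.mean((ref - plant(u)) ** 2)))
```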

Other biomedical and healthcare robots included in this special issue cover a range of interesting topics, such as walking assistive robots, telerobotic surgery, and radiosurgery. To improve the walking ability of the elderly, the walker-type rehabilitation robot has become a popular research topic over the last decade. C. Tao and coworkers proposed a hierarchical shared control method for a walking-aid robot combining human motion intention recognition with an obstacle emergency-avoidance method based on the artificial potential field. In their implementation, the human motion intention is obtained from the interaction force measurements of a sensory system composed of force sensing resistors and a torque sensor, while a forward-facing laser range finder detects obstacles and guides the operator based on the repulsion force calculated by the artificial potential field (a sketch of this repulsion computation follows this paragraph). The robot realizes obstacle avoidance while partially preserving the operator’s original walking intention. X. Li and coworkers demonstrated a general framework for robot-assisted surgical simulators for a more robust and resilient robotic surgery. They created a hardware-in-the-loop simulator platform and integrated the simulator with a physics engine and a state-of-the-art path planning algorithm to help surgeons acquire an optimal sense of manipulating the robot’s instrument arm; eventually, they achieved autonomous motion of the surgical robot. To cope with the workspace issue in applying the Linac system during radiosurgery, a specialized robotic system was presented by Y. Noh et al., and the design and implementation of the robotic system were elaborated. All of these works showed comparative advantages over classical approaches and hold great potential for providing insights into the practical and systematic design of robots that serve broad applications in biomedical and healthcare engineering.
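For the walking-aid controller, the artificial-potential-field repulsion has a standard closed form; the sketch below blends a hypothetical user-intent force with the repulsion, and all gains and distances are assumptions rather than the paper’s values:

```python
import numpy as np

def repulsive_force(robot_pos, obstacle_pos, d0=1.0, k=0.5):
    """Classic potential-field repulsion: zero beyond the influence
    distance d0, growing sharply as the obstacle gets close."""
    diff = robot_pos - obstacle_pos
    d = np.linalg.norm(diff)
    if d >= d0:
        return np.zeros_like(diff)
    return k * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

user_intent = np.array([0.3, 0.0])                        # from the handle force sensors
f_rep = repulsive_force(np.array([0.0, 0.0]), np.array([0.6, 0.1]))
command = user_intent + f_rep                             # intent plus avoidance
print("commanded direction:", command)
```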

The objectives of the special issue were reached in terms of advancing the state of the art of robotic techniques and addressing challenging problems in biomedical and healthcare engineering. Several critical problems in these areas were addressed, and most of the proposed contributions showed very promising results that outperform existing studies. Some of the proposed approaches were also validated from patients’ perspectives, which shows the applicability of these techniques in realistic environments.

Acknowledgments

We would like to express our thanks to all the authors who submitted their work to this special issue and to all the reviewers who helped us ensure its quality.

Chengzhi Hu
Qing Shi
Lianqing Liu
Uche Wejinya
Yasuhisa Hasegawa
Yajing Shen


[Abstract+References] Impact of commercial sensors in human computer interaction: a review

Abstract

Nowadays, the communication gap between humans and computers can be reduced thanks to the multimodal sensors available on the market. It is therefore important to know the specifications of these sensors and how they are being used in order to create human-computer interfaces that tackle complex tasks. The purpose of this paper is to review recent research regarding the up-to-date application areas of the following sensors:

(1) Emotiv sensor, which identifies emotions, facial expressions, thoughts, and head movements from users through electroencephalography signals,

(2) Leap motion controller, which recognizes hand and arm movements via vision techniques,

(3) Myo armband, which identifies hand and arm movements using electromyography signals and inertial sensors, and

(4) Oculus rift, which provides immersion into virtual reality to users.

The application areas discussed in this manuscript go from assistive technology to virtual tours. Finally, a brief discussion regarding advantages and shortcomings of each sensor is presented.

References

  1. Abreu JG, Teixeira JM, Figueiredo LS, Teichrieb V (2016) Evaluating sign language recognition using the Myo armband. In: Virtual and augmented reality (SVR), 2016 XVIII symposium on, IEEE, pp 64–70
  2. Bassily D, Georgoulas C, Guettler J, Linner T, Bock T (2014) Intuitive and adaptive robotic arm manipulation using the Leap motion controller. In: ISR/Robotik 2014; 41st international symposium on robotics, VDE, pp 1–7
  3. Bernardos AM, Sánchez JM, Portillo JI, Wang X, Besada JA, Casar JR (2016) Design and deployment of a contactless hand-shape identification system for smart spaces. J Ambient Intell Humaniz Comput 7(3):357–370
  4. Blaha J, Gupta M (2014) Diplopia: a virtual reality game designed to help amblyopics. In: Virtual reality (VR), 2014 IEEE, IEEE, pp 163–164
  5. Boschmann A, Dosen S, Werner A, Raies A, Farina D (2016) A novel immersive augmented reality system for prosthesis training and assessment. In: Biomedical and health informatics (BHI), 2016 IEEE-EMBS international conference on, IEEE, pp 280–283
  6. Brennan CP, McCullagh PJ, Galway L, Lightbody G (2015) Promoting autonomy in a smart home environment with a smarter interface. In: Engineering in medicine and biology society (EMBC), 2015 37th annual international conference of the IEEE, IEEE, pp 5032–5035
  7. Cacace J, Finzi A, Lippiello V, Furci M, Mimmo N, Marconi L (2016) A control architecture for multiple drones operated via multimodal interaction in search & rescue mission. In: Safety, security, and rescue robotics (SSRR), 2016 IEEE international symposium on, IEEE, pp 233–239
  8. Carrino F, Tscherrig J, Mugellini E, Khaled OA, Ingold R (2011) Head-computer interface: a multimodal approach to navigate through real and virtual worlds. In: International conference on human-computer interaction, Springer, pp 222–230
  9. Charles D, Pedlow K, McDonough S, Shek K, Charles T (2014) Close range depth sensing cameras for virtual reality based hand rehabilitation. J Assist Technol 8(3):138–149
  10. Chuan CH, Regina E, Guardino C (2014) American sign language recognition using Leap motion sensor. In: Machine learning and applications (ICMLA), 2014 13th international conference on, IEEE, pp 541–544
  11. Ciolan IM, Buraga SC, Dafinoiu I (2016) Oculus rift 3D interaction and nicotine craving: results from a pilot study. In: ROCHI–international conference on human-computer interaction, p 58
  12. Da Gama A, Fallavollita P, Teichrieb V, Navab N (2015) Motor rehabilitation using Kinect: a systematic review. Games Health J 4(2):123–135
  13. dos Reis Alves SF, Uribe-Quevedo AJ, da Silva IN, Ferasoli Filho H (2014) Pomodoro, a mobile robot platform for hand motion exercising. In: Biomedical robotics and biomechatronics, 2014 5th IEEE RAS & EMBS international conference on, IEEE, pp 970–974
  14. Duvinage M, Castermans T, Petieau M, Hoellinger T, Cheron G, Dutoit T (2013) Performance of the Emotiv Epoc headset for P300-based applications. Biomed Eng Online 12(1):56
  15. Farahani N, Post R, Duboy J, Ahmed I, Kolowitz BJ, Krinchai T, Monaco SE, Fine JL, Hartman DJ, Pantanowitz L (2016) Exploring virtual reality technology and the Oculus rift for the examination of digital pathology slides. J Pathol Inform 7
  16. Fiałek S, Liarokapis F (2016) Comparing two commercial brain computer interfaces for serious games and virtual environments. In: Karpouzis K, Yannakakis GN (eds) Emotion in games, Springer, Switzerland, pp 103–117
  17. Funasaka M, Ishikawa Y, Takata M, Joe K (2015) Sign language recognition using Leap motion controller. In: Proceedings of the international conference on parallel and distributed processing techniques and applications (PDPTA), the steering committee of the world congress in computer science, computer engineering and applied computing (WorldComp), p 263
  18. Gándara CV, Bauza CG (2015) Intellihome: a framework for the development of ambient assisted living applications based in low-cost technology. In: Proceedings of the Latin American conference on human computer interaction, ACM, p 18
  19. Gomez-Gil J, San-Jose-Gonzalez I, Nicolas-Alonso LF, Alonso-Garcia S (2011) Steering a tractor by means of an EMG-based human-machine interface. Sensors 11(7):7110–7126
  20. Gonzalez-Sanchez J, Chavez-Echeagaray ME, Atkinson R, Burleson W (2011) Abe: an agent-based software architecture for a multimodal emotion recognition framework. In: Software architecture (WICSA), 2011 9th working IEEE/IFIP conference on, IEEE, pp 187–193
  21. Grubišić I, Skala Kavanagh H, Grazio S (2015) Novel approaches in hand rehabilitation. Period Biol 117(1):139–145
  22. Guna J, Jakus G, Pogačnik M, Tomažič S, Sodnik J (2014) An analysis of the precision and reliability of the Leap motion sensor and its suitability for static and dynamic tracking. Sensors 14(2):3702–3720
  23. Gunasekera WL, Bendall J (2005) Rehabilitation of neurologically injured patients. In: Moore AJ, Newell DW (eds) Neurosurgery, Springer, London, pp 407–421
  24. Güttler J, Shah R, Georgoulas C, Bock T (2015) Unobtrusive tremor detection and measurement via human-machine interaction. Proced Comput Sci 63:467–474
  25. Han J, Shao L, Xu D, Shotton J (2013) Enhanced computer vision with Microsoft Kinect sensor: a review. IEEE Trans Cybern 43(5):1318–1334
  26. Hettig J, Mewes A, Riabikin O, Skalej M, Preim B, Hansen C (2015) Exploration of 3D medical image data for interventional radiology using myoelectric gesture control. In: Proceedings of the eurographics workshop on visual computing for biology and medicine, pp 177–185
  27. Ijjada MS, Thapliyal H, Caban-Holt A, Arabnia HR (2015) Evaluation of wearable head set devices in older adult populations for research. In: Computational science and computational intelligence (CSCI), 2015 international conference on, IEEE, pp 810–811
  28. Jurcak V, Tsuzuki D, Dan I (2007) 10/20, 10/10, and 10/5 systems revisited: their validity as relative head-surface-based positioning systems. Neuroimage 34(4):1600–1611
  29. Kefer K, Holzmann C, Findling RD (2016) Comparing the placement of two arm-worn devices for recognizing dynamic hand gestures. In: Proceedings of the 14th international conference on advances in mobile computing and multi media, ACM, pp 99–104
  30. Khademi M, Mousavi Hondori H, McKenzie A, Dodakian L, Lopes CV, Cramer SC (2014) Free-hand interaction with Leap motion controller for stroke rehabilitation. In: Proceedings of the extended abstracts of the 32nd annual ACM conference on human factors in computing systems, ACM, pp 1663–1668
  31. Khan FR, Ong HF, Bahar N (2016) A sign language to text converter using Leap motion. Int J Adv Sci Eng Inf Technol 6(6):1089–1095
  32. Kim SY, Kim YY (2012) Mirror therapy for phantom limb pain. Korean J Pain 25(4):272–274
  33. Kiorpes L, McKee SP (1999) Neural mechanisms underlying amblyopia. Curr Opin Neurobiol 9(4):480–486
  34. Kleven NF, Prasolova-Førland E, Fominykh M, Hansen A, Rasmussen G, Sagberg LM, Lindseth F (2014) Training nurses and educating the public using a virtual operating room with Oculus rift. In: Virtual systems & multimedia (VSMM), 2014 international conference on, IEEE, pp 206–213
  35. Kutafina E, Laukamp D, Bettermann R, Schroeder U, Jonas SM (2016) Wearable sensors for elearning of manual tasks: using forearm EMG in hand hygiene training. Sensors 16(8):1221
  36. Li C, Rusak Z, Horvath I, Kooijman A, Ji L (2016) Implementation and validation of engagement monitoring in an engagement enhancing rehabilitation system. IEEE Trans Neural Syst Rehabil Eng 25(6):726–738
  37. Li C, Yang C, Wan J, Annamalai AS, Cangelosi A (2017) Teleoperation control of Baxter robot using Kalman filter-based sensor fusion. Syst Sci Control Eng 5(1):156–167
  38. Liarokapis F, Debattista K, Vourvopoulos A, Petridis P, Ene A (2014) Comparing interaction techniques for serious games through brain-computer interfaces: a user perception evaluation study. Entertain Comput 5(4):391–399
  39. Lupu RG, Ungureanu F, Stan A (2016) A virtual reality system for post stroke recovery. In: System theory, control and computing (ICSTCC), 2016 20th international conference on, IEEE, pp 300–305
  40. Marin G, Dominio F, Zanuttigh P (2014) Hand gesture recognition with Leap motion and Kinect devices. In: Image processing (ICIP), 2014 IEEE international conference on, IEEE, pp 1565–1569
  41. McCullough M, Xu H, Michelson J, Jackoski M, Pease W, Cobb W, Kalescky W, Ladd J, Williams B (2015) Myo arm: swinging to explore a VE. In: Proceedings of the ACM SIGGRAPH symposium on applied perception, ACM, pp 107–113
  42. Mewes A, Saalfeld P, Riabikin O, Skalej M, Hansen C (2016) A gesture-controlled projection display for CT-guided interventions. Int J Comput Assist Radiol Surg 11(1):157–164
  43. Mousavi Hondori H, Khademi M (2014) A review on technical and clinical impact of Microsoft Kinect on physical therapy and rehabilitation. J Med Eng 2014. doi:10.1155/2014/846514
  44. Bizzotto N, Costanzo A, Bizzotto L (2014) Leap motion gesture control with OsiriX in the operating room to control imaging: first experiences during live surgery. Surg Innov 1:2
  45. Nugraha BT, Sarno R, Asfani DA, Igasaki T, Munawar MN (2016) Classification of driver fatigue state based on EEG using Emotiv EPOC+. J Theor Appl Inf Technol 86(3):347
  46. Oskoei MA, Hu H (2007) Myoelectric control systems: a survey. Biomed Sign Process Control 2(4):275–294
  47. Palmisano S, Mursic R, Kim J (2017) Vection and cybersickness generated by head-and-display motion in the Oculus rift. Displays 46:1–8
  48. Phelan I, Arden M, Garcia C, Roast C (2015) Exploring virtual reality and prosthetic training. In: Virtual reality (VR), 2015 IEEE, IEEE, pp 353–354
  49. Powell C, Hatt SR (2009) Vision screening for amblyopia in childhood. Cochrane Database Syst Rev. doi:10.1002/14651858.CD005020.pub3
  50. Qamar A, Rahman MA, Basalamah S (2014) Adding inverse kinematics for providing live feedback in a serious game-based rehabilitation system. In: Intelligent systems, modelling and simulation (ISMS), 2014 5th international conference on, IEEE, pp 215–220
  51. Qamar AM, Khan AR, Husain SO, Rahman MA, Baslamah S (2015) A multi-sensory gesture-based occupational therapy environment for controlling home appliances. In: Proceedings of the 5th ACM on international conference on multimedia retrieval, ACM, pp 671–674
  52. Quesada L, López G, Guerrero L (2017) Automatic recognition of the American sign language fingerspelling alphabet to assist people living with speech or hearing impairments. J Ambient Intell Humaniz Comput 8(4):625–635
  53. Ramachandran VS, Rogers-Ramachandran D (2008) Sensations referred to a patient’s phantom arm from another subject’s intact arm: perceptual correlates of mirror neurons. Med Hypotheses 70(6):1233–1234
  54. Ranky G, Adamovich S (2010) Analysis of a commercial EEG device for the control of a robot arm. In: Bioengineering conference, proceedings of the 2010 IEEE 36th annual northeast, IEEE, pp 1–2
  55. Rautaray SS, Agrawal A (2015) Vision based hand gesture recognition for human computer interaction: a survey. Artif Intell Rev 43(1):1–54
  56. Rechy-Ramirez EJ, Hu H (2014) A flexible bio-signal based HMI for hands-free control of an electric powered wheelchair. Int J Artif Life Res (IJALR) 4(1):59–76
  57. Simoens P, De Coninck E, Vervust T, Van Wijmeersch JF, Ingelbinck T, Verbelen T, Op de Beeck M, Dhoedt B (2014) Vision: smart home control with head-mounted sensors for vision and brain activity. In: Proceedings of the fifth international workshop on mobile cloud computing & services, ACM, pp 29–33
  58. Snow PW, Loureiro RC, Comley R (2014) Design of a robotic sensorimotor system for phantom limb pain rehabilitation. In: Biomedical robotics and biomechatronics, 2014 5th IEEE RAS & EMBS international conference on, IEEE, pp 120–125
  59. Sonntag D, Orlosky J, Weber M, Gu Y, Sosnovsky S, Toyama T, Toosi EN (2015) Cognitive monitoring via eye tracking in virtual reality pedestrian environments. In: Proceedings of the 4th international symposium on pervasive displays, ACM, pp 269–270
  60. Subha DP, Joseph PK, Acharya R, Lim CM (2010) EEG signal analysis: a survey. J Med Syst 34(2):195–212
  61. Toutountzi T, Collander C, Phan S, Makedon F (2016) EyeOn: an activity recognition system using Myo armband. In: Proceedings of the 9th ACM international conference on PErvasive technologies related to assistive environments, ACM, p 82
  62. Verkijika SF, De Wet L (2015) Using a brain-computer interface (BCI) in reducing math anxiety: evidence from South Africa. Comput Educ 81:113–122
  63. Vikram S, Li L, Russell S (2013) Handwriting and gestures in the air, recognizing on the fly. Proc CHI 13:1179–1184
  64. Villagrasa S, Fonseca D, Durán J (2014) Teaching case: applying gamification techniques and virtual reality for learning building engineering 3D arts. In: Proceedings of the second international conference on technological ecosystems for enhancing multiculturality, ACM, pp 171–177
  65. Wake N, Sano Y, Oya R, Sumitani M, Kumagaya Si, Kuniyoshi Y (2015) Multimodal virtual reality platform for the rehabilitation of phantom limb pain. In: Neural engineering (NER), 2015 7th international IEEE/EMBS conference on, IEEE, pp 787–790
  66. Webel S, Olbrich M, Franke T, Keil J (2013) Immersive experience of current and ancient reconstructed cultural attractions. In: Digital heritage international congress (DigitalHeritage), 2013, IEEE, vol 1, pp 395–398
  67. Webster D, Celik O (2014) Systematic review of Kinect applications in elderly care and stroke rehabilitation. J Neuroeng Rehabil 11(1):108
  68. Weichert F, Bachmann D, Rudak B, Fisseler D (2013) Analysis of the accuracy and robustness of the Leap motion controller. Sensors 13(5):6380–6393
  69. Weisz J, Shababo B, Dong L, Allen PK (2013) Grasping with your face. In: Desai JP, Dudek G, Khatib O, Kumar V (eds) Experimental robotics, Springer, Heidelberg, pp 435–448
  70. Yu N, Xu C, Wang K, Yang Z, Liu J (2015) Gesture-based telemanipulation of a humanoid robot for home service tasks. In: Cyber technology in automation, control, and intelligent systems (CYBER), 2015 IEEE international conference on, IEEE, pp 1923–1927
  71. Zecca M, Micera S, Carrozza MC, Dario P (2002) Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit Rev Biomed Eng 30:4–6
  72. Zyda M (2005) From visual simulation to virtual reality to games. Computer 38(9):25–32

Source: Impact of commercial sensors in human computer interaction: a review | SpringerLink


[ARTICLE] A novel generation of wearable supernumerary robotic fingers to compensate the missing grasping abilities in hemiparetic upper limb – Full Text PDF

Abstract

This contribution focuses on the design, analysis, fabrication, experimental characterization and evaluation of a family of prototypes of robotic extra fingers that can be used as grasp compensatory devices for the hemiparetic upper limb.

The devices are the result of experimental sessions with chronic stroke patients and consultations with clinical experts. All the devices share a common working principle: they act in opposition to the paretic hand/wrist so as to restrain the motion of an object.

Robotic supernumerary fingers can be used by chronic stroke patients to compensate for grasping in several Activities of Daily Living (ADL) with a particular focus on bimanual tasks.

The devices are designed to be extremely portable and wearable. They can be wrapped as bracelets when not in use, to further reduce encumbrance. The motion of the robotic devices can be controlled using an electromyography (EMG)-based interface embedded in a cap. The interface allows the user to control the device’s motion by contracting the frontalis muscle. The performance characteristics of the devices have been measured through an experimental setup, and shape adaptability has been confirmed by grasping various objects with different shapes. We tested the devices through qualitative experiments based on ADL involving a group of chronic stroke patients in collaboration with the Rehabilitation Center of the Azienda Ospedaliera Universitaria Senese.
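A simple way such a cap-mounted interface can work is to threshold the rectified, smoothed EMG envelope, as sketched below on synthetic frontalis EMG (the threshold, smoothing window, and sampling rate are illustrative, not the authors’ values):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000                                    # EMG sampling rate (Hz), assumed

# Synthetic frontalis EMG: rest, then a one-second contraction burst
emg = rng.normal(0, 0.05, 3 * fs)
emg[fs:2 * fs] += rng.normal(0, 0.4, fs)

# Envelope via rectification + moving average, then a simple threshold
window = fs // 10                            # 100 ms smoothing window
envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")
active = envelope > 0.1                      # illustrative activation threshold

# A rising edge would command the extra finger to close; a falling edge to release
onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
print("contraction onset(s) at t =", onsets / fs, "s")
```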

The prototypes successfully enabled the patients to complete various bimanual tasks. Results show that the proposed robotic devices improve the autonomy of patients in ADL and allow them to complete tasks which were previously impossible to perform.

Full Text PDF
