Posts Tagged Medical Robots

[ARTICLE] Hand exoskeleton for rehabilitation therapies with integrated optical force sensor – Full Text

This article presents the design of a hand exoskeleton characterized by its modularity and by the possibility of integrating a force sensor into its frame. The modularity is achieved by dividing the exoskeleton into separate units, each one driving a finger or a pair of fingers. These units, or “finger modules,” have a single degree of freedom and may be easily attached to or removed from the robot frame and the human fingers by snap-in fixations. As for the force sensing capability, the device relies on a novel force sensor that uses optical elements to amplify and measure small elastic deformations in the robot structure. This sensor can be fully integrated as a structural element of the finger module. The proposed technology has been validated in two experimental sessions. A first study was performed in a clinical environment to check whether the hand exoskeleton (without the integrated force sensor) can successfully move an impaired hand in a “Mirror Therapy” setting. A second study was carried out with healthy subjects to check the technical feasibility of using the integrated force sensor as a human–machine interface.

A wide variety of robotic devices that can actuate or assist the movements of the human hand can be found in the current scientific literature.1 Depending on the application, a hand exoskeleton may require different features. For example, a rehabilitation-oriented exoskeleton needs to be fairly backdrivable and allow a wide range of movement, so that it is flexible enough to perform different rehabilitation exercises.2 In contrast, an assistance exoskeleton must be stiff enough to ensure a firm grasp of objects present in activities of daily living, and can sacrifice flexibility of movement in favor of predefined grasping patterns.

These different requirements result in diverse force transmission architectures:

  • Some devices use linkages to transmit the force from the actuator to the human joints.3–5 This is a stiff architecture that requires proper alignment between the kinematic centers of the linkage and the human joints, but allows good control of the hand pose. Thanks to the flexibility of the design, with correct sizing these mechanisms can achieve complex movement patterns with simple actuators.
  • Another widespread architecture is the cable-driven glove.6–8 These are more flexible and simpler alternatives that rely on the human joints themselves to direct the movement, so they are less prone to uncomfortable poses. On the other hand, they require pulleys to achieve high forces and are harder to control in intermediate positions. Additionally, this kind of exoskeleton needs a pair of cables in antagonist configuration in order to assist both extension and flexion movements.
  • Finally, some devices use deformable actuators, such as pneumatic muscles or shape-memory alloys, attached directly to the hand by means of a glove.9,10 They result in very light and simple devices, but the actuators are not placed in the most advantageous position to generate large forces.

Regarding the linkage-based exoskeletons, especially those that rely on electric actuators, a measurement of the interaction force between user and device can be an interesting feature to ease control tasks and improve safety. Different sensor technologies have been implemented in such devices, including torque sensors,11 strain gauges,12 flexion sensors,13 and miniature load cells.14 These sensors may be effective in their respective applications but present some shortcomings for integration in exoskeletons. In particular, torque sensors measure loads in the motor shaft, so in over-constrained mechanisms they might not capture all the interaction forces. Strain gauges are difficult to fix in the proper place, and shorter ones may not perform correctly, so to be usable they require geometries comparable in size to human phalanxes. Other miniature sensors, such as load cells or force-sensitive resistors, normally measure force in only one direction (compression or tension), and those that can measure both directions are too big for the scale of the human hand.

Research background and objectives

In our previous paper,15 we studied the feasibility of using multimodal systems to assist post-stroke patients during the execution of rehabilitation therapies with real objects. In this context, we evaluated the suitability of a hand exoskeleton device,16 such as the aforementioned ones, for assisting an impaired person in grasping objects present in activities of daily living. This device has undergone substantial improvements with respect to the previous design so that it can interact safely with disabled users.

In that previous experimentation, the electromyographic (EMG) signal of the forearm muscles was proposed as a method to estimate the user’s intention and consequently trigger the open/close movement of the hand exoskeleton. This method proved to be effective, but it can be used only with users who have a coherent and relatively strong EMG signal, which might not be the case for most patients.17 These results point to a need for additional technologies that can detect the movement intention of the subject in order to cope with a wider range of user profiles.
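A minimal sketch of this kind of EMG-based trigger (all function names, window sizes, and threshold values below are hypothetical illustrations, not the authors' implementation): the raw signal is rectified, smoothed into an envelope, and compared against an activation threshold to fire the open/close command.

```python
def emg_envelope(samples, window=5):
    """Moving-average envelope of the rectified EMG signal."""
    rectified = [abs(s) for s in samples]
    return [
        sum(rectified[max(0, i - window + 1): i + 1])
        / len(rectified[max(0, i - window + 1): i + 1])
        for i in range(len(rectified))
    ]

def detect_trigger(samples, threshold=0.5, window=5):
    """Return the first sample index where the envelope crosses the
    activation threshold, or None if it never does (e.g. a weak or
    incoherent EMG signal, as discussed above)."""
    for i, value in enumerate(emg_envelope(samples, window)):
        if value >= threshold:
            return i
    return None
```

With a weak signal (`[0.05] * 20`) no trigger fires, which illustrates why users with low EMG amplitude could not use this method and why additional intention-detection technologies are needed.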

Although the presented device will also be used in assistive contexts, the objective of the research presented here is to show whether the proposed improvements of the hand exoskeleton, including a miniature optical force sensor, allow its use in a real rehabilitation environment. Special attention will be given to the development of a force sensing method to measure the human–robot interaction forces and thereby estimate the user’s intention in rehabilitation scenarios.

Hand exoskeleton

Among the different existing architectures, we have decided to implement an exoskeleton based on the linkage approach, since we consider it the most flexible solution for achieving a good compromise between the requirements of both rehabilitation and assistance scenarios. The motion transmission is based on a bar mechanism that allows the motion of the phalanxes to be coupled, so a natural hand movement is achievable using only one active degree of freedom per finger. Additionally, bars can transmit both tensile and compressive loads, so the same mechanism is able to perform both extension (the most demanding movement in rehabilitation) and flexion (mandatory for assistance) of the fingers.

In detail, the designed exoskeleton is composed of three identical finger modules that drive the index finger, the middle finger, and the pair formed by the ring and little fingers. Each finger module has a single degree of freedom actively driven by a linear actuator. Unlike many of the referenced exoskeletons, due to the inherent uncertainty introduced by the human–exoskeleton interface (modeled as a slide along the phalanx longitudinal axis in Figure 1), we have decided not to rely on the human finger as the element that closes the kinematic chain. Instead, we have adopted an approach similar to that of Ho et al.5 By adding a pair of circular guides whose centers are coincident with the joints of a reference finger, the mechanism becomes kinematically determinate without needing the human finger. Ho’s device uses slots with flange bearings to implement the guides; this may be effective, but it requires precision machining and miniature elements to achieve a compact solution. In contrast, we have designed a double-edged guide that slides between four V-shaped bearings (Figure 2). These elements optimize the required space and may be easily manufactured by prototyping technologies or plastic molding. To make up for the additional constraints, we have decided to actuate only the medial and proximal phalanxes.

 


Figure 1. Kinematics scheme of the finger linkage attached to the human finger. Metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints have been modeled as revolute joints. Additionally, the interface between the module and the phalanxes has been modeled by means of slide.
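As a hedged illustration of this joint model (the phalanx lengths and joint angles below are invented example values, not measurements from the article), the fingertip position of a planar finger with three revolute joints (MCP, PIP, DIP) follows from standard serial-chain forward kinematics:

```python
import math

def fingertip_position(angles, lengths):
    """Planar forward kinematics of a three-revolute-joint finger.
    `angles` are the MCP/PIP/DIP joint flexions in radians and
    `lengths` the proximal/medial/distal phalanx lengths in meters."""
    x = y = 0.0
    cumulative = 0.0  # absolute orientation of the current phalanx
    for theta, link in zip(angles, lengths):
        cumulative += theta
        x += link * math.cos(cumulative)
        y += link * math.sin(cumulative)
    return x, y
```

For a fully extended finger (all angles zero), the fingertip lies on the x-axis at the sum of the phalanx lengths; flexing the MCP joint alone rotates the whole chain about the knuckle.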

 


Figure 2. Left: Finger module represented in its extreme positions. Right: Detailed view of the designed circular guide to minimize mechanical clearances with minimum friction.

 

Continue —>  Hand exoskeleton for rehabilitation therapies with integrated optical force sensor – Jorge A Díez, Andrea Blanco, José María Catalán, Francisco J Badesa, Luis Daniel Lledó, Nicolas García-Aracil, 2018


[ARTICLE] Multimodal robotic system for upper-limb rehabilitation in physical environment – Full Text HTML

Abstract

This article investigates the feasibility of using a multimodal robotic system for upper-limb neurorehabilitation therapies in physical environments, interacting with real objects. This system consists of an end-effector upper-limb rehabilitation robot, a hand exoskeleton, a gaze tracking system, an object tracking system, and electromyographic measuring units. For this purpose, the system architecture is described, explaining the detailed functions of each subsystem as well as the interactions among them. Finally, an experimental scenario is designed to test the system with healthy subjects in order to check whether the system is suitable for future experiments with patients.

Introduction

The use of robotic systems in neurorehabilitation therapies may be justified by their potential impact on better treatment and motor learning.1 For this reason, in recent years, a wide variety of robotic devices for upper-limb neurorehabilitation have been developed by research groups around the world.2–11

In conjunction with these robotic devices, a wide range of robot-oriented rehabilitation interfaces and environments have been proposed. Many of the current devices use virtual reality systems to set up the rehabilitation context,12–17 and just a few examples use physical environments.18,19 It should be pointed out that all these examples, except Badesa et al.’s14 work, use robotic exoskeletons.

Virtual reality systems are especially suitable for the early stages of the disease,20 due to the flexibility they offer when designing tasks and feedback stimuli, and the safety they provide thanks to the absence of interaction with physical objects that could lead to injuries. However, obtaining a realistic interaction requires haptic devices,21–24 which result in expensive and complex systems. In contrast, physical environments can be a good, inexpensive alternative for performing more complex, functional rehabilitation tasks in later stages of the disease, when patients have recovered some motor control of their upper limb.

The objective of this article is to check whether an end-effector rehabilitation robot25 can be used to develop a fully functional multimodal rehabilitation system in physical environments. In contrast to Frisoli et al.’s19 work, the use of an end-effector robot instead of an exoskeleton is expected to result in a considerable reduction in setup time as well as an increase in the user’s comfort. Additionally, the brain–computer interface (BCI) is replaced by electromyography, which does not require previous training, reducing the user’s mental fatigue26 and saving additional time.

In this regard, the experimentation will focus on testing whether the mechanical system can be controlled with enough precision and safety to interact with objects and successfully perform a simple occupational therapy activity, so that further research along this path can be done.

Multimodal architecture

The starting point is an already designed upper-limb neurorehabilitation robot, which was conceived to deliver therapies in virtual reality environments during the early stages after stroke.

In order to achieve the stated objectives, a multimodal architecture has been defined so that users can exploit a combination of their residual capabilities to perform the task. Among the skills that patients may retain, eye movement and electromyographic (EMG) signals have been chosen for these tests.

Specifically, the designed system is composed of the following:

  • An object tracking system, which gives the position of the object that will be handled.

  • A gaze tracking device that will determine which object the patient is looking at.

  • EMG measuring units used as a trigger of several actions.

  • An end-effector rehabilitation robot that will assist the patient to perform reaching movements.

  • A hand exoskeleton for grasping the objects.

  • A computer that implements the high-level control (HLC) system, which will process and coordinate the signals of each device and will determine the control actions.
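A rough sketch of how such a high-level controller might coordinate these modalities (the control logic and all names below are hypothetical; the article does not specify the HLC algorithm): the gaze tracker nominates the target object, the object tracker supplies its position, and an EMG activation confirms the reach-and-grasp action.

```python
def hlc_step(gazed_object, emg_active, objects):
    """One cycle of a hypothetical high-level control loop.
    `objects` maps tracked object names to their positions; the
    gazed-at object becomes the candidate target, and an EMG
    activation triggers the robot reach and exoskeleton grasp."""
    if gazed_object not in objects:
        # No tracked object under the user's gaze: do nothing.
        return {"action": "idle", "target": None}
    if not emg_active:
        # Candidate selected by gaze, waiting for EMG confirmation.
        return {"action": "track", "target": gazed_object}
    # EMG trigger: command the rehabilitation robot to reach the
    # object and the hand exoskeleton to close around it.
    return {"action": "reach_and_grasp",
            "target": gazed_object,
            "goal": objects[gazed_object]}
```

Separating selection (gaze) from confirmation (EMG) means an involuntary glance never moves the robot on its own, which matches the safety emphasis of the experimentation.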

The communication and relationships between the elements are shown in Figure 1.


Figure 1. System architecture and communications between components.

Continue —> Multimodal robotic system for upper-limb rehabilitation in physical environment

Figure 2. Difference between hand position and end-effector position with respect to the reference frame of the end-effector.

[Abstract] The Soft-SixthFinger: a Wearable EMG Controlled Robotic Extra-Finger for Grasp Compensation in Chronic Stroke Patients. – IEEE Xplore

 

Abstract

This paper presents the Soft-SixthFinger, a wearable robotic extra-finger designed to be used by chronic stroke patients to compensate for the missing hand function of their paretic limb. The extra-finger is an underactuated modular structure worn on the paretic forearm by means of an elastic band. The device and the paretic hand/arm act like the two parts of a gripper working together to hold an object. The patient controls the flexion/extension of the robotic finger through the eCap, an electromyography-based (EMG) interface embedded in a cap. The user can control the device by contracting the frontalis muscle; such a contraction can be achieved simply by moving his or her eyebrows upwards. The Soft-SixthFinger has been designed as a tool that chronic stroke patients can use to compensate for grasping in many Activities of Daily Living (ADL). It can be wrapped around the wrist and worn as a bracelet when not in use. Its light weight and fully wireless connection with the EMG interface guarantee high portability and wearability. We tested the device in qualitative experiments involving six chronic stroke patients. Results show that the proposed system significantly improves the patients’ performance in the proposed tests and, more generally, their autonomy in ADL.
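A minimal sketch of this kind of single-muscle interface (a hypothetical toggle controller; the abstract does not describe the actual control law): each detected frontalis contraction event flips the extra-finger between flexion and extension.

```python
def toggle_states(contractions, initial="extended"):
    """Track the extra-finger state over a sequence of control
    cycles. `contractions` holds one boolean per cycle: True means
    a frontalis contraction (eyebrow raise) was detected."""
    state = initial
    history = []
    for contracted in contractions:
        if contracted:
            # Each contraction toggles flexion <-> extension.
            state = "flexed" if state == "extended" else "extended"
        history.append(state)
    return history
```

A toggle needs only a binary event from the eCap, which is one plausible reason a single muscle far from the paretic limb can drive the whole flexion/extension cycle.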

Source: IEEE Xplore Abstract – The Soft-SixthFinger: a Wearable EMG Controlled Robotic Extra-Finger for Grasp Compensation in Chron…
