The increasing demand for intense, task-specific neurorehabilitation following neurological conditions such as stroke and spinal cord injury has stimulated extensive research into rehabilitation technology over the last two decades.1,2 In particular, robotic devices have been developed to deliver a high dose of engaging repetitive therapy in a controlled manner, decrease the therapist’s workload and facilitate learning. Current evidence from clinical interventions using these rehabilitation robots generally shows results comparable to intensity-matched, conventional, one-to-one training with a therapist.3–5 Assuming the correct movements are being trained, the primary factor driving this recovery appears to be the intensity of voluntary practice during robotic therapy, rather than other factors such as the physical assistance provided.6,7 Moreover, most existing robotic devices for training the upper limb (UL) tend to be bulky and expensive, raising further questions about the use of complex, motorised systems for neurorehabilitation.
Recently, simpler, non-actuated devices, equipped with sensors to measure patients’ movement or interaction, have been designed to provide performance feedback, motivation and coaching during training.8–12 Research in haptics13,14 and human motor control15,16 has shown how visual, auditory and haptic feedback can be used to induce learning of a skill in a virtual or real dynamic environment. For example, simple force sensors (or even electromyography) can be used to infer motion control17 and provide feedback on the required and actual performances, which can allow subjects to learn a desired task. Therefore, an appropriate therapy regime using passive devices that provide essential and engaging feedback could enhance learning of improved arm and hand use.
Such passive sensor-based systems can be used for both impairment-based training (e.g. gripAble18) and task-oriented training (ToT) (e.g. AutoCITE8,9, ReJoyce11). ToT views the patient as an active problem-solver, focusing rehabilitation on the acquisition of skills for the performance of meaningful and relevant tasks rather than on isolated remediation of impairments.19,20 ToT has proven beneficial for participants and is currently considered a dominant and effective approach for training.20,21
Sensor-based systems are ideal for delivering task-oriented therapy in an automated and engaging fashion. For instance, the AutoCITE system is a workstation containing various instrumented devices for training some of the tasks used in constraint-induced movement therapy.8 The ReJoyce uses a passive manipulandum with a composite instrumented object having various functionally shaped components to allow sensing and training of gross and fine hand functions.11 Timmermans et al.22 reported how stroke survivors can carry out ToT using tabletop objects fitted with inertial measurement units (IMUs) to record their movement. However, this system does not include force sensors, which are critical for assessing motor function.
In all these systems, subjects perform tasks such as reaching or object manipulation at the tabletop level, while receiving visual feedback from a monitor placed in front of them. This dislocation of the visual and haptic workspaces may affect the transfer of skills learned in this virtual environment to real-world tasks. Furthermore, there is little work on using these systems for quantitative task-oriented assessment of functional tasks. One exception is the ReJoyce arm and hand function test (RAHFT),23 which quantitatively assesses arm and hand function. However, the RAHFT primarily focuses on range of movement in different arm and hand functions and does not assess movement quality, which is essential for skilled action.24–28
To address these limitations, this article introduces a novel, sensor-based System for Independent Task-Oriented Assessment and Rehabilitation (SITAR). The SITAR consists of an ecosystem of modular devices capable of interacting with each other to provide an engaging interface with an appropriate real-world context for both training and assessment of the UL. The current realisation of the SITAR is an interactive tabletop with a visual display, touch and force sensing capabilities, and a set of intelligent objects. This system provides direct interaction with collocated visual and haptic workspaces and rich multisensory feedback through a mixed reality environment for neurorehabilitation.
The primary aim of this study is to present the SITAR concept and the current realisation of the system, together with preliminary data demonstrating the SITAR’s capabilities for UL assessment and training. The following section introduces the SITAR concept, providing the motivation and rationale for its design and specifications. Subsequently, we describe the current realisation of the SITAR, its different components and their capabilities. Finally, preliminary data from two pilot clinical studies are presented, which demonstrate the SITAR’s functionalities for ToT and assessment of the UL. […]