Abstract—Upper-limb robotic rehabilitation systems should inform therapists about the status of their patients. Such therapy systems must be developed carefully, taking into account the real-life uncertainties associated with sensor error. In this paper, we describe a system composed of a depth camera that tracks the motion of the patient's upper limb, and a robotic manipulator that challenges the patient with repetitive exercises. The goal of this study is to propose a motion analysis system that improves the readings of the depth camera through the use of a kinematic model describing the motion of the human arm. In our current experimental setup we use the Kinect v2 to capture a participant who performs rehabilitation exercises with the Barrett WAM robotic manipulator. Finally, we provide a numerical comparison among the standalone measurements from the Kinect v2, the motion parameters estimated by our system, and the VICON system, which we treat as an error-free ground-truth apparatus.
It is generally accepted that modern physical rehabilitation plays an essential role in the enhancement or restoration of motor skills impaired by inherent or incidental disorders. Such disorders may result from a variety of causes, such as amputation, spinal cord injury, musculoskeletal impairment, and even brain injury. In light of this, robotic rehabilitation augments classical rehabilitation techniques, in the sense that adaptable robotic devices, such as mechanical manipulators, can be used to complement the training routines of a physiatrist or occupational therapist. In this paper we describe and evaluate a novel system that can be used by physicians and therapists to monitor the state of the upper limbs of a patient performing exercises. The system emphasizes the use of the Microsoft Kinect v2 as opposed to wearable sensors, such as embedded accelerometers, gyroscopes and EMG electrodes. In the following sections we present, analyze and evaluate the proposed system. Specifically, in Section 2 we discuss how related studies tackle the problem of pose estimation with vision-based or wearable sensors. Furthermore, we discuss how our system exploits kinematic formulas that originate from robotic mechanics and describe the motion of rigid bodies abstracted as a kinematic chain. We also present an overview of the system, address its core processes, and state the assumptions that lead to the system's realization. Finally, in the last sections of the paper we detail the physical experimental setup for the assessment of the system and consider possible avenues for future work.
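To make the kinematic-chain abstraction mentioned above concrete, a standard formulation (a sketch, not the specific model developed in this paper) expresses the pose of the distal segment, e.g. the wrist, relative to a base frame at the shoulder as a product of homogeneous transforms, one per joint:

```latex
T_0^n(\boldsymbol{\theta}) \;=\; \prod_{i=1}^{n} A_i(\theta_i),
\qquad
A_i(\theta_i) \;=\;
\begin{bmatrix}
R_i(\theta_i) & p_i \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
```

where each \(A_i(\theta_i)\) maps the frame of link \(i\) into the frame of link \(i-1\), \(R_i(\theta_i)\) is the rotation induced by joint angle \(\theta_i\), and \(p_i\) is the fixed link offset. For a simplified human arm, a common (here assumed, illustrative) choice is \(n = 4\): three rotational degrees of freedom at the shoulder plus elbow flexion.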