Physical therapy is crucial for rehabilitation following many types of surgery and injury, but it is often severely hampered by limited access to therapists and poor adherence to home therapy regimens. Similarly, wellness and ergonomics training can be crucial components of preventative medicine, but they are often neglected due to a lack of access to proper expertise and guidance. This project aims to develop a computer vision-based mobile system that helps people perform accurate physical therapy, fitness training, and ergonomics exercises, while letting medical caregivers track patients' progress and compliance. Our proposed real-time monitoring and guidance system integrates expertise from seemingly disparate disciplines - computer vision, computer gaming, wireless networking, high-dimensional machine learning, and human factors - into an integrated solution that holds great promise to transform physical therapy, fitness training, and ergonomic training through a quantitative process that can be carried out at home or in the workplace. Fundamental advances in the core disciplines include new hand and body pose estimation and tracking algorithms that are robust to hand-hand interactions, rapid motion, and occlusions, as well as machine learning and avatar rendering algorithms for sensor fusion and expert-trained guidance logic, supporting both cloud-based and local usage. The aim is to provide avatar-based training and real-time visual guidance on mobile devices and virtual reality (VR) platforms such as Oculus and Samsung Gear VR, enabling end-users to enhance accuracy, effectiveness, and safety in therapy, fitness, and ergonomics applications.
The above figure shows the architecture and data flow of our proposed interactive training and guidance system, which enables demonstration rendering of exercise activities using avatars, real-time tracking of the user's performance using sensors, and real-time guidance. The figure shows two proposed modes of operation: (i) local mode, in which the system resides on the local device, such as a tablet or Gear VR, and expert session updates must be downloaded before a new live session, and (ii) cloud mode (shaded), in which tasks are performed in the cloud and the rendered video is streamed to the local device. Sensor data are collected through a laptop or a Raspberry Pi and transmitted over WiFi to the local device, which in cloud mode compresses and transmits the data to the cloud servers. Scalability and usability advantages are making the cloud the preferred platform for many applications. Cloud rendering will enable interactive training and guidance from any device, but may pose challenges in response time and network cost, so we will examine both modes.
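To make the cloud-mode data path concrete, the sketch below shows one plausible wire format for a single sensor frame: joint coordinates are JSON-encoded, optionally zlib-compressed for transmission to the cloud, and prefixed with a small header. The frame layout, field names, and compression choice are illustrative assumptions, not the project's actual protocol.

```python
import json
import struct
import zlib

def pack_frame(timestamp, joints, compress=True):
    # Hypothetical wire format: 1-byte compression flag, 4-byte big-endian
    # payload length, then the (optionally compressed) JSON payload.
    payload = json.dumps({"t": timestamp, "joints": joints}).encode("utf-8")
    if compress:
        payload = zlib.compress(payload)
    return struct.pack(">BI", int(compress), len(payload)) + payload

def unpack_frame(packet):
    # Reverse of pack_frame: read the header, then decode the payload.
    compressed, length = struct.unpack(">BI", packet[:5])
    payload = packet[5:5 + length]
    if compressed:
        payload = zlib.decompress(payload)
    frame = json.loads(payload.decode("utf-8"))
    return frame["t"], [tuple(j) for j in frame["joints"]]
```

In local mode the compression step could be skipped entirely; in cloud mode each packed frame would be handed to a socket for transmission to the server.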
The following demonstration video introduces the prototype of our system.
User Performance Evaluation and Real-Time Guidance
In the proposed system, two kinds of delay can make it challenging to correctly calculate the accuracy of the user's movement relative to the physical therapist (PT) avatar's movement: human reaction delay (the lag between the avatar's instruction or motion and the user's response) and mobile network delay (which may delay when the cloud-rendered avatar video reaches the user's device). In particular, these delays can misalign the motion sequences of the PT and the user, making it difficult to judge whether the user is following the PT avatar correctly. To solve this problem, we propose a Gesture-Based Dynamic Time Warping algorithm that segments the user's motion sequence into gestures and aligns and evaluates the gesture sub-sequences, all in real time. We develop an evaluation model that quantifies user performance based on different criteria provided by the PT for a task, trained with offline subjective test data consisting of user performance and PT scores. To help the user improve performance accuracy, we propose a guidance system that highlights the user's errors and provides visual and textual guidance.
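The alignment step above builds on classic dynamic time warping (DTW), which computes the minimum cumulative cost of matching two sequences while allowing stretching in time. The minimal sketch below shows the classic DTW recurrence on one-dimensional sequences; it omits the gesture segmentation and real-time windowing that distinguish the proposed Gesture-Based DTW, and the function name is our own.

```python
def dtw_distance(ref, user):
    # Classic DTW between a reference (PT avatar) sequence and a possibly
    # delayed user sequence. D[i][j] holds the minimum cumulative cost of
    # aligning ref[:i] with user[:j].
    n, m = len(ref), len(user)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - user[j - 1])
            # A step may advance the reference, the user sequence, or both.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW absorbs timing differences, a user sequence that is merely delayed scores close to zero, while a genuinely different motion accumulates cost - the property that lets the system evaluate accuracy despite reaction and network delays.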
Learning-based Task Recommendation System for Treatment of Patients with Parkinson’s Disease in a Physical Therapy Setting
Traditionally, a PT recommends physical therapy tasks to patients based on experience, sometimes with subjective bias. In addition, tasks and criteria cannot be updated before the patient's next visit with the PT, even if the patient has made significant progress. To improve treatment efficiency and provide patients with personalized task recommendations, we are developing a learning-based task recommendation system. For this work, we are focusing on patients with Parkinson's disease.
The overall treatment process for patients with Parkinson's disease can be divided into two stages. 1) Initial evaluation: the patient performs a set of mini-tasks, and the PT recommends initial tasks based on the patient's performance on them. 2) Follow-up task recommendation: the patient practices the recommended tasks at home. When the patient visits the PT again, the PT checks the patient's performance on the current tasks and modifies the tasks and criteria accordingly.
Traditional procedure for treatment of patients with Parkinson's disease
Learning-based task recommendation system for patients with Parkinson's disease
Traditionally, tasks and criteria are modified manually by the PT. However, the PT's recommendations may be affected by subjective bias. Moreover, tasks and criteria cannot be updated before the patient's next visit with the PT, even if the patient has made significant progress. The traditional approach also requires live patient-PT sessions, which may limit participation by patients with insurance limitations or an inability to travel to the PT's location. Therefore, we propose to develop an automated task recommendation model using machine learning techniques. This model will be trained offline using the performance data of multiple patients together with the corresponding PT recommendations, and can be used by PTs to remotely update patient tasks.
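One minimal way to sketch such a model is nearest-neighbor recommendation: represent each past patient by a performance feature vector paired with the tasks the PT actually recommended, and recommend a new patient the tasks of the most similar past patient. The 1-NN choice, the data layout, and all names below are illustrative assumptions, not the project's trained model.

```python
import math

def recommend_tasks(patient_features, training_data):
    """Nearest-neighbor sketch of a learning-based task recommender.

    patient_features: performance feature vector of the new patient.
    training_data: list of (feature_vector, task_list) pairs, where
    task_list is the PT's recommendation for that past patient.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Recommend the task list of the most similar past patient.
    best_features, best_tasks = min(
        training_data, key=lambda pair: dist(pair[0], patient_features)
    )
    return best_tasks
```

A deployed model would instead be trained offline on many patients' performance data and PT recommendations, but this sketch captures the core idea of mapping performance features to task sets.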