PiRehab: Raspberry Pi-Based Remote Rehab Tool

    Team project for ECE 5725 (instructor: Joe Skovira)

    Team: Heather Kim, Jenny Wen, and Aratrika Ghatak

    Objective

    This project aims to provide at-home rehabilitation for edema patients whose access to inpatient therapy has been hindered during the pandemic. At-home rehabilitation is enabled by an embedded system utilizing a Raspberry Pi 4, PiCamera, PiTFT screen, and RFID modules. Currently, common ways to treat edema are limited to pneumatic devices and manual edema mobilization (MEM), which require the active engagement of healthcare providers. In contrast to these methods, the system we propose is portable and lets patients monitor swelling of the hands, display exercise guides, and manage rehab exercises. Over the course of rehabilitation, RFID tags generate directories unique to each patient, where data from daily rehab, along with photographs, is automatically uploaded to a server. To carry out this procedure, the system utilizes (1) OpenCV to access the camera and approximate hand volume, (2) MediaPipe to monitor finger flexion ranges during rehabilitation, (3) RFID to generate rehab logs that detail each patient’s information, records of compliance, and rehab results, and (4) automatic data upload to a server, which allows therapists to access and monitor patients’ rehab as they please.

    Introduction

    During the pandemic, one-third of the US population experienced difficulties in accessing in-person therapeutic regimens. The situation is even worse for patients suffering from chronic edema, which manifests in a variety of body locations and requires frequent in-person interactions between healthcare providers and patients. Essential to edema therapy is manually mobilizing edematous fluid from the fingertips to the metacarpophalangeal (MCP) joints, a process that is inherently skin-to-skin and thus undesirable during the pandemic. In light of this, we propose an embedded system that enables self-management of edema for patients and, at the same time, remote patient management for therapists. To provide a self-contained therapeutic regimen, the device primarily utilizes a Raspberry Pi 4, PiCamera, RFID module, and PiTFT. Patients can navigate menus on the PiTFT and carry out the following activities: (1) monitor swelling by scanning the hand, (2) display guided rehabilitation exercises, and (3) check the flexibility of their fingers while exercising. All patient data is automatically uploaded to a server for therapists to access asynchronously.

    Design and Testing

    To help patients comply with the at-home system, our goal was to eliminate the complicated procedures of conventional therapy at clinics. The demanding process of volumetric measurement of the hand, for instance, is replaced with scanning the hand through a camera; we use the 2D area of the hand as a proxy for volume. In addition, the flexion range, which conventionally must be measured manually, is measured contactlessly using MediaPipe. We used a PiTFT screen with physical buttons so patients can navigate intuitively between activities. We detail the components of the system design below.

    OS Upgrade

    The first part of our project was importing the libraries we needed to use. Because we were using MediaPipe to measure finger flexibility, we had to upgrade our 32-bit OS to a 64-bit OS. This was done seamlessly using Raspberry Pi Imager.

    One part of the importing process that threw us for a loop was calling ‘pip3 install’ instead of ‘sudo pip3 install’. The former installs packages for the local user, but the latter installs packages for the superuser. This is an important distinction because when running code with ‘sudo python3’, as we had to since we were incorporating the PiTFT, the Pi will look for imported packages under /root and not in the local user’s folder.
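A quick way to see this difference is to print the interpreter’s package search paths. A minimal sketch (the script name is our own; run it once as a normal user and once under sudo to compare):

```python
import sys
import site

# Show which interpreter is running and where it searches for packages.
# Running this as `python3 check_paths.py` versus `sudo python3 check_paths.py`
# shows that root resolves imports from a different set of directories
# than the local user, which is why `sudo pip3 install` was needed.
print("interpreter:", sys.executable)
print("user site-packages:", site.getusersitepackages())
for path in sys.path:
    print("search path:", path)
```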

    PiTFT Interface

    The PiTFT is the primary interface in which patients navigate various functions. It leads patients to the following menus: (1) scanning hand volume, (2) assessing finger flexibility, and (3) finger rehabilitation guides. Within each menu, patients can go back and forth if they want to rescan their hands, record their rehabilitation session, or quit the session. The image below shows the menu screen that is displayed after the patient has scanned their badge.

    The initial four menu options were attached to different callback functions tied to the buttons on the PiTFT (which use GPIO pins 17, 22, 23, and 27). We were able to do this through the use of various flags within the callback functions. The first flag, show_level_1, signified the startup menu. The callback for the first button (GPIO 17) was to complete the hand volume scanning. Hence, if the show_level_1 flag was set already, meaning the TFT was on the initial menu screen, this button should be mapped to the hand volume scanning code. We first checked if show_level_1 was True, and then set hand_vol_scan to True, which triggered the hand volume scanning code in our while loop. We set up the other two menu items, for finger flexibility scanning and exercises, similarly. GPIO pin 27 was set up as a physical quit button.

    We used this system of setting different flags corresponding to different parts of the code for the sub-menu items as well. For example, once the user had gone to the exercises screen, they had the option to pick an exercise using three external buttons. Based on the exercise they had chosen, they could either demo or practice the exercise. To map the same GPIO pins to different options, we checked the flags: if the exercise_1, exercise_2, or exercise_3 flag was set, then GPIO pin 11 was mapped to the play_picture flag, which was set to True and triggered the display of the demo picture. If GPIO 22 was pressed while in the exercise menu, the practice flag was set to True, triggering the code that took a picture of the user doing the exercise.
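The flag mechanism can be sketched as a small dispatcher. The handler and flag names below are illustrative, not verbatim from our code; on the Pi, each handler would be registered as a button callback with RPi.GPIO’s add_event_detect:

```python
# Illustrative sketch of the flag-based menu dispatch. On the Pi, each
# handler would be registered with
# GPIO.add_event_detect(pin, GPIO.FALLING, callback=..., bouncetime=300),
# and the main while loop would act on whichever flags are set.
state = {
    "show_level_1": True,    # startup menu is showing
    "hand_vol_scan": False,  # triggers hand volume scanning in the main loop
    "exercise_menu": False,  # an exercise sub-menu is showing
    "play_picture": False,   # show the demo picture for an exercise
    "practice": False,       # take a picture of the user exercising
}

def on_button_17(channel=17):
    # On the startup menu, button 17 starts the hand volume scan.
    if state["show_level_1"]:
        state["show_level_1"] = False
        state["hand_vol_scan"] = True

def on_button_11(channel=11):
    # Inside an exercise sub-menu, the same pin means "play the demo".
    if state["exercise_menu"]:
        state["play_picture"] = True

def on_button_22(channel=22):
    # In the exercise menu, button 22 starts the practice capture.
    if state["exercise_menu"]:
        state["practice"] = True

# Simulated press: from the startup menu, choose hand-volume scanning.
on_button_17()
assert state["hand_vol_scan"] and not state["show_level_1"]
```

Because every handler first checks which menu is active, the same physical button can safely mean different things on different screens.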

    The main challenge of getting the TFT screen to work was keeping track of all the flags and making sure to turn them off at the right points in the code. For example, once the camera had taken the picture and written it to a file, we needed to turn off the take_picture flag so the program wouldn’t continuously retake the picture. This part of the project was visual, so we could verify the code simply by checking whether the correct menu options were displayed for the buttons that were pressed, and whether the images had been saved to the patient’s folder properly.

    We initially wanted to include touchscreen capabilities on our PiTFT instead of using buttons. However, we were unable to complete the Wheezy downgrade and calibrate the screen successfully once we upgraded to the 64-bit OS, which caused the wrong screen coordinates to be reported when we tested by tapping on the screen. Because of this, we decided to navigate the menu options with the buttons instead.

    Hand Volume Tracking through OpenCV

    It is crucial in edema therapy to monitor the patient’s hand volume and track volumetric changes. In practice, the change in hand volume pre- and post-therapy is measured with a volumeter: patients submerge their hands in water and the amount of water displaced is measured. The process can be demanding and thus requires the active engagement of therapists. To streamline the process, we applied a skin mask to the picture taken by the camera (which thresholds the image using rough upper and lower bounds for skin color in the HSV color system), then calculated what percentage of the image’s pixels fell within the threshold. As an increase in hand volume correlates with an increase in cross-sectional area, our method serves as a simplified way to measure changes in hand volume as patients go through therapy.
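The thresholding step reduces to counting pixels whose HSV values all fall inside the skin bounds. A minimal NumPy sketch (in the real program, cv2.cvtColor and cv2.inRange produce the HSV image and the mask; the bounds and the tiny synthetic image below are placeholders, not our calibrated values):

```python
import numpy as np

def skin_pixel_fraction(hsv_img, lower, upper):
    """Fraction of pixels whose (H, S, V) values all fall within bounds.

    hsv_img: H x W x 3 uint8 array in OpenCV's HSV convention
    (H in 0..179, S and V in 0..255). This is equivalent to applying
    cv2.inRange(hsv_img, lower, upper) and averaging the resulting mask.
    """
    mask = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=2)
    return mask.mean()

# Placeholder skin-tone bounds -- illustrative, not our calibrated range.
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 180, 255], dtype=np.uint8)

# Synthetic 2x2 "image": one skin-toned pixel, three background pixels.
hsv = np.array([[[10, 100, 200], [100, 10, 255]],
                [[100, 10, 255], [100, 10, 255]]], dtype=np.uint8)
print(skin_pixel_fraction(hsv, lower, upper))  # 0.25
```

Tracking this fraction across sessions gives the relative area measurement the therapy needs, without any absolute calibration.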

    We first wanted to make sure the images we took with the camera were consistent, so we 3D-printed a “photo booth” where we mounted the PiCamera and provided a guided location for patients to position their hands. To obtain clear contour data, it was important to get good contrast between the hand and the backdrop and to set an appropriate HSV range for skin color. Having a white background and a consistent amount of luminance in the booth helped reduce noise in the data.

    One challenge in generating the mask images was determining the appropriate range of colors for skin tone. We did a lot of testing for this but ultimately determined that more image preprocessing would be needed to estimate the area of the patient’s hand in the frame more accurately. However, the low accuracy may not be a big problem as long as the measurements are consistent with one another, as they are in our setup with the mounting box and the flashlight: as the patient goes through the rehab regimen, relative measurements matter more than absolute ones.

    The figures below show the image that was taken, and the thresholded image that was created from it.

    Acquisition of Finger Flexion Range through MediaPipe

    We acquire 21 hand landmarks by feeding MediaPipe a live stream of hand images from the PiCamera. These landmarks include the Distal Interphalangeal (DIP), Proximal Interphalangeal (PIP), and Metacarpophalangeal (MCP) joints. Using this information, we calculated the flexion range of the PIP joint on the index, middle, ring, and pinky fingers by forming normalized vectors between adjacent joints and taking the arccosine of their dot product.
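The angle at each PIP joint is the angle between the two bone vectors that meet there. A sketch of the computation, assuming landmark coordinates like those MediaPipe Hands returns (the sample points below are made up for illustration):

```python
import numpy as np

def flexion_angle(mcp, pip, dip):
    """Angle (degrees) at the PIP joint given three landmark positions.

    Each argument is an (x, y) or (x, y, z) landmark position, e.g. from
    MediaPipe Hands. We form the two bone vectors meeting at the PIP
    joint, normalize them, and take the arccosine of their dot product.
    """
    v1 = np.asarray(mcp, dtype=float) - np.asarray(pip, dtype=float)
    v2 = np.asarray(dip, dtype=float) - np.asarray(pip, dtype=float)
    v1 /= np.linalg.norm(v1)
    v2 /= np.linalg.norm(v2)
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

# Made-up landmark positions: a fully straight finger gives 180 degrees.
print(flexion_angle((0.0, 0.0), (0.0, 1.0), (0.0, 2.0)))  # 180.0
# A right-angle bend at the PIP joint gives 90 degrees.
print(flexion_angle((0.0, 0.0), (0.0, 1.0), (1.0, 1.0)))  # 90.0
```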

    We decided to display all of this information on the PiTFT both for the patient to have a visual idea of what was being done in the background as well as a debugging tool for ourselves. We saw that MediaPipe was interpreting the location of the joints fairly accurately, and the angles that were calculated were reflective of the estimates we made.

    Running Exercises through OpenCV and PyGame

    For this part of the program, we initially wanted to play demo videos of various exercises for the patients to watch, then record videos of the patients performing those actions. Unfortunately, we were unable to record video from our main program even though small, isolated test programs worked with our setup. We could not figure out why this was the case, but we determined that displaying demo pictures (through PyGame) and taking still pictures (through OpenCV) served the same purpose, because the provider only needs to see the patient attempting the simple exercises.

    RFID for Patient-Specific Data

    Each patient is able to record their data and log their rehab throughout therapy using an RFID tag. Patients are given RFID tags as they start rehabilitation so they can scan the tag and have the Raspberry Pi populate folders associated with each patient. All data is exported to folders titled with the patients’ unique IDs. The data associated with a unique patient ID includes photographs of their rehabilitation and logs of hand volume and finger flexibility angles, and is also made available to healthcare providers through a server.

    One hurdle we had to overcome with the RFID module was that it uses SPI to communicate with the Raspberry Pi, which the PiTFT screen also uses. We followed one particular StackExchange post carefully (linked in the References section) to make sure both devices could be used at the same time, which worked perfectly for us.
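Once a tag UID has been read, the per-patient bookkeeping is straightforward. A sketch of the directory setup (the hardware read step appears only in a comment, since it needs the RFID module; the function names are ours, not from the project code):

```python
from pathlib import Path

def patient_dir(base, tag_id):
    """Create (if needed) and return the data folder for one RFID tag.

    On the Pi, tag_id would come from the RFID reader -- for example,
    the id in the (id, text) pair returned by SimpleMFRC522().read() --
    but here it is passed in directly so the bookkeeping runs anywhere.
    """
    folder = Path(base) / str(tag_id)
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def log_measurement(folder, line):
    # Append one rehab measurement (hand area, flexion angles, ...)
    # to the patient's running log file.
    with open(Path(folder) / "rehab_log.txt", "a") as f:
        f.write(line + "\n")
```

Keying everything on the tag UID means the same code path serves every patient, and the resulting folder tree maps directly onto what gets uploaded to the server.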

    Server Configuration

    All of the patient data gets uploaded to a server upon exit of the program. For this process to be automatic, we created a public and private ssh key pair using ssh-keygen, saved the public key in the server’s .ssh/authorized_keys file, and saved the private key in the .ssh folder of our local Pi. Importantly, as mentioned above while discussing package installation, this had to be the .ssh folder under /root rather than the normal user’s, because we were running our program with sudo. One more thing we had to do to transfer files automatically was to make sure the server was treated as a trusted host. We did this by running a small program whose only job was to perform an scp transfer of a random file to the server with StrictHostKeyChecking=no set in the command. After doing this once, the server was permanently added as a trusted host, so we could transfer files afterward without answering any prompts on the command line.
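The upload boils down to one scp invocation per patient folder. A sketch of how that command can be assembled (the hostname, paths, and function name are placeholders, not our actual configuration):

```python
import subprocess

def build_scp_command(local_dir, remote, key_path, first_run=False):
    """Assemble the scp command used to push one patient folder.

    remote is a "user@host:/path" destination (placeholder here).
    -i points at the private key created with ssh-keygen. On the first
    transfer, StrictHostKeyChecking=no accepts the server's host key
    without an interactive prompt; afterwards the host is remembered.
    """
    cmd = ["scp", "-r", "-i", key_path]
    if first_run:
        cmd += ["-o", "StrictHostKeyChecking=no"]
    cmd += [local_dir, remote]
    return cmd

cmd = build_scp_command("patient_123/", "therapist@example.com:/data/",
                        "/root/.ssh/id_rsa", first_run=True)
print(" ".join(cmd))
# On the Pi, this would actually run as: subprocess.run(cmd, check=True)
```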

    Results and Conclusion

    Our team was able to meet most of the requirements for the device to serve as an at-home rehabilitation tool. The program has a hand volume scanning feature, a finger flexibility scanning feature, and an exercise feature, and we made the system entirely portable by running the program upon bootup of the Raspberry Pi. However, there remain elements that could improve the user experience. First, image preprocessing could improve the accuracy of the hand volume scanning feature, as could improving the design of the photo booth box by including a hole in the top for a flashlight to shine through. Second, the device could provide more detailed rehab demos and exercises through videos if, in the future, we figure out how to play videos on the PiTFT. Finally, recording the patients through the camera as they perform the exercises was blocked by issues we could not work around; while the current approach of taking still images does its job, live videos would help therapists assess how patients carry out the exercises.

    Future Work

    One of the main things we would like to accomplish in the future is improving our hand contours. By applying some image preprocessing to the hand pictures, we would be able to get a much clearer contour, which would help healthcare providers know the exact extent of swelling in a patient’s hand.

    Another area of improvement would be including patient identification information in the RFID implementation, e.g. writing patients’ names to the tag or card instead of just an arbitrary ID unique to each patient. This would be more practical for real-life applications.

    Ideally, we would also like to be able to save videos of patients completing the exercises. However, we were unable to do this because of issues with OpenCV. Hopefully, we will be able to find a workaround in the future for this and be able to save videos for healthcare providers to view.