Breakthroughs, the newsletter of the Feinberg School of Medicine Research Office

March 2024 Newsletter

Using AI and Precision Rehabilitation Interventions to Measure and Improve Patient Function

Read the Q&A below

Faculty Profile

R. James Cotton, MD, PhD, ’20 GME, is assistant professor of Physical Medicine and Rehabilitation. He is a physician-scientist with appointments at Feinberg and the Shirley Ryan AbilityLab. His research combines AI, wearable sensors, computer vision, causal and biomechanical modeling, and other novel technologies to develop methods that more precisely monitor and improve rehabilitation outcomes.

What are your research interests? 

The research interests of my laboratory are driven by my clinical experience in rehabilitation medicine and revolve around improving long-term functional outcomes for our patients. This involves technology development, such as creating new wearable sensors to monitor function in the community. We are enthusiastic about the power of computer vision to analyze movements from easily accessible smartphone videos. To make either of these useful, we need to measure biomechanically grounded movements that are clinically meaningful, so we do a lot of deep learning algorithm design and training. This work has produced large amounts of data, which we are using to train a foundation model for gait analysis that can solve many different tasks and even align the computational understanding of movement with clinical descriptions. We are also applying causal inference to this data to learn which interventions result in better outcomes for individual patients.
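For readers curious what smartphone-video movement analysis can look like in practice, here is a minimal sketch that uses the off-the-shelf MediaPipe pose estimator to pull an ankle trajectory from a walking video and count rough step-like peaks. It is a generic illustration rather than the lab's biomechanically grounded pipeline, and the video path, visibility threshold, and peak-counting rule are placeholder assumptions.

```python
# Minimal sketch: extract a 2D ankle trajectory from a smartphone video
# with an off-the-shelf pose estimator and count rough step-like peaks.
# Illustrative only; not the lab's biomechanical pipeline. The video path
# and thresholds are placeholders.
import cv2
import mediapipe as mp

VIDEO_PATH = "walking_clip.mp4"  # hypothetical input video

mp_pose = mp.solutions.pose
left_ankle_y = []  # vertical image coordinate of the left ankle per frame

with mp_pose.Pose(static_image_mode=False) as pose:
    cap = cv2.VideoCapture(VIDEO_PATH)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images; OpenCV reads frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            ankle = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_ANKLE]
            if ankle.visibility > 0.5:  # keep only confident detections
                left_ankle_y.append(ankle.y)
    cap.release()

# Very rough step count: local maxima of the ankle's vertical position
# (normalized image coordinates, so larger y is lower in the frame).
steps = sum(
    1
    for i in range(1, len(left_ankle_y) - 1)
    if left_ankle_y[i] > left_ankle_y[i - 1] and left_ankle_y[i] > left_ankle_y[i + 1]
)
print(f"Frames with a confident left ankle: {len(left_ankle_y)}; rough step peaks: {steps}")
```

In practice, clinically useful gait analysis requires far more than a single 2D landmark trajectory, which is why biomechanical modeling and purpose-built deep learning are central to the work described above.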

Another line of research in the laboratory is developing novel and accessible therapeutic approaches for improving function after spinal cord injury. This approach uses our wearable sensors to record electromyographic (EMG) activity and provides it as biofeedback to participants through games played on a smartphone or tablet. Patients report that this gamified EMG biofeedback therapy is fun and engaging, and we have found it promotes a high number of repetitions. Providing large volumes of therapy in an enjoyable way that people can perform independently is essential, as animal studies have shown the benefits of dosages orders of magnitude higher than we achieve in traditional rehabilitation.
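As a rough illustration of the biofeedback idea, the sketch below turns a (here simulated) EMG signal into a smoothed activation envelope and maps it to a 0-100 game control value. The sampling rate, smoothing window, and calibration maximum are assumptions for the example; a real system would read from the wearable sensor rather than synthesizing data.

```python
# Minimal sketch of EMG-driven biofeedback: rectify and smooth a raw EMG
# signal into an activation envelope, then map it to a 0-100 game input.
# The EMG here is simulated; sampling rate, window, and calibration
# maximum are illustrative assumptions.
import numpy as np

FS = 1000              # assumed sampling rate (Hz)
WINDOW = 100           # smoothing window (~100 ms envelope)
CALIBRATION_MAX = 0.8  # envelope value treated as "full effort" (placeholder)

def emg_envelope(raw_emg: np.ndarray, window: int = WINDOW) -> np.ndarray:
    """Remove the offset, rectify, and apply a moving-average envelope."""
    rectified = np.abs(raw_emg - raw_emg.mean())
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def to_game_input(envelope: np.ndarray) -> np.ndarray:
    """Scale the envelope to a 0-100 control value for the game."""
    return np.clip(envelope / CALIBRATION_MAX, 0.0, 1.0) * 100.0

# Simulate one second of EMG with a burst of muscle activity in the middle.
t = np.arange(FS) / FS
burst = (t > 0.4) & (t < 0.7)
raw = 0.05 * np.random.randn(FS) + burst * 0.6 * np.random.randn(FS)

control = to_game_input(emg_envelope(raw))
print(f"Peak game input during the burst: {control.max():.1f} / 100")
```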

What is the goal of your research? 

We have two broad goals at different time scales: to use novel sensing methods and AI to better measure how people move, and then to use these measurements and the understanding they yield to improve function for our patients. These goals stem from the fact that in physical medicine and rehabilitation, a common treatment target is improving how people move and function in the world. In contrast to other medical disciplines, which have laboratory tests and imaging, current clinical practice gives us very impoverished ways of measuring this.

We have been developing a range of methods to capture movement, from wearable sensors to analysis of smartphone video to video from multiple synchronized cameras. These systems are designed from the ground up to integrate easily into routine clinical practice, and our hope is to be able to carefully characterize the movement impairments of all patients seen in inpatient and outpatient rehabilitation, and even at home. We believe these novel measurement methods with AI-powered analysis provide a rapid pathway to broad translation and dissemination. Our ultimate goal is to use these approaches to power precision rehabilitation interventions and improve the long-term functional outcomes of our rehabilitation patients.

How did you become interested in this area of research? 

This research interest was a natural outgrowth of my clinical practice and connects to my graduate research in a visual neuroscience laboratory working at the intersection of neuroscience and AI. Specifically, I started working on wearable sensors through collaborations with John Rogers during my residency. I then began working on computer vision to make sense of this complicated sensor data as a resident who could not afford a motion capture system. This was motivated by thought leaders in rehabilitation emphasizing the importance of more regular and careful measurement of movement quality and impairments. My interest in applying causal inference to this data to enable precision rehabilitation arose after we had our measurement approaches working but realized there was a clear gap in the field for how to use this data to provide targeted interventions and improve function in the real world.

What types of collaborations are you engaged in across campus (and beyond)? 

Our tools make it much easier to measure clinically relevant features of movement, which is leading to numerous collaborations. We are working with Matty Major to characterize walking patterns in lower limb prosthesis users and identify gait features predictive of falls, and with Levi Hargrove to capture gait kinematics of participants testing novel prostheses being developed in Shirley Ryan AbilityLab’s Center for Bionic Medicine. We are also integrating our markerless motion capture system into more workflows at Shirley Ryan AbilityLab and working with many physical therapists to ensure the system is easy to use and improves the care they can provide.  

We are also working with Wendy Murray and Lee Miller on fitting biomechanical arm models to videos collected with our markerless motion capture system. I was the mentor on Colleen Peyton’s KL2, which has burgeoned into an ongoing collaboration characterizing movement patterns of infants to identify those at increased risk of movement impairments, such as cerebral palsy. Our portable biomechanics laboratory, which uses a smartphone app to acquire video and depth imaging simultaneously with wearable sensor measurements, is also being used in several collaborations including with the Human Longevity Lab and a remote collaboration in St. Louis.  

We are also working with teams at Stanford, Baylor College of Medicine, University of Göttingen, and University of Pittsburgh on developing new AI algorithms, including physics-based biomechanical simulation. I hope to continue strengthening connections with other departments at Northwestern including BME, EE and CS, as there are so many opportunities to leverage the data we are collecting to improve rehabilitation outcomes. 

How is your research funded? 

My research is supported by funding from the Craig H. Neilsen Foundation. I have also received support from the Stanford Restore Center and the American Neuromuscular Foundation. Much of this research is also generously funded through the Research Accelerator Program at Shirley Ryan AbilityLab. 

Where have you recently published papers? 

We recently published a paper in Scientific Reports validating our portable biomechanics laboratory with lower limb prosthesis users.

As an early career scientist, I have very much enjoyed publishing in the proceedings of the Institute of Electrical and Electronics Engineers (IEEE). Their single-cycle review process has been essential in allowing me to continue making progress rather than spending time on revisions, and presenting at these meetings has been a great opportunity to meet people in the field as I transitioned from systems neuroscience. I was quite pleased that our work on biomechanical reconstruction from markerless motion capture received first prize at the International Conference on Rehabilitation Robotics (ICORR) this year, which had a nice symmetry, as I also received a first prize poster award there as a resident for some earlier work.