Dr. Ruogu Fang, assistant professor, and her UF collaborators received a University of Florida Informatics Institute (UFII) SEED Award for their research titled “Multimodal Visual-Text Learning from Clinical Narrative and Image for Early Detection of Diabetic Retinopathy.”
Fang (Co-PI) and Dr. Yonghui Wu (PI), assistant professor in the Department of Health Outcomes & Biomedical Informatics, are investigating a smartphone-based fundus camera, paired with a model that integrates clinical text and images, as a means of screening for diabetic retinopathy (DR).
Vision-threatening diseases are among the leading causes of blindness. DR, a common complication of diabetes, is the leading cause of blindness in American adults and the fastest-growing threat to the nearly 415 million diabetic patients worldwide. With professional eye imaging devices such as fundus cameras or Optical Coherence Tomography (OCT) scanners, most vision-threatening diseases are treatable if detected early.
However, these diseases continue to damage people’s vision and cause irreversible blindness, especially in rural areas and low-income communities where professional imaging devices and medical specialists are unavailable or unaffordable. In these areas, there is an urgent need to detect vision-threatening diseases before vision loss occurs.
The current practice of DR screening relies on human experts who manually examine and diagnose DR in stereoscopic color fundus photographs captured at hospitals with professional fundus cameras, a process that is time-consuming and infeasible for large-scale screening. It also places an enormous burden on ophthalmologists, lengthens waiting lists, and may undermine standards of care. Automatic DR diagnosis systems with ophthalmologist-level performance are therefore a critical and unmet need for DR screening.
Electronic Health Records (EHRs) have been increasingly adopted in US hospitals. Vast amounts of longitudinal patient data have accumulated and are available electronically as structured tables, narrative text, and images. There is a growing need for multimodal synergistic learning methods that link these different data sources for clinical and translational studies.
The recent emergence of AI technologies, especially deep learning (DL) algorithms, has greatly improved the performance of automated vision-disease diagnosis systems based on EHR data. However, current systems are unable to detect vision diseases at an early stage.
On the other hand, clinical text captures detailed diagnoses, symptoms, and other critical observations documented by physicians, making it a valuable resource for guiding lesion detection in medical images. Multimodal synergistic learning is the key to linking clinical text to medical images for lesion detection. This study proposes to leverage narrative clinical text, processed with clinical Natural Language Processing (NLP), to improve lesion-level detection in medical images.
The team hypothesizes that early-stage vision-threatening diseases can be detected with a smartphone-based fundus camera through multimodal learning that integrates clinical text and images, using clinical NLP to compensate for limited lesion-level labels. The ultimate goal is to improve the early detection and prevention of vision-threatening diseases in rural and low-income areas by developing a low-cost, highly efficient system that leverages both clinical narratives and images.
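To make the idea of multimodal visual-text fusion concrete, the sketch below shows one simple way such integration can work in principle: features from an image encoder and from a text encoder are concatenated and passed to a single classifier. Everything here is hypothetical, with toy stand-in encoders and arbitrary weights for illustration; the article does not describe the team’s actual architecture, and a real system would use learned deep-learning encoders rather than these hand-built features.

```python
# Illustrative sketch of late multimodal fusion (hypothetical; not the
# study's actual method). Image and text features are concatenated and
# scored by a linear classifier with a sigmoid output.
import math

def image_features(pixels):
    # Stand-in for a learned image encoder: simple intensity statistics.
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var]

def text_features(note, vocab=("hemorrhage", "exudate", "microaneurysm")):
    # Stand-in for a clinical-NLP encoder: keyword indicator features
    # extracted from the narrative note (vocab terms are illustrative).
    note = note.lower()
    return [1.0 if term in note else 0.0 for term in vocab]

def fused_score(pixels, note, weights, bias=0.0):
    # Late fusion: concatenate the two modalities' features, then apply
    # a linear scorer and a sigmoid to get a risk probability in (0, 1).
    x = image_features(pixels) + text_features(note)
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy usage: the weights are arbitrary, chosen only for illustration.
w = [0.01, 0.001, 1.5, 1.2, 1.8]
risk = fused_score([120, 130, 90, 200],
                   "fundus shows scattered microaneurysm", w)
```

In this toy example, a note mentioning a DR-related finding raises the fused score relative to the image alone, which is the intuition behind letting clinical narratives compensate for scarce lesion-level image labels.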
The UF Informatics Institute (UFII) SEED Funds are internal awards the Institute gives each year to Principal Investigators (PIs) across the UF campus. Their purpose is to support new collaboration among expert faculty on campus, giving them the resources needed to gather new or additional data for their “shovel ready” projects, with the aim of strengthening their standing for submissions to external sponsors such as NSF and NIH. Through this program, the Informatics Institute works to fulfill its goal of developing new collaborative relationships among faculty and bringing in more research dollars to further the University’s research initiatives.