Researchers Seek to Build Confidence into AI for Healthcare Under NSF Grant

By Allison Logan

A team of researchers at the University of Florida will explore ways to increase the trustworthiness and interpretability of artificial intelligence and machine learning in healthcare under a new $1.2 million grant from the National Science Foundation. The team will also investigate ways to use AI to diagnose neurodegenerative diseases earlier.

The project aims to provide a paradigm shift for explainable AI by explaining how and why a machine learning model makes its predictions. The researchers plan to take a proof-based approach, “which probes all the hidden layers of a given model to identify critical layers and neurons involved in a prediction from a local point of view.” They also plan to build a verification framework in which users can check the model’s performance and explanations.
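To give a sense of what probing hidden layers for a local explanation can look like in practice, the sketch below is a minimal illustration in PyTorch. The toy model, layer names and activation-times-gradient scoring are assumptions chosen for illustration; they are not the UF team’s actual proof-based method.

```python
# Illustrative sketch only: rank hidden-layer neurons by |activation * gradient|
# for one input, a simple local-attribution heuristic. Not the project's method.
import torch
import torch.nn as nn

# Toy classifier standing in for a diagnostic model (hypothetical).
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()

x = torch.randn(1, 32, requires_grad=True)  # one feature vector for one case

# Capture each hidden layer's activation with forward hooks.
activations = {}
def save(name):
    def hook(module, inp, out):
        out.retain_grad()              # keep gradients for attribution
        activations[name] = out
    return hook

for i, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(save(f"layer_{i}"))

logits = model(x)
pred = logits.argmax(dim=1)
logits[0, pred].backward()             # gradient of the predicted class score

# Score neurons and report the most influential ones per layer.
for name, act in activations.items():
    score = (act * act.grad).abs().squeeze(0)
    top = torch.topk(score, k=3)
    print(f"{name}: top neurons {top.indices.tolist()}")
```

The printed neuron indices are the ones whose activations most strongly drove this particular prediction, which is the “local point of view” the quoted description refers to: the explanation is specific to one input rather than to the model as a whole.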

The UF research team is led by principal investigator My T. Thai, Ph.D., a professor in the Department of Computer & Information Science & Engineering, and co-principal investigators Ruogu Fang, Ph.D., an assistant professor in the J. Crayton Pruitt Family Department of Biomedical Engineering, and Adolfo Ramirez-Zamora, M.D., an associate professor in the Department of Neurology. UF is partnering with Carnegie Mellon University on the project.

“AI has become an essential part of the modern digital era, especially toward enhancing healthcare systems. Unfortunately, when AI makes headlines, all too often it is because of problems with biases, inexplicability and untrustworthiness,” said Dr. Thai, the associate director of the Warren B. Nelms Institute for the Connected World. “Now it is time for us to take a deeper look to make AI-based decisions more explainable, transparent and reliable. I am excited about this opportunity to lead a multidisciplinary team to conduct such fascinating research.”

Researchers hope the project will improve the explainability, trustworthiness and verifiability of a variety of high-impact AI-based applications, such as image-based disease diagnostics and medical treatment recommendations.

“While AI research is making amazing strides toward enhancing healthcare systems, it can only make a real impact in medicine if the AI system is trustworthy and explainable,” Dr. Fang said, “offering not only 99.9% accuracy in diagnosing Alzheimer’s disease on a public dataset, for example, but also providing reliable diagnoses even for novel clinical cases in real life and a transparent decision-making process. Our goal in this NSF-funded project is to build and verify such a trustworthy and explainable AI system for the early diagnosis of neurodegenerative diseases.”

The researchers will develop computational theories, algorithms and prototype systems to make the machine-learning prediction process more transparent. They plan to systematically study the trustworthiness of machine learning systems, measured by “novel metrics such as adversarial robustness and semantic saliency,” in order to “establish the theoretical basis and practical limits of trustworthiness of machine learning algorithms.”
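As a rough illustration of how adversarial robustness can be quantified, the sketch below measures accuracy under a standard FGSM perturbation in PyTorch. The toy model, epsilon value and metric definition are assumptions for illustration only and are not the project’s actual metrics.

```python
# Illustrative sketch only: accuracy on FGSM-perturbed inputs, one simple and
# widely used proxy for adversarial robustness. Not the project's metric.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

def fgsm_robust_accuracy(model, x, y, eps=0.05):
    """Accuracy on inputs nudged by an FGSM step of size eps."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).detach()  # worst-case step along the loss gradient
    with torch.no_grad():
        adv_pred = model(x_adv).argmax(dim=1)
    return (adv_pred == y).float().mean().item()

# Synthetic data purely to show the call pattern.
x = torch.randn(128, 32)
y = torch.randint(0, 2, (128,))
print("accuracy under FGSM perturbation:", fgsm_robust_accuracy(model, x, y))
```

A large gap between clean accuracy and this perturbed accuracy is one concrete signal that a model’s predictions are not yet trustworthy enough for clinical use, which is the kind of limit the quoted study aims to characterize.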

“The ultimate goal of this project is to develop a non-invasive AI-based technique to detect neurodegenerative diseases early; these diseases are among the major causes of death among older people worldwide,” Dr. Thai said. “This can be done only if the AI-based decisions are reliable and transparent. Unfortunately, AI is prone to attacks and has been used as a black box. Thus, the very first essential step is to build an explainable and trustworthy AI. Once this principle is established, this project will have a significant impact on many application domains, not only in healthcare but in any application that uses AI for prediction.”

The results of this project will be incorporated into courses and summer programs at UF that the research team has developed, with specially designed projects to train students in trustworthy and explainable AI.