New “AI Doctor” Predicts Hospital Readmission & Other Health Outcomes

Tool Designed to Smooth Hospital Operations for Better Patient Care

An artificial intelligence (AI) computer program can read physicians’ notes to accurately estimate patients’ risk of death, length of hospital stay, and other factors important to care. Designed by a team led by researchers at NYU Grossman School of Medicine, the tool is currently in use in its affiliated hospitals to predict the chances that a patient who is discharged will be readmitted within a month.

Experts have long explored computer algorithms meant to improve healthcare, with some having been shown to make valuable clinical predictions. However, few are in use because computers best process information laid out in neat tables, while physicians typically write in creative, individualized language that reflects how humans think.

Cumbersome data reorganization has been an obstacle, researchers say, but a new type of AI, the large language model (LLM), can “learn” from text without needing specially formatted data.

In a study publishing online June 7 in the journal Nature, the research team designed an LLM called NYUTron that can be trained using unaltered text from electronic health records to make useful assessments about patient health status. The results revealed that the program could predict 80% of those who were readmitted, a roughly 5% improvement over a standard, non-LLM computer model that required reformatting of medical data.
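As a rough, hedged illustration of the general recipe described above (not NYUTron or the authors’ code), the Python sketch below shows how a pretrained transformer with a two-class head could read a raw clinical note and return a readmission probability. The checkpoint name, the example note, and the untrained classification head are placeholders standing in for a model that would first be fine-tuned on labeled discharge notes.

```python
# Minimal sketch, not NYUTron: a pretrained encoder plus a 2-class head
# reading an unaltered clinical note. The checkpoint and example note are
# placeholders; the classification head is randomly initialized here and
# would need fine-tuning on labeled notes before its output meant anything.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # assumption: any pretrained encoder would do
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# The note goes in as free text -- no reformatting into tables.
note = "Pt w/ CHF exacerbation, d/c home on lasix 40mg, f/u in 1 wk."
inputs = tokenizer(note, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
readmission_probability = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Estimated 30-day readmission probability: {readmission_probability:.2f}")
```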

“Our findings highlight the potential for using large language models to guide physicians about patient care,” said study lead author Lavender Jiang, BSc, a doctoral student at NYU’s Center for Data Science. “Programs like NYUTron can alert healthcare providers in real time about factors that might lead to readmission and other concerns so they can be swiftly addressed or even averted.”

Jiang adds that by automating basic tasks, the technology may speed up workflow and allow physicians to spend more time speaking with their patients.

Large language models use specialized computer algorithms to predict the best word to fill in a sentence based on how likely it is that real people would use a particular term in that context. The more data used to “teach” the computer to recognize such word patterns, the more accurate its guesses become over time, Jiang adds.
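For a concrete sense of that word-filling objective, the brief Python sketch below asks an off-the-shelf masked-language model (a generic public checkpoint, not NYUTron, with an invented example sentence) to rank candidate words for a blanked-out slot:

```python
# Minimal sketch of the "predict the missing word" objective behind large
# language models, using a generic public checkpoint (not NYUTron).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The model scores each candidate word by how likely it is in this context,
# a skill learned from the word patterns in its training text.
for guess in fill("The patient was discharged with instructions to [MASK] daily."):
    print(f"{guess['token_str']:>12}  score={guess['score']:.3f}")
```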

For their study, the researchers trained NYUTron using millions of clinical notes collected from the electronic health records of 336,000 men and women who had received care within the NYU Langone hospital system between January 2011 and May 2020. The resulting 4.1-billion-word language “cloud” included any record written by a doctor, such as radiology reports, patient progress notes, and discharge instructions. Notably, language was not standardized among physicians, and the program could even interpret abbreviations unique to a particular writer.

According to the findings, NYUTron identified 85% of those who died in the hospital (a 7% improvement over standard methods) and correctly estimated the actual length of stay for 79% of patients (a 12% improvement over the standard model). The tool also successfully assessed the likelihood of additional conditions accompanying a primary disease (comorbidity index) as well as the chances of an insurance denial.

“These results demonstrate that large language models make the development of ‘smart hospitals’ not only a possibility, but a reality,” said study senior author and neurosurgeon Eric Oermann, MD. “Since NYUTron reads information taken directly from the electronic health record, its predictive models can be easily built and quickly implemented through the healthcare system.”

Oermann, an assistant professor in the Departments of Neurosurgery and Radiology at NYU Langone Health, adds that future studies may explore the model’s ability to extract billing codes, predict risk of infection, and identify the right medication to order, among other potential applications.

He cautions that NYUTron is a support tool for healthcare providers and is not intended as a replacement for provider judgment tailored to individual patients.

Funding for the study was provided by National Institutes of Health grants P30CA016087 and R01CA226527. Further funding was provided by the W.M. Keck Foundation Medical Research Grant.

Oermann is a consultant for Sofinnova Partners, a venture-capital firm that specializes in the development of biotechnologies, sustainability, and pharmaceuticals. He also has equity in Artisight Inc., a company that produces software for healthcare organization operations, and his spouse is employed by Mirati Therapeutics, which develops cancer therapies. The terms and conditions of these arrangements are being managed in accordance with the policies and practices of NYU Langone Health.

In addition to Jiang and Oermann, other NYU investigators involved in the study were Xujin Chris Liu, BS; Mustafa Nasir-Moin, BA; Duo Wang, PhD; Anas Abidin, PhD; Kevin Eaton, MD; Howard Riina, MD; Ilya Laufer, MD; Paawan Punjabi, MD; Madeline Miceli, MD; Nora Kim, MD; Cordelia Orillac, MD; Zane Schnurman, MD; Christopher Livia, MD, PhD; Hannah Weiss, MD; David Kurland, MD, PhD; Sean Neifert, MD; Yosef Dastagirzada, MD; Douglas Kondziolka, MD; Alexander Cheung, MA; Grace Yang; Ming Cao; Monda Flores; Anthony Costa, PhD; Yindalon Aphinyanaphongs, MD, PhD; and Kyunghyun Cho, PhD. Other study authors included Nima Pour Nejatian, PhD, MBA, at NVIDIA in Santa Clara, Calif., whose computer hardware was used to build NYUTron.
