Providers leverage AI to address high-risk patients at the right time

BY RACHEL Z. ARNDT | MAY 26, 2018
When patients suffer cardiac arrest in a hospital, only about a quarter survive. Physicians at Ochsner Health System have sought to avoid this statistic altogether by using artificial intelligence. In what's one of the most successful examples of using AI to improve patient care, the New Orleans-based health system has reduced all codes—including cardiac arrest—in certain hospital units by 44%.

"Instead of just reacting to a code, we're able to identify individuals at high risk within hours of that happening," said Dr. Richard Milani, Ochsner's chief clinical transformation officer.

As the largest not-for-profit academic health system in Louisiana, Ochsner sees more than 1 million patients across its network every year. It also serves about 4,000 international patients annually. When Ochsner executives realized how much data they had at their disposal, they decided to make the most of it by applying AI to some of it—125,000 patients' worth, to be exact.

"We could've used traditional statistical techniques, but we realized we could start to leverage AI to make these predictions," said Jonathan Wilt, Ochsner's chief technology officer of innovation.

So developers at the health system drew on data from their Epic electronic health record system and used the vendor's machine-learning platform to create a predictive model that gives an approximately four-hour warning of adverse events. In this case, the health system developed the algorithm to run on the Epic platform, whereas in other cases, Epic offers both the platform and the algorithm.
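
The article doesn't describe the model's internals. As a rough sketch of what an early-warning classifier trained on EHR data might look like, the Python example below predicts whether a deterioration event occurs within the next four hours; the feature names, the "event_within_4h" label and the gradient-boosted classifier are illustrative assumptions, not details of Ochsner's or Epic's actual system.

# Illustrative sketch only: a generic early-warning model over EHR-style vitals,
# predicting deterioration within the next four hours. Feature names, the label
# column and the model choice are assumptions, not Ochsner's or Epic's system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

FEATURES = ["heart_rate", "resp_rate", "systolic_bp", "spo2", "temperature", "age"]

def train_early_warning_model(df: pd.DataFrame) -> GradientBoostingClassifier:
    """Expects one row per patient-hour, with FEATURES plus a binary
    'event_within_4h' label marking whether a code followed within four hours."""
    X, y = df[FEATURES], df["event_within_4h"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    return model

def should_alert(model, latest_obs: pd.DataFrame, threshold: float = 0.8) -> bool:
    """A risk score above the threshold would page the Rapid Response Team."""
    return model.predict_proba(latest_obs[FEATURES])[0, 1] >= threshold

In practice, the alert threshold would be tuned to balance missed events against alarm fatigue on the receiving team.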

When members of Ochsner's Rapid Response Team receive alerts about at-risk patients on their smartphones or Apple Watches, they jump into action to intervene. "It's helping our physicians do better," Wilt said. "It's shifting their attention to the right people at the right time."

But before any of that could happen, developers first had to make sure they had standardized data to train the model. Luckily for the developers, because the system has been on the same EHR for several years, workflows have been relatively standardized. "We're reaping some of the benefits of that standardization," said Ochsner Chief Information Officer Laura Wilt, who is married to Jonathan.

Nevertheless, developers had to spend some time making sure the data were clean.

"A big part of the process is going through and identifying when workflows may have changed so you can account for that," Jonathan Wilt said.

While prediction models themselves aren't new, machine-learning-driven versions of them are, said Greg Kuhnen, senior director at the Advisory Board Co. "Machine learning gives you better predictions, which help you make better decisions. There's a lot of opportunity in clinical quality and efficiency," he said. "Being able to predict who needs attention—that's right in the sweet spot of what AI and machine learning can do."

Jonathan Wilt and his team are now looking into predicting other adverse events, such as C. difficile infections and pressure ulcers. Development and implementation time is fast—about two or three months for each model.

"Our ability to standardize processes a long time ago is making us well positioned to extract a model from the data," he said.
With any of these models, creating it is one thing—putting it into practice is another. "When the model's done, the next step and the question is: Can we do something to alter what would have been a negative outcome?" Milani said. "People all want to do the right thing, but you have to go through a lot of processes to change workflows," he added. "One has to create the teams and train them. That's a whole new endeavor that never existed."

So it's not necessarily the technology holding these techniques back, but the logistics and governance.

"A key to being successful is understanding the operational impact, training the users, and understanding the outcomes you're driving towards," said Seth Hain, Epic's director of analytics and machine learning. "Just like any other type of quality improvement, you want to understand how your end users will interact with it and then verify success on the other side."

There are also questions of liability, especially given that some of these models have what's called a "black box" problem: They work, but how they do that—getting from the data to an answer—remains hidden. "There are challenges around these systems being able to explain themselves," Kuhnen said. "The counter-argument is that, if we're honest, there's a lot of medicine we don't understand in the first place."
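
One partial answer to that opacity, offered here only as a generic illustration and not as Epic's or Ochsner's approach, is to measure how strongly each input drives a model's predictions, for example with scikit-learn's permutation importance.

# Generic illustration of peeking inside a "black box" classifier:
# permutation importance measures how much held-out performance drops
# when each feature is shuffled. Not Epic's or Ochsner's reporting method.
from sklearn.inspection import permutation_importance

def rank_features(model, X_test, y_test, feature_names):
    result = permutation_importance(model, X_test, y_test,
                                    scoring="roc_auc", n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, importance in ranked:
        print(f"{name:>12}: {importance:.3f}")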
