Before healthcare providers begin an AI pilot, it's important to determine which metrics they need to track. Many health systems don't actually do this, noted Bill Fera, principal and head of AI at Deloitte, in an interview last month.
By establishing the right metrics early on, he explained, providers can quickly retire a pilot if the metrics show the AI tool isn't worth using. Many health systems don't know which AI pilots to scale up and which ones to stop because they track the wrong metrics or aren't tracking any metrics at all, Fera said.
“Pilots that don't inherently create value come with a lot of pain. We work with our clients to prioritize use cases that make a big difference from a revenue perspective, and we've been serious about establishing the right metrics around those use cases,” he said.
In an interview this month at the HIMSS conference in Orlando, David Vawdrey, Geisinger's chief data and informatics officer, agreed with Fera. He said healthcare organizations need to spend more time developing plans to evaluate the success of technology pilots.
In Vawdrey's view, the first question a health system must ask itself before implementing an AI tool is: “What problem are we trying to solve?”
“I don't think it matters what you deploy if the problem is simply, 'We want to deploy AI.' You can write a press release and declare victory. But if you want to really make an impact, if you care about results, you need to track the right metrics,” he said.
Vawdrey noted that Geisinger's most important outcomes relate to patient care and safety.
So for the algorithms Geisinger uses for things like cancer screening and flu complications, he said, the health system tracks effectiveness in terms of hospitalizations prevented, lives saved, and spending reduced.
“These are things we don't usually think about. As an industry, we sometimes throw technology at a problem and then want to sort it out later and evaluate whether it works. In many cases, that's not an effective strategy,” Vawdrey said. “We always try to have a rigorous evaluation plan before implementing anything.”
To develop a strong evaluation plan, he explained, a health system needs to determine the problem it is trying to solve, which outcomes matter most, what success looks like, and which numbers it will watch to see whether the tool is working.
If a tool isn't performing well, health systems need to determine whether that's due to a strategy issue or an execution issue, Vawdrey added. If the problem is related to execution, he noted, there is ample opportunity to rework the pilot and try again.