Navigating the path to AI at scale
by Andrea Fox
With so few health systems running artificial intelligence at scale, real-world advice can help healthcare organizations guide the deployment of their AI systems.
Eve Cunningham currently serves as group vice president and chief of virtual care and digital health for Providence, which includes virtual care enterprise service lines, hospital-at-home, remote patient monitoring programs and virtual nursing. She is the founder of MedPearl, a decision-support platform developed by the health system. She will be talking about how Providence approaches AI work at the 2023 HIMSS AI in Healthcare Forum, December 14-15 in San Diego.
Her panel session, “Navigating the Path from Innovation to Scale: Strategies for Success and Sustainability,” also includes Corey Lyons, senior staff solution engineer for healthcare at VMware, and Tariq Dastagir, assistant vice president of medical informatics and clinical trends at Humana.
Their discussion will focus on how health systems can approach AI – use case evaluations, governance, information security and beyond – and what they need to consider to develop, test and scale these technologies.
A structure for AI governance is key
While large language models are the latest fascination, Providence has focused on implementing AI capabilities that augment and assist clinicians.
“We shouldn’t forget about machine learning, natural language processing, optical character recognition, computer vision, robotic process automation,” and more, Cunningham explained.
Providence, which has 52 hospitals and 1,085 clinics across Alaska, California, Montana, New Mexico, Oregon, Texas and Washington in its portfolio, created a governance structure for AI that involves its clinicians and healthcare administrators and drives its path to AI implementation.
“We lay our foundations specifically in thinking about the fact that a clinician always needs to be in the loop,” she said of work on the clinical side.
The purpose of Providence’s AI governance structure – led by Sarah Vaezy, Providence’s chief digital and strategy officer, and Mark Premo, its chief data officer – is to allow the organization to innovate without bogging down the process with numerous committees.
She called it a “top-down, bottoms-up approach.”
There are four subgroups leading the AI charge at Providence – consumer-facing, workforce, administration and back office. Each evaluates different technology opportunities, use cases and more for its respective area of the workforce.
There have been several staff requests related to radiology use cases, she noted.
“There’s a lot of maturity in that space,” she said, so Providence is looking to accelerate the evaluation process for those implementations.
Cunningham said there have also been many requests to leverage LLMs to speed up workflows in clinical settings – automating “mundane, repetitive tasks” – and those are being considered.
With an AI governance framework established and a focus on ROI and key performance indicators, Providence seems to have moved past fatigue with AI-driven ideas, she said.
Validating needs and balancing resources
To evaluate each AI use case, Providence’s AI workgroups first ask how the idea would address challenges related to three strategic priorities outlined in the governance structure.
Those foundational priorities are workforce shortage and burnout, hospital throughput and capacity, and care fragmentation, Cunningham explained.
The workgroups validate that there is a problem for end users that the technology could potentially solve, and then ask, “Is it hitting the mark on addressing those issues?”
Then, they validate the demand for a system or solution – its impact relative to other priorities, the resources needed, the rate at which it might be adopted and the ease of integration into Providence’s electronic health record workflows, she said.
“If it’s a really narrow use case that really has a very limited audience or limited impact, but it’s going to require a lot of resources, you know that might not be the best thing for us to start with.”
There could also be additional adoption issues.
“It might work really well for translating certain types of labs or imaging studies, and maybe not others, so there would be some adoption issues that we would potentially need to consider,” Cunningham said.
When looking at how a potential AI system may be developed – “build versus buy” – and Providence decides to work with a vendor, the workgroups evaluate the vendor’s maturity in the marketplace as well as the information security aspects, said Cunningham.
“Does it make sense, or are we going to be like a dev shop for a vendor as they build a solution, which creates some administrative burden on the people involved with implementation?”
A return on investment can be hard to measure for AI tools that speed up workflows but don’t reduce inbox messages or staffing, she noted.
Sometimes, great AI system ideas do not get adopted by Providence.
“We’ve actually spent money and resources on incubating some things, or working with the vendor, and then saying, ‘You know what? This isn’t giving us the result that we expected,'” she explained.
Those solutions do not go to scale, and the workgroups shift to prioritizing other AI opportunities. But those efforts are not all for naught.
“We’ve been able to learn from them, and then take those learnings and bring it to a better solution,” said Cunningham.