Even accurate AI models need human caregiver context, experts state. (Photo courtesy of Getty Images)

Is artificial intelligence too limited, or could it actually be too accurate? Either way, AI needs guardrails to be useful to patients in healthcare, and those guardrails involve human intervention, two recent reports state.

One of the reports plays on a common theme of AI criticism: that current technology produces inaccurate or biased results too often to be trusted to make decisions without “keeping a human in the loop.”

That analysis comes from the new AI Task Force of the Society of Nuclear Medicine and Molecular Imaging, which recently released two papers on the ethics of AI-enabled medical devices.

Long-term care residents often are concerned about AI and the possibility it will replace their primary caregivers, McKnight’s Tech Daily reported earlier this year.

The second analysis raises the possibility that AI is highly accurate but produces diagnoses whose utility can’t be verified until it is too late. For older adults with cancer or stroke, the timelines for making treatment decisions and avoiding fatalities are short.

AI’s potential to produce unheeded warnings creates a “Cassandra problem,” noted F. Perry Wilson, MD, an associate professor at the Yale School of Medicine, in a recent video for Medscape. In Greek myth, Cassandra was a Trojan prophet whose accurate predictions of doom were ignored until it was too late.

“In some simpler cases, machine-learning models have achieved near-perfect accuracy — Cassandra-level accuracy,” Wilson said. “A prediction is useless if it is wrong, for sure. But it’s also useless if you don’t tell anyone about it. It’s useless if you tell someone but they can’t do anything about it. And it’s useless if they could do something about it but choose not to.”

The “Cassandra” problem may be most acute in disease diagnosis, although within long-term care, AI-enabled sensors already are being deployed to great effect to help reduce falls among residents.

The reports are among a flurry of analyses over the past few months that seek to define the benefits of AI going forward.

Many of those analyses have addressed the fear that AI will be used to replace humans or caregivers outright, or that an overreliance on the tech will replace novel human insight. 

What appears to unite relative skeptics such as the AI Task Force and techno-optimists such as Wilson is the belief that AI can’t be expected to produce results in healthcare all on its own, like a wind-up toy; it still relies on human interpretation, not the other way around.