illustration of a face
(Photo courtesy of Getty Images)

Even if artificial intelligence tools deployed in healthcare contexts produced no errors and delivered accurate results, they still could be problematic if they don’t fit well with an organization’s cultural and procedural norms, researchers argue.

A team led by experts at Carnegie Mellon University has proposed a new framework for evaluating AI tools in healthcare, one that considers how well AI fits into existing healthcare systems and its practical utility for patients.

“AI tools are of great interest to healthcare organizations for their potential to improve patient care,” the study authors wrote. “Yet their translation into clinical settings remains inconsistent. One of the reasons for this gap is that good technical performance does not inevitably result in patient benefit.”

Such a “normative” framework could be used whenever healthcare administrators, such as senior care operators, are considering a new AI tool, the researchers argue.

In addition to valuing concepts such as an AI’s explainability and transparency, the framework asks that AI tools satisfy the following guidelines:

  • A well-defined use case, such as how an AI tool may allow a healthcare system to function more efficiently.
  • Clear specification of the task the AI tool performs. 
  • Relevant performance benchmarks.
  • Clear parameters and limitations for the AI tool.
  • A protocol for monitoring how the AI tool is deployed.

AI tools increasingly are cropping up in long-term care settings, in roles such as identifying residents’ behavioral patterns and assisting with care coordination.

Concerns about how even the best AI tools may disrupt existing care regimens echo similar experiences with how healthcare companies adopted electronic health records. Many electronic health records tools ended up complicating care because clinicians and patients or residents had trouble using some interfaces, experts have noted.

Many AI developers already are aware of the challenges of integrating tools into existing healthcare systems. Not only do AI tools not exist independently of human clinicians and caregivers, but AI should be “infused” into existing routines, one CEO told the McKnight’s Tech Daily last month.