A new forum will focus on responsible AI research. (Photo courtesy of Getty Images)

Developers of artificial intelligence tools appear to be taking concerns about the future of the technology seriously, at least enough to create advisory bodies.

A handful of companies recently announced the formation of the Frontier Model Forum, which is dedicated to “ensuring the safe and responsible development of large-scale machine learning models.” That responsibility includes AI’s role in healthcare; the forum’s partners cite cancer research as one of the major challenges they hope AI can address.

The partners, with OpenAI, Google, Microsoft and Anthropic among the initial founders, also plan to investigate cybersecurity challenges, an issue that has plagued several healthcare organizations recently, with patient privacy breaches triggering lawsuits.

The companies behind this initiative all have been involved in creating AI tools for healthcare organizations, including Microsoft’s Nuance, a clinical documentation tool; Google’s Med-PaLM, which answers medical questions; and OpenAI’s robot, EVE, which is planned for use in senior living communities.

The use of actual robots in healthcare is also now the subject of recent ethics studies.

This is not the first attempt to create some kind of large-scale evaluation of AI tools. While the Forum originates with developers, an outside vetting effort for healthcare AI tools comes from Dandelion Health, which is currently running a pilot program on AI and electrocardiograms.

One major concern about using AI in healthcare is that it won’t fix, and may even exacerbate, existing biases in care along racial or other demographic lines, because of the data the AI relies on.

In the coming months, the Forum’s organizers hope to create an advisory board, seek funding and consult with governments and public agencies, MobiHealthNews reports.