From gaining consensus on the concept of responsible artificial intelligence to working through semi-autonomous health AI use cases, industry and government appear to be heading toward shared beliefs as we enter 2025, according to Coalition for Health AI CEO Brian Anderson.

We need our politicians and governmental bodies to be aware of the kinds of systems being developed in the private sector, and of what responsible AI in healthcare should look like, Anderson said in a statement to Healthcare IT News.

Congruence to date

The technology sector and federal medical regulators have spent a lot of time studying AI model cards, or "AI nutrition labels" – a digestible form of communication used to identify the important aspects of an AI model's design and development for users.

We spoke with Anderson on Thursday, when CHAI released the open-source edition of its draft AI model card, about the government's recent moves and his ideas for developing a public-private model for the safe deployment of healthcare AI.

It's wonderful, he said, to see the intersection between the public sector regulatory community and the private sector innovation community. Anderson said he is hopeful that the various regulators are, and will continue to be, moving in the same direction as CHAI, which is seeking feedback on its open-source draft model card this month and plans to release an update sometime in the next six months.

The document flows from the Office of the National Coordinator for Health IT's Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information (HTI-1) rule. Specifically, Anderson cited a degree of alignment on AI requirements with medical device regulators. Earlier this week, the U.S. Food and Drug Administration included an example of a voluntary AI model card in its draft total product lifecycle recommendations – design, development, maintenance and documentation – for AI-enabled devices.
"One of the exciting things about looking at the FDA example model card and also looking at ONC's HTI-1 rule is how closely aligned CHAI's model card is," Anderson said.

According to the FDA, a model card for AI-enabled medical devices can address communication gaps by presenting important information to healthcare users (patients, clinicians, regulators and researchers) and the general public. In its draft guidance, the FDA stated that "research has demonstrated that the use of a model card can increase user trust and understanding." Model cards serve as a way to consistently summarize the key characteristics, performance and limitations of AI-enabled devices, and can be used to describe them succinctly.

Work still needs to be done, however, on the more rigorous healthcare AI tasks ahead, such as government and industry alignment on AI assurance labs and how they will function. "Private and public sector stakeholder groups must work together to inform one another," Anderson argued. The incoming administration, "and every leader that I've spoken to in the Senate and in the House, is very interested in understanding how they can partner in a public-private partnership with organizations like CHAI," he said.

It's "very likely that the metrics and methodologies we use to evaluate those emerging capabilities will need to change or need to be created," he said, adding that he is encouraged that the incoming administration is willing to work with and alongside CHAI, and that he hopes it will provide that time.

Before the FDA released its draft total lifecycle guidance for medical devices, it had already finalized the Predetermined Change Control Plan review for AI and machine learning submissions, which avoids the need for additional marketing submissions.
"As we think about the various sections of the model card, different data will need to be included – different evaluation results, different metrics…different kinds of use cases," Anderson said. "That kind of flexibility is going to be really important," he added, noting that an AI model or system's nutrition label will require regular updates, "certainly, at least at an annual level."

For providers, there is a great deal of complexity to consider when using AI-enabled clinical decision-support tools to minimize mistakes or oversights. Working out the right level of transparency is something the industry will have to keep grappling with, he said. User-friendly model cards, for instance, may not indicate whether a model was trained on a particular set of attributes that might be relevant to a particular patient.

"You can include all the information under the sun in these model cards, but the vendor community would be at risk of disclosing intellectual property," he said. Therefore, Anderson said, "It's a balance of protecting the IP of the vendor and providing the customer, the doctor in this case, with the necessary information to make the right choice about whether or not to use that model with the patient they have in front of them." He acknowledged that "the causal relationship is really profound on how that might affect a certain outcome for the patient in front of you."

HTI-1's 31 categorical areas, which are aimed at electronic health records and other certified health IT, are a "really great starting point," Anderson said, but they are insufficient for the many use cases of AI, particularly in the direct-to-consumer space. There will be a lot of new use cases emerging over the next year, he said, and the model cards CHAI is developing are intended to be used fairly broadly across them.
Over the next two to five years, however, evaluating healthcare AI models is going to get even more complicated, raising questions about how evaluators define "human flourishing."

Anderson said he believes those use cases are going to be intimately tied to health AI agents, and that developing trust frameworks around them will require the support of "ethicists, philosophers, sociologists, and spiritual leaders" to help advise technologists and AI experts in thinking through an evaluation framework for those tools. In the future of agentic AI, he said, "developing the kind of framework for evaluation will be a real challenge."

"It's a very intimate personal space, and how do we build that trust with those models? How do we evaluate those models?"

Beginning next year, Anderson said, CHAI will spearhead a "very intentional effort" to bring together community members and stakeholders that you wouldn't necessarily think of first as the kinds of stakeholders you would include in an effort like this. There is no formula for making sure these models are in line with our values, he said. "I don't know how to do that yet. I don't think anybody does, yet."

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.