
The FDA offers new draft guidance for AI-enabled medical device developers.

To support the continued development and marketing of safe and effective medical devices enhanced by artificial intelligence, the U.S. Food and Drug Administration will publish draft recommendations on Tuesday covering the documentation and information that marketing submissions should include across the total product life cycle for regulatory oversight of safety and effectiveness.

Following last month's release of its final predetermined change control plan (PCCP) guidance for AI and machine learning submissions, the FDA is now giving medical device developers key recommendations on product design, development and evidence for premarket submissions.

The draft guidance, which publishes in the Federal Register on January 7, would be the first to offer total product life cycle recommendations for AI-enabled devices, tying together design, development, maintenance and documentation recommendations if and when finalized, the FDA said in its announcement Monday.

Overall, the agency said it encourages developers and innovators to engage early and often to guide activities throughout device life cycles – planning, development, testing and ongoing monitoring.

Having authorized more than 1,000 AI-enabled devices through established premarket pathways, the FDA has compiled its requirements, along with the agency's shared learnings, to serve as the "first point-of-reference for specific recommendations that apply to these devices, from the earliest stages of development through the device's entire life cycle," Troy Tazbaz, director of the Digital Health Center of Excellence within the FDA's Center for Devices and Radiological Health, said in a statement.

The agency said the new guidance will also address transparency and bias, with suggestions for thoughtful AI design and evaluation and recommendations on how to demonstrate bias risk management.

The FDA will take public comment on the draft guidance through April 7. It is specifically asking for feedback on the draft's alignment with the AI life cycle, the suitability of its generative AI recommendations, its approach to performance monitoring, and the types of information that should be provided to users of AI-enabled medical devices.

CDRH will hold webinars on February 18 to discuss the new draft guidance and on January 14 to discuss its final PCCP guidance, which was released in December.

In a blog post Tazbaz co-wrote last year with John Nicol, a digital health specialist in the FDA's Digital Health Center of Excellence, the two said life cycle management principles can help stakeholders navigate the complexities and risks associated with AI software in healthcare. Because AI continuously learns and adapts in real-world settings, that adaptability poses significant risks, "such as exacerbating biases in data or algorithms, potentially harming patients and further disadvantaging underrepresented populations," they wrote.

The FDA first sought to establish PCCPs for AI/ML devices to address the evolving risks involved in regulating AI-enabled medical devices. "The approach FDA is proposing in this draft guidance would ensure that important performance considerations, including with respect to race, ethnicity, disease severity, gender, age and geographical considerations, are addressed in the ongoing development, validation, implementation and monitoring of AI/ML-enabled devices," the center's then-deputy director Brendan O'Leary said at the time.
ON THE RECORD

"As we continue to see exciting developments in this field, it's important to recognize that there are specific considerations unique to AI-enabled devices," Tazbaz said in a statement.

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication. 
