Artificial intelligence (AI) and machine learning have the potential to fundamentally transform the delivery of healthcare, leading the FDA to turn its focus to AI-based medical devices. As a result, the FDA is announcing steps to consider a new regulatory framework specifically tailored to promote the development of safe and effective medical devices that use advanced AI algorithms.

These types of algorithms are already being used to aid in screening for diseases and to provide treatment recommendations. In fact, last year, the FDA authorized an AI-based device for detecting diabetic retinopathy, an eye disease that can cause vision loss. The agency also authorized a second AI-based device for alerting providers to a potential stroke in patients.

The authorization of these technologies was a harbinger of progress that the FDA expects to see as more medical devices incorporate advanced AI algorithms to improve their performance and safety.

“Artificial intelligence has helped transform industries like finance and manufacturing, and I’m confident that these technologies will have a profound and positive impact on healthcare,” says FDA Commissioner Scott Gottlieb, MD. “I can envision a world where, one day, artificial intelligence can help detect and treat challenging health problems, for example by recognizing the signs of disease well in advance of what we can do today. These tools can provide more time for intervention, identifying effective therapies and ultimately saving lives.”

The AI technologies the agency has granted marketing authorization to or cleared so far generally use "locked" algorithms, which don't continually adapt or learn each time the algorithm is used. A locked algorithm is modified by the manufacturer at intervals: the algorithm is "trained" on new data, and the updated version is then manually verified and validated. But there's a great deal of promise beyond locked algorithms that's ripe for application in healthcare, and that promise requires careful oversight to ensure the benefits of these advanced technologies outweigh the risks to patients, the FDA said in a statement.
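The difference between a locked algorithm and an adaptive one can be made concrete with a short sketch. Everything below is a hypothetical illustration of the update pattern described above, not FDA-specified code: the class, the linear "model," and the accuracy threshold are all invented for this example.

```python
# Illustrative sketch only: a "locked" algorithm is frozen between
# manufacturer releases; updates happen offline and pass a manual
# verification/validation gate before a new locked version ships.
# All names and numbers here are hypothetical.

class LockedModel:
    """Inference-only model: its predictions never change in the field."""

    def __init__(self, weights, version):
        self.weights = weights   # fixed at release time
        self.version = version

    def predict(self, x):
        # A simple linear score stands in for a real clinical model.
        return sum(w * xi for w, xi in zip(self.weights, x))


def manufacturer_update(old, new_weights, validation_set, threshold):
    """Periodic update cycle: retrain offline, then verify and validate
    the candidate before releasing it as a new locked version."""
    candidate = LockedModel(new_weights, old.version + 1)
    # Validation gate: accept the candidate only if it meets a
    # pre-specified accuracy threshold on held-out labeled data.
    correct = sum(1 for x, label in validation_set
                  if (candidate.predict(x) > 0) == label)
    if correct / len(validation_set) >= threshold:
        return candidate   # release the new locked version
    return old             # otherwise keep shipping the old version
```

An adaptive algorithm, by contrast, would fold field data into its weights continuously, with no frozen version between releases, which is exactly the behavior the proposed framework aims to oversee.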

“We are exploring a framework that would allow for modifications to algorithms to be made from real-world learning and adaptation, while still ensuring safety and effectiveness of the software as a medical device is maintained,” adds Gottlieb. “A new approach to these technologies would address the need for the algorithms to learn and adapt when used in the real world. It would be a more tailored fit than our existing regulatory paradigm for software as a medical device. For traditional software as a medical device, when modifications are made that could significantly affect the safety or effectiveness of the device, a sponsor must make a submission demonstrating the safety and effectiveness of the modifications.”

With AI, because the device evolves based on what it learns in real-world use, the FDA is working to develop a framework that allows the software to improve its performance while ensuring that changes meet the agency's gold standard for safety and effectiveness throughout the product's lifecycle, from premarket design through the device's use on the market. These ideas are the foundational first step toward a total-product-lifecycle approach to regulating algorithms that use real-world data to adapt and improve.

The FDA is also considering how an approach that evaluates and monitors a software product from premarket development through post-market performance could provide reasonable assurance of safety and effectiveness. Such an approach would allow the agency's regulatory oversight to embrace the iterative nature of these AI products while maintaining the agency's standards for safety and effectiveness.

As a first step in developing the approach, the agency has outlined the information it might require for premarket review of devices whose AI algorithms are modified based on real-world data. That information includes the algorithm's performance, the manufacturer's plan for modifications, and the manufacturer's ability to manage and control the risks of those modifications.

The agency also intends to review what's referred to as the software's "predetermined change control plan." This plan would provide the agency with detailed information about the types of modifications anticipated under the algorithm's retraining and update strategy, along with the methodology used to implement those changes in a controlled manner that manages risks to patients.
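As a rough illustration of the idea, a predetermined change control plan could be encoded as structured data that a proposed modification is checked against before it ships. The field names, modification categories, and AUC threshold below are invented for this sketch; the FDA discussion paper does not prescribe any such schema.

```python
from dataclasses import dataclass

# Hypothetical encoding of a predetermined change control plan.
# Every field name and category here is invented for illustration.

@dataclass
class ChangeControlPlan:
    anticipated_modifications: set   # e.g. {"retrain_weights", "tune_threshold"}
    retraining_cadence_days: int     # how often new field data is folded in
    min_validation_auc: float        # pre-specified performance objective

    def permits(self, modification_type, validation_auc):
        """A proposed change is in scope only if its type was anticipated
        in advance AND it meets the pre-specified performance objective."""
        return (modification_type in self.anticipated_modifications
                and validation_auc >= self.min_validation_auc)
```

The point of the sketch is the gate itself: changes outside the anticipated categories, or below the pre-specified objective, would fall outside the plan and trigger fuller review.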

The goal of the framework is to assure that ongoing algorithm changes follow pre-specified performance objectives and change control plans; use a validation process that ensures improvements to the performance, safety, and effectiveness of the AI software; and include real-world monitoring of performance once the device is on the market to ensure safety and effectiveness are maintained.
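The real-world monitoring goal can likewise be sketched as a simple post-market check that flags when field performance drifts below a pre-specified objective. The rolling window and the accuracy target below are illustrative assumptions, not values from the FDA framework.

```python
from collections import deque

# Illustrative post-market monitor: track a rolling window of outcomes
# and flag the device when accuracy falls below a pre-specified objective.
# The window size and threshold are invented for this sketch.

class PerformanceMonitor:
    def __init__(self, objective, window=100):
        self.objective = objective            # pre-specified accuracy target
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect flags

    def record(self, prediction_correct):
        self.outcomes.append(bool(prediction_correct))

    def in_compliance(self):
        """True while rolling accuracy meets the pre-specified objective."""
        if not self.outcomes:
            return True                       # no field data yet
        return sum(self.outcomes) / len(self.outcomes) >= self.objective
```

In practice such a monitor would feed back into the change control process: a drop below the objective is the signal that a retraining cycle, or a fuller regulatory submission, is needed.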

“We’re exploring this approach because we believe that it will enable beneficial and innovative artificial intelligence software to come to market while still ensuring the device’s benefits continue to outweigh its risks,” says Gottlieb. “We anticipate several more steps in the future, including issuing draft guidance that’ll be informed by the feedback on today’s discussion paper.”

Gottlieb adds: “While I know that there are more steps to take in our regulation of artificial intelligence algorithms, the first step taken today will help promote ideas on the development of safe, beneficial, and innovative medical products.”