Written by: Jayati Dubey
January 8, 2025
The US Food and Drug Administration (FDA) has released its first-ever draft guidance addressing the use of artificial intelligence (AI) in drug and biological product development.
This guidance provides recommendations for leveraging AI to support regulatory decisions regarding a product’s safety, effectiveness, and quality.
FDA Commissioner Robert M. Califf, MD, emphasized the agency's dedication to advancing innovation while maintaining high scientific and regulatory standards.
“With the appropriate safeguards in place, artificial intelligence has transformative potential to advance clinical research and accelerate medical product development to improve patient care,” he stated.
The FDA’s draft guidance reflects the growing role of AI in drug development. Since 2016, there has been a significant increase in the use of AI in regulatory submissions.
AI has proven valuable in predicting patient outcomes, identifying predictors of disease progression, and processing large datasets from real-world evidence or digital health technologies.
One of the central aspects of the draft guidance is the concept of AI model credibility. The FDA defines credibility as trust in the performance of an AI model for its intended context of use.
The context of use outlines how the AI model addresses a specific regulatory or clinical question.
To ensure credibility, the FDA provides a risk-based framework for sponsors to assess and validate the AI model’s output.
This includes determining the necessary activities to establish that the model’s results are reliable and applicable to its defined use.
The FDA further advises sponsors to engage with the agency early in the drug development process to discuss AI credibility assessments. This approach aligns with the FDA's current review practices for applications involving AI components.
The draft guidance is the result of extensive collaboration among the FDA’s human and animal medical product centers, the Office of Inspections and Investigations, the Oncology Center of Excellence, and the Office of Combination Products.
This joint effort aims to ensure a consistent regulatory approach across the agency.
The FDA has also incorporated feedback from stakeholders such as sponsors, technology developers, manufacturers, academics, and other experts.
Specifically, the guidance was informed by discussions at an FDA-sponsored workshop organized by the Duke-Margolis Institute for Health Policy in December 2022.
Additionally, more than 800 public comments on two discussion papers published in May 2023, as well as the FDA’s experience with over 500 AI-related submissions since 2016, shaped the guidance.
The FDA is seeking public input on the draft guidance, titled Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.
Interested parties have 90 days to provide comments, particularly on how well the draft aligns with industry practices and whether the engagement options provided for sponsors and stakeholders are sufficient.
The agency will consider public feedback before finalizing the guidance. By soliciting input, the FDA aims to ensure the recommendations are practical and effective for stakeholders working with AI in drug development.
In a parallel effort, the FDA has also published draft guidance on AI-enabled medical devices.
Overall, the draft guidance gives sponsors a foundation for navigating the complexities of integrating AI into their development pipelines while adhering to robust regulatory standards.
Stay tuned for more such updates on Digital Health News.