The debate over how the FDA should regulate emerging AI-based healthcare solutions is attracting opposing, yet equally valid, points of view.
Bakul Patel, former head of the FDA Digital Health unit, laid out the rationale for the FDA's lighter-touch approach. He indicated that the review of submitted AI-based technologies includes an evaluation of the data used to train and validate the models, and that the FDA is looking to improve this review through novel ways of independently assessing the data prospectively, before the models are trained. He suggested that, given the ongoing improvement of these technologies, the FDA requires post-marketing surveillance of the benefits and risks of approved products. I discuss some of his comments in the chapter on the drivers of AI adoption.
The FDA is also providing an avenue for the users of such technologies to give the agency feedback about specific products. Patel suggested that the thresholds for clearance are matched to the claims and risks of those products, and that because these tools will evolve over time, real-world exposure is needed to improve the models. Substantial Equivalence clearance is used for follow-on submissions: the same intended use with different performance, or the same performance for new indications. Another avenue being explored is pre-registration of AI technologies aiming for FDA submission. One barrier is that AI researchers building classifiers often do not know in advance what hypothesis they are testing, which makes it harder to predict the endpoints to be pre-registered. Patel indicated that one idea is for submitters to place their training and validation data in escrow, allowing experts to review the data and opine on its integrity, size, diversity, and so on.
Another major issue is the concept of "learning AI": a model whose performance and output change over time as it ingests more data, produces more output, and receives feedback. This happens in many of the AI models currently in use in consumer industries such as shopping, entertainment, and real estate. In healthcare, however, learning AI can have unintended consequences that pose risks to patients, so novel regulatory pathways need to be created for it.
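The regulatory concern can be made concrete with a toy sketch. Below, a deliberately simplified "learning" classifier updates its decision threshold as it ingests post-market data, while a "locked" model keeps the threshold fixed at approval time; the same patient can then receive different outputs from the two versions. All values and the thresholding rule are invented purely for illustration, not drawn from any actual device.

```python
# Hypothetical sketch: why a "learning" model's output can drift after deployment.
# A toy classifier sets its decision threshold at the mean of the data it has seen.

def make_threshold(values):
    """Set a decision threshold at the mean of observed values (toy rule)."""
    return sum(values) / len(values)

# Approval-time training data (e.g., a biomarker level; values are invented).
training_data = [1.0, 2.0, 3.0, 4.0]
locked_threshold = make_threshold(training_data)  # frozen at clearance

# After deployment, new data arrive from a shifted patient population.
post_market_data = [6.0, 7.0, 8.0]

# The learning model folds the new data into its threshold; the locked one does not.
learning_threshold = make_threshold(training_data + post_market_data)

# The same patient measurement is now classified differently by the two versions.
patient_value = 3.0
locked_decision = patient_value > locked_threshold      # flagged
learning_decision = patient_value > learning_threshold  # not flagged

print(locked_threshold, learning_threshold)
print(locked_decision, learning_decision)
```

The point of the sketch is that the learning model's behavior at the bedside is no longer the behavior that was evaluated at clearance, which is exactly what frameworks like the Predetermined Change Control Plan (discussed below) try to bound in advance.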
On February 7, 2020, the FDA announced its marketing authorization, through the De Novo pathway, of the first cardiac ultrasound software that uses artificial intelligence to guide users. This breakthrough device is notable not only for its pioneering intended use but also for the manufacturer's use of a Predetermined Change Control Plan to incorporate future modifications. (36) The Predetermined Change Control Plan is a framework for modifications to AI/ML-based Software as a Medical Device (SaMD). As discussed above, the SaMD Pre-Specifications (SPS) describe "what" aspects the manufacturer intends to change through learning, and the Algorithm Change Protocol (ACP) explains "how" the algorithm will learn and change while remaining safe and effective.
The discussion paper used the term Good Machine Learning Practice, or GMLP, to describe a set of AI/ML best practices (e.g., data management, feature extraction, training, interpretability, evaluation, and documentation) that are akin to good software engineering practices or quality system practices. Development and adoption of these practices is important not only for guiding the industry and product development, but also for facilitating oversight of these complex products through manufacturers' adherence to well-established best practices and/or standards.
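To make the documentation side of such practices tangible, the sketch below shows one way a manufacturer might tie a model version to its training-data provenance, evaluation metrics, and release sign-off in a structured, auditable record. The field names, values, and structure are invented for illustration; GMLP does not prescribe any particular format.

```python
# Hypothetical sketch of a GMLP-style documentation record linking a model
# version to its data, evaluation, and accountability trail.
# Fields and values are invented for illustration, not an FDA-mandated format.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    model_version: str           # which release this record describes
    training_dataset: str        # data management: provenance of training data
    features: list = field(default_factory=list)   # inputs the model uses
    evaluation_metrics: dict = field(default_factory=dict)  # held-out performance
    reviewed_by: str = ""        # documentation: accountability for the release

record = ModelRecord(
    model_version="1.2.0",
    training_dataset="echo-studies-2019Q4 (de-identified)",
    features=["frame_quality", "probe_angle"],
    evaluation_metrics={"sensitivity": 0.94, "specificity": 0.91},
    reviewed_by="QA lead",
)

# Serialize the record for the quality-system audit trail.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records under version control alongside the model artifacts is one simple way adherence to best practices can be demonstrated to an external reviewer.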