Barriers to AI Adoption in Healthcare: Burden of Evidence
Ronald M. Razmi, MD
January 11, 2022
For AI-based solutions to become part of the daily practice of medicine, a myriad of technical, economic, regulatory, and other barriers must be overcome. Many of these have yet to be addressed sufficiently for the applications of AI in medicine to truly take off. So, even after the data issues we’ve discussed here are resolved, many more challenges remain.
One of the main challenges will be that the consumers of these solutions will want real-world evidence of their efficacy. That means healthcare providers will ask for solid evidence that the AI-based solution was used in similar settings, as part of a trial, and that it performed as expected, improved outcomes or made the user more efficient, and carried no serious downsides. Downsides would include producing the wrong output, being difficult to use, or slowing down the practitioner.
Many of the solutions that have been approved by the FDA gained that distinction by showing that the model was as good as the clinician at detecting abnormalities or making predictions. However, this data was often generated in retrospective settings, where the model was tested on historical data rather than in a prospective, real-world setting where not only the model’s accuracy but also its effect on patient outcomes, clinical decision making, and efficiency could be assessed. This explains, in part, the slow adoption of these solutions by clinicians, payers, and health systems.
This is not just an academic issue insisted on by elites who want to set a high bar for new innovation before it enters the real practice of medicine, with studies done at their institutions and their blessing secured. Take IBM Watson Health’s cancer AI algorithm (known as Watson for Oncology). Used by hundreds of hospitals around the world to recommend treatments for patients with cancer, the algorithm was trained on a small number of synthetic, non-real cases with very limited real-world input from oncologists. Many of its actual treatment recommendations were shown to be erroneous, such as suggesting the use of bevacizumab in a patient with severe bleeding, an explicit contraindication and ‘black box’ warning for the drug.
This example also highlights the potential for major harm to patients, and thus for medical malpractice, from a flawed algorithm. Instead of a single doctor’s mistake hurting one patient, a machine algorithm can induce iatrogenic risk at vast scale.