AI is the new wave of innovation in health care. The technology holds promising applications to revolutionize all aspects of medicine.

AI Will NOT Replace Radiologists – Dr Vasantha Venugopal

Dr Vasantha Venugopal, Imaging Lead, CARING, Mahajan Imaging, analyses the claims from AI developers that threaten to replace radiologists.

The recent developments and news articles around applications of AI in radiology can be summarised in one statement: 'Hype kills value.' Last year at RSNA, I had the honour of delivering a talk on 'The Mirage of AI Substituting Radiologists.' Six months later, there are at least a dozen radiology AI applications with CE marking and half a dozen with FDA approval, but practically no one is using them. AI enthusiasts might say these are early times. From the deep learning pioneer Professor Geoffrey Hinton, who infamously compared radiologists to a 'coyote that had stepped over the cliff', to the recent headline-grabbing articles that cry 'AI beats radiologist', all of them do a great disservice to the revolutionary Deep Learning (DL) methodology.

Claim 1: Error rates in radiology will be reduced
The average error rate reported for radiology reads ranges anywhere from 20 to 30 per cent. Leonard Berlin has published extensively on this issue; he cites a real-time, day-to-day radiologist error rate averaging 3 to 5 per cent and a retrospective error rate among radiologic studies averaging 30 per cent. The average performance of AI algorithms reported in the literature is on the higher side of 90 per cent. So, the argument goes, even if AI misses some findings, it will still be better than radiologists. The big assumption here is that the errors made by AI will be comparable to those made by radiologists, and the problem with this assumption is two-fold. The errors made by radiologists are predictable, with patterns attributable to causes such as fatigue, lack of expertise, distractions or practice biases; they are hence preventable by appropriate interventions. The errors made by AI systems, owing to their inherent black-box nature, are not predictable and hence cannot be prevented by planned interventions. Until reliable mechanisms to explain the functioning of these algorithms are developed, this unpredictability will create a need for radiologists who monitor them.
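The predictable-versus-unpredictable distinction can be made concrete with a toy simulation (all numbers below are hypothetical, chosen only for illustration): if a radiologist's errors cluster in an identifiable condition, say end-of-shift fatigue, a targeted intervention such as double-reading those studies removes most of them, while the same intervention does nothing for errors scattered unpredictably across cases.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # studies read

# Radiologist: ~4% overall error rate, but errors concentrate in
# the 20% of studies read while fatigued (16% error rate there,
# 1% otherwise: 0.2 * 0.16 + 0.8 * 0.01 = 0.04).
fatigued = rng.random(n) < 0.20
rad_errors = rng.random(n) < np.where(fatigued, 0.16, 0.01)

# AI: the same ~4% overall error rate, but spread uniformly,
# with no identifiable pattern to intervene on.
ai_errors = rng.random(n) < 0.04

# Intervention: double-read every fatigued-shift study, catching
# those errors. It targets a *pattern*, so it helps the
# radiologist but leaves the AI's error rate untouched.
rad_after = rad_errors & ~fatigued
ai_after = ai_errors  # nothing to target

print(f"radiologist errors: {rad_errors.mean():.3f} -> {rad_after.mean():.3f}")
print(f"AI errors:          {ai_errors.mean():.3f} -> {ai_after.mean():.3f}")
```

The same intervention cuts the radiologist's residual error rate to roughly the 1 per cent baseline while the AI's stays near 4 per cent, which is the essence of the argument above.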

Claim 2: AI can see patterns and abstractions not evident to humans
The overarching theme of deep learning has been its ability to decipher representations from data and derive abstractions to an extent not possible for humans. This applies all the more to images, with their innumerable inherent contrasts. The raw data acquired at the scanner, whether sinogram data in CT or k-space data in MRI, are broken down, simplified and rendered readable by radiologists, and in that process a large amount of information is lost. Scientists, including engineers from our team, are now working on ways to apply DL directly to the raw data. Any reproducible success on that front, which seems more feasible now than it did a couple of years ago, will make radiology more of a field of data analytics. But as long as such abstractions still need to be understood by human surgeons and physicians to treat patients, radiologists will be needed as the bridge between imaging data and physicians.
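The lossy path from raw scanner data to a readable image can be sketched with a toy NumPy example (a synthetic 2-D "scan", not real MRI data): the image a radiologist reads is essentially an inverse Fourier transform of the raw k-space data, and any simplification that discards part of k-space discards information that a DL model working on the raw data could, in principle, still exploit.

```python
import numpy as np

# Toy "anatomy": a 64x64 image with a bright square lesion.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# The scanner acquires raw frequency-domain (k-space) data.
kspace = np.fft.fftshift(np.fft.fft2(img))

# Full reconstruction: the inverse FFT recovers the image exactly.
full_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Simplified pipeline: keep only the central 16x16 of k-space
# (low spatial frequencies) and zero the rest, a crude stand-in
# for the simplification described in the text.
mask = np.zeros_like(kspace)
mask[24:40, 24:40] = 1
trunc_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

# Information is lost: the truncated reconstruction deviates from
# the original, while the full round trip does not.
err_full = np.abs(full_recon - img).max()
err_trunc = np.abs(trunc_recon - img).max()
print(err_full < 1e-9, err_trunc > 0.1)  # True True
```

The full round trip is exact to floating-point precision, while the truncated version blurs the lesion's edges, which is why working directly on raw data is attractive.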

Claim 3: An AI assistant will enable one radiologist to do the reads done by many radiologists today
One more prevalent argument is that AI will empower and augment radiologists, enabling them to read 500 scans a day instead of 50. One radiologist will then be doing the job of ten, and nine radiologists will have been replaced by our AI assistant. The fundamental flaw in this argument is a lack of understanding of the ever-increasing demand for health care services. With rising awareness among the masses and deeper penetration of insurance coverage, more people are being reached by screening and diagnostic imaging services, and there is an exponential increase in demand for trained radiologists that can only be partly met by AI, if it delivers at all. The analogy here is pathology and lab medicine. Almost all biochemistry and lab tests are fully automated (you put in serum and get out a report); has that led to a reduction in the need for pathologists? On the contrary, the lab industry has grown exponentially, enabling patients to access these services across geographies at a fraction of the cost!

The final barrier that AI will never be able to cross to replace radiologists is professional and legal liability. As things stand today, developers of AI will not commit to taking responsibility for the performance of their algorithms, since they realise that most radiological diagnoses are arrived at after considering the clinical background and assimilating clinical information, a unique skill of radiologists that no machine will ever be able to replicate.

AI Will Replace Radiologists – Dr Vidur Mahajan

Dr Vidur Mahajan, Associate Director, Mahajan Imaging, Head (R&D), CARING

Any discussion around the replacement of a 'profession' requires a detailed analysis of what the profession entails and what tasks the professional performs. Only once this list of 'sub-tasks' is created is it possible to form an objective opinion about the potential 'replacement' of a profession.

Radiologists today broadly perform three tasks: taking measurements and identifying abnormalities on imaging scans; drawing inferences from these abnormalities about the pathologies and diseases a patient might be suffering from; and, most importantly, communicating and clarifying these results with patients and fellow clinicians. From a cognition standpoint, the three tasks are increasingly demanding, and hence that much more difficult for a machine to perform. That is, until Deep Learning came into being. Deep Learning is a machine learning technique that enables powerful computers to automatically draw inferences about patterns and relationships in a dataset without having to be 'programmed' explicitly by humans. For example, say we want to develop a Deep Learning algorithm to automatically detect pleural effusion on a chest X-ray. Traditional machine learning methods would involve defining what a pleural effusion looks like on an X-ray and trying to 'teach' those features to the machine. With Deep Learning, given enough examples (say, tens of thousands) of X-rays with and without pleural effusion, the machine should be able to learn by itself, without us having to define the features of a pleural effusion.
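The learning-from-examples workflow described above can be caricatured in a self-contained sketch (pure NumPy, with hypothetical synthetic "X-rays" rather than real data; a real system would use a deep convolutional network on tens of thousands of real images): the model is shown only labelled examples, never a definition of what an effusion looks like, and it learns a usable decision rule anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_xray(effusion):
    """Synthetic 16x16 'chest X-ray'. An 'effusion' crudely adds
    opacity (higher intensity) at the lung bases (bottom rows)."""
    img = rng.normal(0.0, 0.3, (16, 16))
    if effusion:
        img[12:, :] += 1.0
    return img.ravel()

# Labelled training set: the machine is never told *where* or
# *what* the finding is, only which images contain it.
X = np.stack([make_xray(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# A logistic-regression 'learner' trained by gradient descent
# (standing in for a deep network to keep the sketch tiny).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
    g = p - y                               # gradient of log-loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# Evaluate on fresh examples it has never seen.
Xt = np.stack([make_xray(i % 2 == 1) for i in range(200)])
yt = np.array([i % 2 for i in range(200)])
acc = (((1.0 / (1.0 + np.exp(-(Xt @ w + b)))) > 0.5) == yt).mean()
print(f"held-out accuracy: {acc:.2f}")
```

On this easy synthetic task the learned weights concentrate on the bottom rows by themselves, which is precisely the point: no human ever defined the feature.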

This is the true power of AI today: machines can learn by themselves, and therefore I believe that most of the tasks radiologists perform can be automated and done by machines. Today, technology exists with which anyone can develop automated measurement and segmentation tools. Nvidia, whose hardware underpins much of AI development, offers a toolkit called Clara with which computers can be trained to segment various body parts. Similarly, on the actual diagnosis front, there are innumerable papers claiming AI performance superior to radiologists, in part simply because there is so much inter-reader disagreement between radiologists.

Notable examples include chest X-ray, chest CT and head CT. In fact, we have also seen that computers have the unique advantage of being able to 'see' what humans cannot, and our group has created two such algorithms: one in which a PET-like image is generated from a CT image, and another in which a diffusion-weighted MR image is generated from a T2-weighted image. Lastly, many argue that patient communication and empathy are areas where human radiologists will always be needed. To that I ask: what percentage of your day do you spend in front of people versus in front of a screen? Machines are getting good at communicating, and soon your Alexa or Google Home might be more empathetic and compassionate than a doctor (or a radiologist), since it will know what mood you are in and what to say to improve it, all using deep learning and other algorithms in the background.

In summary, I would like to say three things. First, humans are very bad at predicting under exponential change: when we make a prediction, we make it with linear judgement, and we are unable to fathom the effect that exponential improvements in technology can have on our day-to-day lives. Cell phones are a great example of this. Second, radiologists sit at the top of the 'cognitive food chain': the work they do is possibly the most complicated and difficult.

That implies that by the time AI catches up and 'replaces' radiologists, imagine what other professions will already have been replaced: general physicians? Oncologists? Surgeons? Architects? Engineers? Software developers? Drivers? And so on. So, in a nutshell, radiologists should start adopting AI in their practice and use it as a tool to be better for their patients. Being replaced, or not, is a thing of the future, and when it happens we won't even find out, because there will always be something else to do. And finally, you may notice that I have refrained from stating a timeline.