2024-03-27

How to build an AI that works? (and get paid!!)

IRIA-ICRI webinar series, Part 2

This week’s webinar had diverse representation: Katelyn from industry, Babak representing start-ups, and Matt from academia. I will highlight five perspectives that I took away from it.

AI in Radiology Can Happen (even) Without Radiologists

This may seem counterintuitive to what I said last week. But if you know the radiology workflow and ecosystem, you will agree that much of what happens in a radiology department takes place before and after the radiologist sees the images. Most companies are fixated on onboarding & consulting only radiologists to build solutions. When Katelyn described how GE went about building & embedding five FDA-cleared algorithms on GE’s X-ray machines, their approach stood out. “We tried to crowdsource what people were looking for in AI by consulting and interviewing surgeons, technicians, administrators, nurses, IT admins and different stakeholders, including but not limited to radiologists,” said Katelyn, describing how they started their AI journey. Indisputably, radiologists need to be involved in translating these ideas into practice, but there are several solutions for which the radiologist may not be the right consultant. They may not know the default brightness-contrast, cropping, and rotation settings with which the X-ray equipment generates the images, or how many mouse clicks a technician needs to manipulate an image and make it presentable. In many settings, the decision to redo a scan due to poor acquisition happens without the radiologist’s knowledge. Herein lies the realization: there are many areas where AI can sneak into the radiology workflow without radiologists even noticing.

Start-ups pursuing Darwinian experiments are better off staying out of pre-clinical processes

We always suspected this. A couple of start-ups work in the space of image creation; most others work in what comes after (post-processing, triaging, diagnosing, prognosticating). As Dr. Matthew Lungren (Matt) put it, “Vendors have the ultimate control over the creation process. Companies that are looking at sinogram images or k-space images will have to concede this space to the vendors, who have in-house expertise on how these systems work and on the workflows themselves.” Even though there is a lot of scope for collaboration between start-ups & legacy companies, such as QA (Quality Assurance) & QI (Quality Improvement) of images, for now the game seems decided in the vendors’ favor. They started from pole position, and it seems they will eventually own this space of AI in image processing, at least as far as raw data is concerned.
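For readers unfamiliar with the raw-data stage Matt refers to: in MRI, what the scanner measures is k-space (spatial-frequency) data, and the image everyone downstream works with only exists after the vendor’s reconstruction. Here is a minimal numpy sketch of that round trip, using an invented phantom; it is a deliberate simplification of real reconstruction pipelines:

```python
import numpy as np

# Synthetic 2D "phantom": a bright square on a dark background,
# standing in for the patient being scanned.
image = np.zeros((128, 128))
image[48:80, 48:80] = 1.0

# What an MRI scanner actually measures is (to a first approximation)
# the 2D Fourier transform of the object: the k-space data Matt mentions.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Vendor reconstruction, at its simplest, is the inverse transform.
# PACS, and most start-ups, only ever see this reconstructed output.
reconstructed = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

print(np.allclose(reconstructed, image))  # True: a lossless round trip
```

Real pipelines add coil combination, filtering, and proprietary corrections on top of this transform, which is exactly the in-house expertise the quote points to.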

On-Edge AI For Diagnosis is an Urban Legend

On-edge AI (AI algorithms hosted on the scanner itself) is extremely productive for image quality checks and enhancements, and perhaps to a certain extent for triaging. But the idea of a smart scanner that auto-diagnoses several pathologies & generates reports is, to say the least, an urban legend.

As Katelyn said, “There is a big image processing or IQ [image quality] component on the machines. There are hundreds of different knobs on each machine that are turned to produce an image, and each of these turns might affect the performance of an algorithm. It is different from running an algorithm on an image pulled from PACS.”
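To make that concrete, here is a toy sketch (numpy only; the presets and value ranges are invented) of how a single on-device “knob”, the brightness-contrast window, changes the pixel statistics an algorithm actually receives:

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0, 4095, size=(256, 256))  # toy 12-bit detector output

def apply_window(pixels, center, width):
    """Map raw detector values to display range using a brightness-contrast
    window, one of the many 'knobs' set on the machine itself."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)

# The same exposure under two hypothetical on-device presets:
preset_a = apply_window(raw, center=2048, width=4096)
preset_b = apply_window(raw, center=1200, width=800)

# A model trained on PACS exports saw only one of these distributions;
# the same anatomy now arrives with very different pixel statistics.
print(preset_a.mean(), preset_b.mean())
```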

But that is not bad news at all for AI platforms and marketplaces. As Babak said, “There is a lot of engineering and overhead to get an AI embedded on equipment or even in a workflow, and they are not optimized for testing & deployment, unlike platforms built exclusively for this purpose. The downstream workflow needs a lot of testing & validation, for which integrated platforms are best suited.”

Early workflow validation is an idea whose time has come

My epiphany in the webinar came when Babak said, “Workflow validation should happen very early in the AI development cycle, maybe even before FDA processes. There are many FDA-cleared algorithms out there that are not adopted because they failed the workflow test.”

Let’s consider a hypothetical situation where a company decides to build an AI for triaging head CT scans with intracranial bleeds. It sounds valuable. Millions of dollars and thousands of man-hours can be spent building it. Yet even if it works accurately, it may not change outcomes, because in almost all workflows the scan is done by a technician who has learned to identify the bleeds that matter for triaging, making the algorithm just another layer of ‘advice’ on top. A simple workflow validation early in the development cycle would have picked this up. As audacious as this may sound, this, to me, is the idea whose time has come.
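What might that simple workflow validation look like? Even a back-of-the-envelope simulation, run before any model is built, can surface the problem. A hypothetical sketch, with all rates and timings invented for illustration:

```python
import random

random.seed(42)

def avg_minutes_to_read(n_scans=100_000, bleed_rate=0.05,
                        tech_sensitivity=0.95, ai_sensitivity=0.95):
    """Toy queue model: minutes until a bleed study gets read, with and
    without AI triage, when technicians already escalate suspected bleeds."""
    baseline, with_ai = [], []
    for _ in range(n_scans):
        if random.random() >= bleed_rate:
            continue  # only track studies that actually contain a bleed
        queue_wait = random.uniform(30, 120)  # unflagged studies wait in line
        tech_flag = random.random() < tech_sensitivity
        ai_flag = random.random() < ai_sensitivity
        baseline.append(5 if tech_flag else queue_wait)
        with_ai.append(5 if (tech_flag or ai_flag) else queue_wait)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(baseline), mean(with_ai)

print(avg_minutes_to_read())  # the AI only helps on the technician's misses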

The next big thing for DL is not CycleGAN but the old-school “Clinical Correlation”

One of the key questions the panelists addressed, albeit a little indirectly, is the dichotomy between data and algorithms: which needs to be better for a better AI?

Katelyn predicted the way forward for chest X-ray AI: “Expanding the training data, using multiple priors and possibly ground truth from CT scans instead of single X-rays, seems like the obvious way forward. The challenge for such algorithms, which need multiple inputs, is having good interoperable & integrated systems to provide those inputs at the time of using the algorithm.”

Matt was excited about these fusion technologies, where we provide contextual information from several modalities and sources for the algorithm to interpret. “But it is not an idea that can happen tomorrow. This is the gap in the industry that academia can fill,” he said. It is more than coincidence that our team at CARING sees ‘Dynamic Thresholding’ as one of our potential solutions in that direction; more on that later.
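For the curious, here is what the interface of such a fusion model might look like: a hedged PyTorch sketch (the architecture, names, and sizes are all invented) of a model that needs the current X-ray plus a prior study at inference time, which is precisely where Katelyn’s interoperability challenge bites:

```python
import torch
import torch.nn as nn

class FusionChestXrayModel(nn.Module):
    """Hypothetical two-input model: current X-ray + one prior study."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared CNN encoder applied to each study independently.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # The fusion head sees both studies at once.
        self.head = nn.Linear(feat_dim * 2, 1)

    def forward(self, current, prior):
        fused = torch.cat([self.encoder(current), self.encoder(prior)], dim=1)
        return torch.sigmoid(self.head(fused))

# The deployment catch: at inference time the PACS/RIS integration must
# actually deliver *both* inputs, not just the study that triggered the call.
model = FusionChestXrayModel()
score = model(torch.randn(1, 1, 224, 224), torch.randn(1, 1, 224, 224))
print(score.shape)  # torch.Size([1, 1])
```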

Eventually, Vidur succinctly summarized the webinar in six seconds: “‘Build and they shall come’ doesn’t work in medicine; there is a lot of engineering that goes into just putting the algorithms in place, and regulatory processes are here to stay.”

To add to it, the way forward for algorithms to get adopted (and paid for!) seems certain. It is not new frameworks, architectures, or CycleGANs (which are jargon to most radiologists anyway); it is getting the more familiar ‘clinical correlation’, or clinical context, in place.
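One illustrative way to put clinical context in place, my own sketch rather than anything the panelists or CARING described, is to let the clinical pre-test probability move the model’s operating point via Bayes’ rule in odds form:

```python
def posterior(model_score, clinical_prior, train_prevalence=0.05):
    """Fold the clinical pre-test probability into a model's output.
    Assumes the score is a calibrated probability at `train_prevalence`;
    converts it to a likelihood ratio, then applies Bayes' rule in odds form."""
    score_odds = model_score / (1 - model_score)
    train_odds = train_prevalence / (1 - train_prevalence)
    prior_odds = clinical_prior / (1 - clinical_prior)
    post_odds = prior_odds * (score_odds / train_odds)
    return post_odds / (1 + post_odds)

same_image_score = 0.30  # identical model output for two different patients
print(posterior(same_image_score, clinical_prior=0.30))  # trauma, anticoagulated: ~0.78
print(posterior(same_image_score, clinical_prior=0.01))  # routine follow-up: ~0.08
```

The same image score becomes actionable in a trauma patient and ignorable in a routine follow-up, which is what radiologists have always meant by clinical correlation.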

And we are in this for the long haul.

https://medium.com/@vasanthdrv/how-to-build-an-ai-that-works-and-get-paid-7bb715e1de1a
