How To Build An AI That Works? (And Get Paid!!)

IRIA-ICRI webinar series — Part 2

  • This is a commentary on the IRIA-ICRI webinar on the same topic, which took place on May 21, 2020. The presenters were Dr. Matthew Lungren (Director, Stanford AI in Medicine & Imaging lab), Katelyn Nye (GM, AI for X-Ray, GE Healthcare), and Babak Rasolzadeh (Director of Product & ML, Arterys). The session was moderated by Dr. Vidur Mahajan (Head of R&D, Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING)).
  • Part 1 of the series can be found here

This week’s webinar had diverse representation: Katelyn from the industry side, Babak representing start-ups, and Matt from academia. I will highlight five perspectives that I inferred from this webinar.

AI in Radiology Can Happen (even) Without Radiologists

This may seem counterintuitive to what I said last week. But if you are familiar with the radiology workflow and ecosystem, you will agree that many things within the radiology department happen before and after the radiologist sees the images. Most companies are fascinated with the idea of onboarding & consulting only radiologists to build solutions. When Katelyn described how GE went about building & embedding five FDA-cleared algorithms on GE’s X-ray machines, I felt that their approach stood out. “We tried to crowdsource what people were looking for in AI by consulting and interviewing surgeons, technicians, administrators, nurses, IT admins and different stakeholders, including but not limited to radiologists,” said Katelyn, describing how they started their AI journey. Indisputably, radiologists need to be involved in translating these ideas into practice, but there are several solutions for which the radiologist may not be the right consultant. Radiologists may not be aware of the default brightness/contrast, cropping and rotation settings with which the X-ray equipment generates images, or of the number of mouse clicks a technician makes to manipulate an image and make it presentable. In many settings, the decision to redo a scan because of poor acquisition happens without the radiologist’s knowledge. Herein lies the realization that there are many areas where AI can sneak into the radiology workflow without radiologists even noticing.

Start-ups pursuing Darwinian experiments are better off staying out of pre-clinical processes

We always suspected this. There are a couple of start-ups working in the space of image creation; most others work in the part that comes after it (post-processing, triaging, diagnosing, prognosticating). As Dr. Matthew Lungren (Matt) put it, “Vendors have the ultimate control over the creation process. Companies that are looking at sinogram images or K-space images will have to concede this space to the vendors, who have in-house expertise on how these systems work and on the workflows themselves.” Even though there is a lot of scope for collaboration between start-ups & legacy companies, for instance in QA (Quality Assurance) & QI (Quality Improvement) of images, for now the game seems decided in favour of one side. Vendors have started from the pole position. It seems they will eventually own this space of AI in image processing, at least as far as raw data is concerned.

On-Edge AI For Diagnosis is an Urban Legend

On-Edge AI (AI algorithms hosted on the scanner itself) is extremely productive for image quality checks and enhancements, and perhaps to a certain extent for triaging. But the idea of a smart scanner that auto-diagnoses several pathologies & generates reports is, to say the least, an urban legend.

As Katelyn said, “There is a big image processing or IQ [image quality] component on the machines. There are hundreds of different knobs on each piece of equipment that are turned to produce an image, and each of these turns might affect the performance of an algorithm. It is different from running an algorithm on an image pulled from PACS.”
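To make the “knobs” point concrete, here is a minimal, hypothetical sketch (not GE’s actual pipeline) of a pre-inference check that reads a couple of standard DICOM acquisition tags with pydicom and refuses to run a model when an image falls outside the envelope it was validated on. The accepted values and the file path are made-up placeholders.

```python
# Minimal sketch: check acquisition settings against the (assumed) envelope
# a model was validated on, before running inference. Tag keywords are
# standard DICOM; the allowed ranges below are illustrative placeholders.
import pydicom

EXPECTED = {
    "PhotometricInterpretation": {"MONOCHROME1", "MONOCHROME2"},
    "BitsStored": range(10, 17),  # e.g. 10- to 16-bit detectors (assumption)
}

def acquisition_in_envelope(path: str) -> bool:
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only
    for keyword, allowed in EXPECTED.items():
        value = ds.get(keyword)
        if value is None or value not in allowed:
            print(f"Out of envelope: {keyword} = {value}")
            return False
    return True

if __name__ == "__main__":
    # An on-device image and a PACS export of the same study can differ in
    # these tags, which is one reason "it worked on PACS" is not enough.
    print(acquisition_in_envelope("study/IM0001.dcm"))  # hypothetical path
```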

But that is not bad news at all for AI platforms and marketplaces. As Babak said, “There is a lot of engineering and overhead in getting an AI embedded on equipment or even into a workflow, and they are not optimized for testing & deployment, unlike platforms built exclusively for this purpose. The downstream workflow needs a lot of testing & validation, for which integrated platforms are best suited.”

Early workflow validation is an idea whose time has come

My epiphany in the webinar came when Babak said, “Workflow validation should happen very early in the AI development cycle, maybe even before FDA processes. There are many FDA-cleared algorithms out there that are not adopted because they failed the workflow test.”

Consider a hypothetical situation where a company decides to build an AI for triaging head CT scans with intracranial bleeds. It sounds valuable, and millions of dollars and thousands of man-hours can get spent building it. Yet even if it works accurately, it may not change outcomes, because in almost all workflows the technician doing the scan has already learned to identify the bleeds that matter for triage, making the algorithm just another layer of ‘advice’ on top. A simple workflow validation early in the development cycle would have picked this up. As audacious as this may sound, this, to me, is the idea whose time has come.
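As an illustration of what such an early check might look like, here is a back-of-envelope sketch in Python. Every number in it (bleed prevalence, escalation rates, reading delays) is an assumption I made up for the example, not data from any real site.

```python
# Back-of-envelope workflow check: if the technician already escalates most
# obvious bleeds, how much does an AI triage flag change time-to-read for
# bleed cases? All parameters below are made-up assumptions.
import random

random.seed(0)

def simulate(n_cases=10_000, bleed_rate=0.05,
             tech_escalation_rate=0.9, ai_sensitivity=0.95):
    baseline_delay, ai_delay = [], []
    for _ in range(n_cases):
        if random.random() >= bleed_rate:
            continue  # only bleed cases matter for this comparison
        escalated_by_tech = random.random() < tech_escalation_rate
        flagged_by_ai = random.random() < ai_sensitivity
        # Assumed reading delays (minutes): escalated cases jump the queue.
        baseline_delay.append(10 if escalated_by_tech else 60)
        ai_delay.append(10 if (escalated_by_tech or flagged_by_ai) else 60)
    return (sum(baseline_delay) / len(baseline_delay),
            sum(ai_delay) / len(ai_delay))

base, with_ai = simulate()
print(f"Mean delay for bleeds: {base:.1f} min without AI, {with_ai:.1f} min with AI")
```

Under these assumed numbers the gain is marginal, which is exactly the kind of finding an early workflow validation would surface before the expensive build.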

The next big thing for DL is not CycleGAN but the old-school “Clinical Correlation”

One of the key areas the panelists addressed, albeit a little indirectly, is the dichotomy between data and algorithm: which of the two needs to be better for a better AI?

Katelyn predicted the way forward for chest X-ray AI: “Expanding the training data, using multiple priors instead of single X-rays, and possibly ground truth from CT scans, seems like the obvious way forward. The challenge for such algorithms needing multiple inputs is having good interoperable & integrated systems to provide the inputs at the time the algorithm is used.”

Matt was excited about these fusion technologies, where contextual information from several modalities and sources is provided for the algorithm to interpret. “But it is not an idea that can happen tomorrow. This is a gap for the industry that academia can fill,” he said. It is more than just coincidence that our team at CARING sees ‘Dynamic Thresholding’ as one of our potential solutions in that direction — more on that later.
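To make the “multiple inputs” idea concrete, here is a minimal PyTorch sketch of a model that encodes the current chest X-ray and one prior with a shared encoder and fuses the features before classification. It is purely illustrative: the two-input design, layer sizes and image shapes are my assumptions, not any vendor’s or panelist’s actual architecture.

```python
# Minimal sketch of a current-plus-prior fusion model.
import torch
import torch.nn as nn

class CurrentPlusPriorNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(          # shared by both inputs
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 2, n_classes)  # fused current + prior features

    def forward(self, current: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.encoder(current), self.encoder(prior)], dim=1)
        return self.head(fused)

model = CurrentPlusPriorNet()
current = torch.randn(4, 1, 256, 256)   # batch of current X-rays (toy tensors)
prior = torch.randn(4, 1, 256, 256)     # matched priors
print(model(current, prior).shape)      # torch.Size([4, 2])
```

The hard part, as Katelyn notes, is not the model but the interoperable systems that reliably deliver the prior at inference time.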

Finally, Vidur succinctly summarized the webinar in six seconds: “‘Build and they shall come’ doesn’t work in medicine; there is a lot of engineering that goes into just putting the algorithms in place, and regulatory processes are here to stay.”

To add to that, the way forward for algorithms to get adopted (and paid for!) seems certain. It is not new frameworks, architectures or CycleGANs (which are jargon to most radiologists anyway); it is getting the more familiar ‘clinical correlation’, or clinical context, in place.

And we are in this for the long haul.

https://medium.com/@vasanthdrv/how-to-build-an-ai-that-works-and-get-paid-7bb715e1de1a

What Do AI Tech Companies Want From Radiologists? (Other Than Buying Their Products) — A Radiologist’s Understanding!

Commentary on IRIA-ICRI webinar series on AI in Radiology — Part 1

*The themes in this blog originated from the IRIA-ICRI webinar on the same topic, which took place on May 14, 2020. The presenters were Prashant Warier (Founder & CEO, Qure.ai), Angel Alberich-Bayarri (Founder & CEO, QUIBIM) and Vijayananda J (Fellow — AI, Philips Healthcare). The session was moderated by Vidur Mahajan (Head of R&D, Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING)).

The adoption of AI in radiology will, to a large extent, be influenced by radiologists. Radiology managers and hospital administrators can bring in new solutions to enhance or impact the radiology workflow, but if radiologists don’t validate and approve them, the algorithms are not getting anywhere. So far we have been led to believe that this is the only contribution radiologists can offer in the whole AI boom, which many top computer scientists & tech evangelists project as an existential threat to us.

After hearing three established people from the other side (of clinical radiology) talk for over an hour about what they expect from radiologists, I was pleasantly surprised.

I got three main insights from this conversation.

Open-mindedness is a virtue of intellectuals, and of the best radiologists too

All panelists agreed that an open mind towards continuous learning & the adoption of emerging roles is the essential quality they look for in radiologists willing to work with them or for them. As Prashant put it very nicely, “a radiologist with the ability to think beyond what is possible today will be desirable for all tech companies”. I couldn’t agree more: these challenging times have only reiterated that we may have to adapt to fast-changing practices, and radiologists have always been good at that. Some of the best MR experts in the world today had never seen MR images during their training!

They still want us to do annotations, only better than before

Despite fast-paced improvements in the architectures of AI algorithms, their dependence on clean, curated data is not going away. Every developer dreams of data with the best annotations, and this webinar reiterated it. As Vijayananda put it succinctly, “Radiologists should learn to use tool-boxes & tool kits to do faster and better annotations”. I have personal experience of annotating over 6,000 X-rays for pneumothorax continuously for over a month, and I can say with certainty that we can’t love it. But it can be one of the best places to start and get a peek into what goes on inside these algorithms. Behind all the glamour of good AI is some radiologist’s hard work!
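For readers who have never seen one, here is a minimal illustration of what a clean, curated annotation record can look like; the field names and values are invented for this example and are not any company’s schema.

```python
# Illustrative annotation record for a pneumothorax finding on one X-ray.
# All identifiers and fields are placeholders, not a real schema.
import json

annotation = {
    "image_uid": "1.2.840.xxxx.example",      # placeholder image identifier
    "finding": "pneumothorax",
    "present": True,
    "bounding_boxes": [                        # pixel coordinates: x, y, width, height
        {"x": 812, "y": 140, "w": 260, "h": 410},
    ],
    "annotator": "radiologist_01",
    "annotation_tool": "example_toolkit",
    "review_status": "second_read_pending",
}

print(json.dumps(annotation, indent=2))
```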

They don’t want us to build products but manage them & spread the word around

It is both a relief and a disappointment to learn that none of the panelists expects us to code or develop algorithms. Coding algorithms for a living is like playing a sport for a career: you can be good at playing a sport in your gully or among peers, but that won’t get you selected for the national team. It is unfair to expect start-ups, or even established tech companies, to bet their money on a hobbyist developer. So I regret to break this to every radiologist dreaming of creating their own algorithm: you won’t be needed for that.

But they all want us to understand the basic concepts behind algorithm development and the statistical principles behind the validation of these algorithms. The most surprising insight for me personally was that all four (including the moderator, Vidur) want to get radiologists onto the business development side. As Angel clarified, “We don’t want radiologists for sales (if you say that, radiologists will run away). Traditional sales won’t work for AI solutions, but we want radiologists to be part of scientific marketing”.
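On the statistical side, here is a small sketch of the kind of validation numbers the panelists expect radiologists to be comfortable reading; the labels and scores are toy values, not results from any real algorithm.

```python
# Toy validation example: sensitivity, specificity and AUC at one threshold.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])   # ground truth (1 = finding present)
y_score = np.array([0.91, 0.20, 0.65, 0.80, 0.35,   # model probabilities (made up)
                    0.10, 0.55, 0.45, 0.05, 0.70])
y_pred = (y_score >= 0.5).astype(int)                 # chosen operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"AUC {roc_auc_score(y_true, y_score):.2f}")
```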

To summarize from across the board: “The most wanted radiologist for tech companies is the one who can participate in the ideation of a product, help build it, differentiate between different products, quantify the difference and spread the word to the world”.

It seems tech firms have many interesting roles to offer us. Are we ready for those roles? Or is this too much to ask?

https://medium.com/@vasanthdrv/what-do-ai-tech-companies-want-from-radiologists-ae95c373872