Speeding up pediatric AI CXR Validation at DASA, Brazil

About the Customer

DASA, headquartered in São Paulo, Brazil, is the fourth-largest diagnostics company in the world and the largest in Brazil. They are unique in being both the largest in-vitro diagnostics and the largest radiology diagnostics company in Brazil. They invest heavily in cutting-edge technologies and have research collaborations with institutions such as Harvard and Stanford Universities.

The Pain Point

As one of the world’s largest medical imaging service providers, and given the innovation DNA of the organisation itself, radiologists at DASA are constantly evaluating new tools to help them in their day-to-day clinical practice. Needless to say, many of these tools are Artificial Intelligence (AI) based products which automate parts of the radiology workflow.

One such tool is QXR from Qure.ai (India). Qure.ai is one of the world’s leading medical imaging AI companies building algorithms that automate the reporting of Chest X-Rays and Head CT scans. QXR is an AI system that automatically reports Chest X-Rays, and even discerns normal from abnormal. In early 2020, Qure.ai developed a new version of QXR which could do normal-vs-abnormal classification for pediatric Chest X-Rays, something that has always been a challenge for AI in general.

How does DASA evaluate Qure.ai’s pediatric X-ray algorithm in a simple yet statistically and clinically significant manner without allocating significant resources or budget?

CARPL Impact

DASA runs the radiology department at the Sabara Children’s Hospital in São Paulo, Brazil. This is one of the leading pediatric hospitals in Brazil and one of the few with a dedicated pediatric radiology department. Dr Marcelo Straus Takahashi, Radiologist at Sabara Children’s Hospital, used CARPL to test Qure.ai’s pediatric X-ray algorithm on ~3,300 Chest X-rays for its ability to discern normal from abnormal.

CARPL makes this process very easy:

  • Load the X-rays onto CARPL using CARPL’s front-end
  • Run Qure.ai’s algorithm on those X-rays (in this case, to speed up the process, the X-rays were sent to Qure.ai’s API). Qure.ai’s algorithm gave one of three outputs:
    • Normal
    • Abnormal
    • To be read (where the AI is ‘not sure’)
  • Load the normal vs abnormal status of each X-ray onto CARPL (this is a simple CSV upload)
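Under the hood, the comparison this enables is straightforward. The sketch below is a minimal illustration, not CARPL’s actual code: the CSV column names and the 0–100 score scale are assumptions. It compares per-case ground truth against AI scores at a threshold to produce a confusion matrix:

```python
import csv
import io

def confusion_at_threshold(rows, threshold=50):
    """Compare ground-truth labels against AI scores at a cut-off.

    rows: iterable of dicts with 'ground_truth' ('normal'/'abnormal')
    and 'ai_score' (0-100, higher = more likely abnormal).
    """
    tp = fp = tn = fn = 0
    for row in rows:
        truth_abnormal = row["ground_truth"] == "abnormal"
        ai_abnormal = float(row["ai_score"]) >= threshold
        if truth_abnormal and ai_abnormal:
            tp += 1
        elif truth_abnormal and not ai_abnormal:
            fn += 1
        elif not truth_abnormal and ai_abnormal:
            fp += 1
        else:
            tn += 1
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn,
            "sensitivity": tp / (tp + fn) if tp + fn else None,
            "specificity": tn / (tn + fp) if tn + fp else None}

# Hypothetical CSV in the shape such an upload might take (column names assumed)
csv_text = """study_id,ground_truth,ai_score
1,abnormal,92
2,normal,12
3,abnormal,40
4,normal,70
"""
metrics = confusion_at_threshold(csv.DictReader(io.StringIO(csv_text)))
print(metrics)  # in this toy sample: one false negative, one false positive
```

Sweeping the threshold over the same rows yields different sensitivity/specificity trade-offs, which is exactly the curve the AUC below summarises.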

On CARPL, the following analysis was obtained for those cases which were labeled as normal or abnormal by the AI:

An astoundingly high AUC of 0.98 was obtained, and at a threshold of 50 there was only one false negative and 50 false positives. Note that the false positives would have been read by a radiologist anyway, avoiding any error.

Upon digging deeper into the one false negative case, it was noted that the finding was clinically insignificant and hence the AI was indeed correct.



CARPL allowed Dr Takahashi to quickly and effectively test Qure.ai’s QXR solution on a pediatric population without requiring additional assistance from a statistics or data science team. Additionally, he was able to deep dive into the false positive and false negative cases to see whether the errors were true errors or not, and take a more informed decision on the performance of the algorithm. From Qure.ai’s point of view, they were able to run a retrospective validation study on a new version of their algorithm, in a truly independent test setting, without any effort whatsoever.

This work was presented at RSNA 2020 by Dr Takahashi as an example of successful validation of an AI algorithm on pediatric Chest X-rays.

Real-time validation of AI in production

Note: The images used below are for representational purposes only

About the Customer

Qure.ai is one of the world’s leading developers of Artificial Intelligence solutions for medical imaging and radiology applications. Pioneers in the field, they are amongst the most published AI research groups in the world, with more than 30 publications and presentations in leading journals and conferences, including the first paper on AI in the prestigious journal The Lancet.


The Pain Point

With tens of thousands of chest X-rays passing through Qure.ai’s algorithms every day, it is critical for the Qure data science leadership to know the real-time performance of their algorithms across their user base. It is well known that the performance of AI can vary dramatically with patient ethnicity and equipment vendor characteristics – as an AI developer’s user base scales, the likelihood of an error creeping through the system increases. The challenge is to orchestrate a mechanism where randomly picked Chest X-rays are double-read by a team of radiologists, the labels established during these reads are compared against the AI outputs, and a dashboard presents real-time performance metrics (Area Under Curve, Sensitivity, Specificity etc.) with the ability to deep dive into the false positives / negatives.

How does the leadership team at Qure.ai create such a system without investing significant engineering effort?

CARPL Impact

CARPL’s Real-Time AI validation workflow allows AI developers to monitor the performance of their algorithms in real-time. Reshma Suresh, the Chief Operating Officer at Qure.ai, uses CARPL to get real-time ground truth inputs from radiologists and then compares the radiologist reads to the AI outputs, subsequently creating a real-time performance dashboard for QXR – Qure.ai’s Chest X-Ray AI algorithm.

CARPL makes the process very easy:

  • Create a Dataset on CARPL
  • Create a “Classification” Testing project on CARPL → choose the appropriate algorithm, i.e. QXR, and the Dataset created above
  • Create an Annotation Template on CARPL → create whatever fields that need to get annotated
  • Create an Annotation Project on CARPL → select the dataset and the annotation project created above
    • Link the annotation project to the Testing project
    • Select to “Auto-Assign” cases to radiologist(s)
  • Now, as data keeps getting added to the Dataset, either manually or through CARPL’s APIs, each study is automatically run through the AI and assigned to the radiologist(s); as the radiologist(s) read the scans, the real-time dashboard keeps getting populated!
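The dashboard metric at the heart of this loop can be sketched in a few lines. This is an illustrative example, not CARPL’s implementation: it recomputes the AUC via the Mann–Whitney U statistic each time a new radiologist read arrives (label 1 = abnormal per the radiologist, 0 = normal; scores are hypothetical):

```python
def auc_from_reads(reads):
    """AUC via the Mann-Whitney U statistic over (label, score) pairs.

    reads: list of (ground_truth, ai_score) where ground_truth is 1 for
    abnormal and 0 for normal; ai_score is the AI's abnormality score.
    """
    pos = [s for y, s in reads if y == 1]
    neg = [s for y, s in reads if y == 0]
    if not pos or not neg:
        return None  # dashboard would show 'insufficient reads'
    # Fraction of positive/negative pairs ranked correctly (ties count half)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# As each radiologist read arrives, append it and recompute the metric
stream = [(1, 0.9), (0, 0.2), (1, 0.7), (0, 0.4), (0, 0.8)]
reads = []
for read in stream:
    reads.append(read)
    print(len(reads), auc_from_reads(reads))
```

A real dashboard would recompute sensitivity and specificity at the operating threshold in the same pass; the point is that every incoming read immediately updates the metrics.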

CARPL is deployed on Qure.ai’s infrastructure allowing Qure to take control of all the data that comes onto CARPL!

Example of a case which is otherwise normal, but was wrongly classified by the AI as abnormal, possibly due to poor image quality and a coiled nasogastric tube in the oesophagus

Example of a case where the radiologist identified cardiomegaly

Representational Image of a Real-Time Validation Project on CARPL


While much has been said about monitoring of AI in clinical practice at the hospital level, it is even more important for AI developers themselves to monitor AI in real-time so that they may detect shifts in model performance and intervene as and when needed. This moves AI monitoring and consequent improvement from a retrospective, post-facto process to a proactive one. As we build on our vision to make CARPL the singular platform behind all clinically useful and successful AI, working with AI developers to help them establish robust and seamless processes for monitoring of AI is key.


Anonymised chart to show a fall in AUC detected by real-time monitoring of AI by an AI developer – image for representational purposes only – not reflective of real world data.

We stay true to our mission of Bringing AI from Bench to Clinic.

CARPL.ai raises $6M from Stellaris Venture Partners


We are excited to announce the seed funding round of $6M led by Stellaris Venture Partners, a leading enterprise software investor, with participation from leading strategic angel investors. Our heartfelt gratitude to all those who’ve been part of this incredible journey! We look forward to expanding the team in North America and continuing to build our tech stack.

CARPL.ai is used by the world’s top healthcare organizations like the Singapore Government, Massachusetts General Hospital, Radiology Partners, University Hospitals, I-MED Radiology, Albert Einstein Hospital, and Clinton Health Access Initiative, to name a few.

We are also creating an impact in the public health space by working with the Government of India to enable large-scale Tuberculosis Screening Programmes in the most remote regions of the country.

“Over the past two years, we have onboarded more than 50 AI developers having 100+ AI applications, which made us the largest AI marketplace in terms of number of AI applications offered to customers. We are proud that some of the largest healthcare enterprises in the world have vetted our technology and trust us to be their partner in their AI and automation journeys,” said Dr. Vidur Mahajan, Chief Executive Officer of CARPL.ai.

A recent joint statement by the world’s top radiology associations also brings to light the importance of validation, deployment and monitoring of AI while being used in clinical practice. We address this need through our proprietary DEV-D framework allowing healthcare providers to first Discover (D), Explore (E) and Validate (V) AI applications from the CARPL.ai marketplace, and subsequently Deploy (D) the most appropriate application across their clinical workflows.

Alok Goyal, a partner at Stellaris Venture Partners, says new technology in radiology is badly needed. “The volume of imaging scans shows a steady 9% year-on-year growth, outpacing the 1.8% growth in the number of radiologists; bridging this demand gap is a crucial challenge for healthcare providers, and we believe AI will be the key,” he says. “CARPL’s integrated platform, designed for testing, deploying, and monitoring radiology AI applications, is poised to empower healthcare providers by seamlessly integrating AI into their clinical workflows.”

To learn the full story, read more.

CARPL Teleradiology Applications


This app enables you to email a patient’s DICOM data faster and in a more efficient way. You do not have to manually download a study from PACS, zip it, and send it. This app does all of that by itself.

The advantage of this app is that you can DICOM-push the study from any workstation or PACS system.

Steps to Install the Application:

1. Download and install the MSI file provided.

2. Run CARPL.mail from the desktop as administrator.

3. Edit the configuration file with your email settings.

4. Allow less secure apps on your email ID: https://myaccount.google.com/lesssecureapps

5. Click “Start Service” to start the teleradiology service.

6. Add a DICOM node on your PACS system:

                AE_TITLE : CARPLUP

                IP_ADDRESS : <IP of your system>

                PORT: 3030

7. Now push any DICOM study to the above node.

8. The first run will redirect you to the browser to log in to your email ID (Gmail account). Complete this step by logging in and authorising the app on your email account.

9. Now wait for the study to upload and be sent to you.

Note: Works on Windows systems only. A Linux application is coming soon.
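For illustration, the core of what CARPL.mail automates (zipping a study and attaching it to an email) can be sketched with the Python standard library. This is a hedged sketch, not the app’s actual code; the filenames and address below are hypothetical, and the real app additionally handles DICOM receiving and Gmail authentication:

```python
import io
import zipfile
from email.message import EmailMessage

def study_email(files, to_addr, subject="DICOM study"):
    """Build an email with the study zipped as an attachment.

    files: mapping of filename -> raw DICOM bytes (hypothetical inputs).
    """
    # Zip all DICOM files of the study in memory
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    # Compose the message with the archive attached
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content("Study attached as a zip archive.")
    msg.add_attachment(buf.getvalue(), maintype="application",
                       subtype="zip", filename="study.zip")
    return msg  # the real app would hand this to an SMTP/Gmail client

msg = study_email({"IM0001.dcm": b"\x00" * 128}, "radiologist@example.com")
print(msg["Subject"], msg.get_payload()[1].get_filename())
```

The manual workflow the app replaces is exactly these steps done by hand: export, zip, attach, send.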


This app enables you to email a patient’s Chest CT study in video (MP4) format.

The advantage of this app is that you can DICOM-push the study from any workstation or PACS system.

Steps to Install the Application:

1. Download and install the MSI file provided.

2. Run CARPL.Video from the desktop as administrator.

3. Edit the configuration file with your email settings.

4. Allow less secure apps on your email ID: https://myaccount.google.com/lesssecureapps

5. Click “Start Service” to start the teleradiology service.

6. Add a DICOM node on your PACS system:


        IP_ADDRESS : <IP of your system>

        PORT: 3032

7. Now push any DICOM study to the above node.

8. If the converted video is larger than 25 MB, the first run will redirect you to the browser to log in to your email ID (Gmail account). Complete this step by logging in and authorising the app on your email account.

9. If the converted video is smaller than 25 MB, it will be sent as an attachment.

10. Now wait for the study to upload and be sent to you.

Note: Works on Windows systems only. A Linux application is coming soon.
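One step an app like this must perform before encoding an MP4 is windowing: mapping CT pixel values in Hounsfield units into 8-bit grayscale frames. The sketch below is a minimal illustration under assumed, typical lung-window settings; the app’s actual window values and encoder are not documented here:

```python
def window_to_8bit(hu_values, center=-600, width=1500):
    """Map Hounsfield units to 0-255 grayscale for one video frame.

    center/width are a typical lung window (assumed, not the app's
    actual settings). Values outside the window are clipped.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = []
    for hu in hu_values:
        clipped = min(max(hu, lo), hi)       # clip to the window
        out.append(round((clipped - lo) / (hi - lo) * 255))
    return out

# One hypothetical row of CT pixels: air, lung tissue, soft tissue, bone
frame = window_to_8bit([-1350, -600, 150, 500])
print(frame)
```

Each windowed slice then becomes one grayscale frame of the MP4, which is what keeps the emailed video small compared with the raw DICOM study.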

For any Support, please contact support@caring-research.com

Thanks for using this.


How to build an AI that works? (and get paid!!)

IRIA -ICRI webinar series — Part 2

  • This is a commentary on the IRIA-ICRI webinar on the same topic that happened on May 21, 2020. The presenters in the webinar were Dr. Matthew Lungren (Director, Stanford AI in Medicine & Imaging lab), Katelyn Nye (GM, AI for X-Ray, GE Healthcare), and Babak Rasolzadeh (Director of Product & ML, Arterys). The session was moderated by Dr. Vidur Mahajan (Head of R&D, Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING)).
  • Part 1 of the series can be found here



This week’s webinar had diverse representation: Katelyn from the industry side, Babak representing start-ups, and Matt from academia. I will highlight five perspectives that I inferred from this webinar.

AI in Radiology Can Happen (even) Without Radiologists

This may seem counterintuitive to what I said last week. But if you are aware of the radiology workflow or ecosystem, you will agree that many things within the radiology department happen before and after the radiologist sees the images. Most companies are fascinated with the idea of onboarding & consulting only the radiologists to build solutions. When Katelyn described how GE went about building & embedding five FDA-cleared algorithms on GE’s X-ray machines, I felt that their approach stood out. “We tried to crowdsource what people were looking for in AI by consulting and interviewing surgeons, technicians, administrators, nurses, IT admins and different stakeholders including but not limited to radiologists,” said Katelyn when describing how they started their AI journey. Indisputably, radiologists need to be involved in translating these ideas into practice, but there are several solutions for which the radiologist may not be the right consultant. They may not be aware of the default brightness-contrast/cropping/rotation settings with which the X-ray equipment generates the images, or of the number of mouse clicks a technician makes to manipulate an image and make it presentable. In many settings, the decision to redo a scan due to poor acquisition happens without the radiologists’ knowledge. Herein lies the realization that there are many areas where AI can sneak into the radiology workflow without the radiologists even noticing.

Start-ups pursuing Darwinian experiments are better off staying out of pre-clinical processes

We always suspected this. There are a couple of start-ups working in the space of image creation. Most others work in the part that comes after it (post-processing, triaging, diagnosing, prognosticating). As Dr. Matthew Lungren (Matt) put it, “Vendors have the ultimate control over the creation process. Companies that are looking at sinogram images or K-space images will have to concede this space to the vendors, who have in-house expertise on how these systems work and on the workflows themselves.” Even though there is a lot of scope for collaboration between start-ups & legacy companies, for example in QA (Quality Assurance) & QI (Quality Improvement) of images, at least for now the game seems decided towards one side. Vendors have started from pole position. It seems they will eventually own this space of AI in image processing, at least as far as raw data is concerned.

On-Edge AI For Diagnosis is an Urban Legend

On-Edge AI (AI algorithms hosted on the scanner itself) is extremely productive for image quality checks and enhancements, and maybe to a certain extent for triaging. But the idea of a smart scanner that auto-diagnoses several pathologies & generates reports is an urban legend, to say the least.

As Katelyn said, “There is a big image processing or IQ component on the machines. There are hundreds of different knobs on each equipment that are turned to produce an image and each of these turns might affect the performance of an algorithm. It is different from running an algorithm on an image pulled from PACS”

But that is not bad news, not at all, for AI platforms and marketplaces. As Babak said, “There is a lot of engineering and overhead to get an AI embedded on equipment or even in a workflow, and they are not optimized for testing & deployment unlike platforms built exclusively for this purpose. The downstream workflow needs a lot of testing & validation for which integrated platforms are best suited”

Early work-flow validation is an idea whose time has come

My epiphany moment in the webinar was when Babak said “Work-flow validation should happen very early in the AI development cycle, maybe even before FDA processes. There are many FDA cleared algorithms out there that are not adopted because they failed the work-flow test”.

Let’s consider a hypothetical situation where a company decides to build an AI for triaging head CT scans with intracranial bleeds. It sounds valuable. Millions of dollars and thousands of man-hours can be spent building it. Even if it works accurately, it may not modify outcomes, because in almost all workflows the technician doing the scan has learned to identify the bleeds of significance for triaging, making the algorithm just another layer of ‘advice’ on top. A simple work-flow validation during the early development cycle would have picked this up. As audacious as this may sound, this, to me, is the idea whose time has come.

The next big thing for DL is not CycleGAN but the old-school “Clinical Correlation”

One of the key areas which the panelists addressed, albeit a little indirectly, is the dichotomy between data and algorithm: which needs to be better for a better AI?

Katelyn predicted the way forward for chest X-ray AI: “Expanding the training data, instead of using single X-rays, using multiple priors and possibly ground truth from CT scans seems like the obvious way forward. The challenge for such algorithms needing multiple inputs is having good interoperable & integrated systems to provide the input at the time of using the algorithm.”

Matt was excited about these fusion technologies, where we provide contextual information from several modalities and sources for the algorithm to interpret. “But it is not an idea that can happen tomorrow. This is the gap for the industry that academia can fill in,” he said. It is more than coincidence that our team at CARING sees ‘Dynamic Thresholding’ as one of our potential solutions in that direction — more on that later.

Eventually, Vidur succinctly summarized the webinar in six seconds: “Build and they shall come doesn’t work in medicine; there is a lot of engineering that goes into just putting the algorithms in place, and regulatory processes are here to stay.”

To add to it, the way forward for algorithms to get adopted (and paid for!) seems certain. It is not new frameworks, architectures or CycleGANs (which are anyway jargon to most radiologists); it is getting the more familiar ‘clinical correlation’, or clinical context, in place.

And we are in this for the long haul.


What do AI tech companies want from radiologists? (other than buying their products) — A Radiologist’s understanding!

Commentary on IRIA-ICRI webinar series on AI in Radiology — Part 1

*The themes in this blog originated from the IRIA-ICRI webinar on the same topic that happened on May 14, 2020. The presenters in the webinar were Prashant Warier (Founder & CEO, Qure.ai), Angel Alberich-Bayarri (Founder & CEO, QUIBIM), and Vijayananda J (Fellow — AI, Philips Healthcare). The session was moderated by Vidur Mahajan (Head of R&D, Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING)).



The adoption of AI in Radiology will be influenced by radiologists to a large extent. Radiology managers and hospital administrators can bring in new solutions to enhance the radiology workflow, but if radiologists don’t validate and approve them, the algorithms are not getting anywhere. We have been made to believe so far that this is the only contribution radiologists can offer in the whole AI boom, projected by many top computer scientists & tech evangelists as an existential threat to us.

After hearing three established people from the other side (of clinical radiology) talk for over an hour on what they expect from radiologists, I was pleasantly surprised.

I got three main insights from this conversation.

Open-mindedness is a virtue of intellectuals, and of the best radiologists too

All panelists agreed that an open mind towards continuous learning & the adoption of emerging roles is the essential quality they are looking for in radiologists willing to work with or for them. As Prashant put it very nicely, “a radiologist with the ability to think beyond what is possible today will be desirable for all tech companies”. I couldn’t agree more, as these challenging times have only re-iterated the concept that we may have to adapt to fast-changing practices, and radiologists have always been good at that. Some of the best MR experts in the world today had never seen MR images during their training!

They still want us to do annotations, only better than before

Despite fast-paced improvements in the architectures of AI algorithms, their dependency on clean, curated data is not going away. Every developer dreams of data with the best annotations, and this webinar re-iterated it. As Vijayananda put it succinctly, “Radiologists should learn to use tool-boxes & tool kits to do faster and better annotations”. I have personal experience of annotating over 6,000 X-rays for pneumothorax continuously for over a month. I can say this with certainty: we can’t love it. But it can be one of the best places to start and get a peek into what goes on inside these algorithms. Behind all the glamour & good AI is some radiologist’s hard work!

They don’t want us to build products but manage them & spread the word around

It is both a relief and a disappointment to learn that none of the panelists expect us to code or develop algorithms. Coding algorithms is like playing a sport for a career: you can be good at playing a sport in your gully or among peers, but that can’t get you selected for a national team. It is unfair to expect start-ups or even established tech companies to bet their money on a hobbyist developer. So I regret to break this to all radiologists dreaming of creating their own algorithm: you won’t be needed for that.

But they all want us to understand the basic concepts behind algorithm development and the statistical principles behind the validation of these algorithms. The most surprising insight for me personally was that all four (including the moderator, Vidur) want to get radiologists onto the business development side. As Angel clarified, “We don’t want radiologists for sales (if you say that, radiologists will run away). Traditional sales won’t work for AI solutions, but we want radiologists to be part of scientific marketing”.

To summarize from across the board: “The most wanted radiologist for tech companies is the one who can participate in the ideation of a product, help build it, differentiate between different products, quantify the difference and spread the word to the world”.

It seems the tech firms have many interesting roles to offer us. Are we ready for those roles? Or is this too much to ask?


India’s Solution to the Coronavirus Pandemic lies in Local Partnership and Global Collaboration

The fastest way to help India survive the COVID-19 pandemic involves building AI tools by leveraging Local Public-Private-Partnerships and Global Data Collaborations
Note: A version of this article was published by HT Online available at http://bit.ly/aicoviddata



The India Disadvantage

No matter what one chooses to believe, one thing is certain – we have a completely new disease out there, one which is very virulent and hence will possibly affect almost every citizen of India in some way or another. India’s healthcare system, even without Coronavirus (or COVID-19), is completely overburdened – long queues outside hospitals are not uncommon, and even on a normal working day, sadly, corridors in our Government hospitals are used, apart from the wards, for taking care of patients. Now imagine what will happen when this new disease runs its course through our already overburdened healthcare system – the whole of India taking a ‘flu leave’ within the span of a month! One is only limited by one’s imagination as to the chaos that might ensue – we must not only defend against COVID-19 but take the fight to it, simply for our very survival!

The India Advantage

The Government realises the grave threat that COVID-19 presents to India’s medical and economic health and has taken some strong policy decisions to help delay and contain its spread through the country. For example, completely restricting the entry of foreign nationals has widespread implications, and one can only imagine how difficult it must have been for the Government to take such a decision. The silver lining is that since India has one of the lowest numbers of cases globally, we are in an interesting position where we can not only learn from other countries’ experiences, but also use their data to our advantage.

A Prophetic Solution Already Proposed?

On Feb 1, 2020, during her budget speech, our Honourable Finance Minister, Mrs. Nirmala Sitharaman, stated:

“…setting up hospitals in the PPP mode. In the first phase, those Aspirational Districts will be covered, where presently there are no Ayushman empanelled hospitals…Using machine learning and AI, in the Ayushman Bharat scheme, health authorities and the medical fraternity can target disease…”

The statement that the Government will partner with private industry in areas where the Government is lacking or unable to provide services, combined with the view that machine learning and Artificial Intelligence (AI) will be imperative for providing healthcare services to India’s masses, is precisely what is needed today. While the Government partnering with private players to expand testing and treatment for COVID-19 is being extensively discussed and debated, it seems that everyone has forgotten the Indian private sector’s computer science expertise, especially in the domains of AI and machine learning!

There are innumerable engineers in the Indian R&D centres of global giants such as GE, Philips, Siemens, Google and Microsoft working on AI. Add to that the ever-growing list of deep technology start-ups and research groups working on AI for healthcare, and you have a ready ecosystem of specialists who can help create AI tools to fight COVID-19. The Government needs to immediately explore partnerships with such players, provide them appropriate data, and essentially run a Public-Private-Partnership for Data Analytics and AI.

The following three-pronged approach is suggested:


(A) Open-Source all Indian COVID-19 Data Immediately

More than 6,500 RT-PCR tests have been conducted for COVID-19 in India to date, and around 115 patients have been confirmed. All clinical data, imaging data (including raw DICOM files for X-Rays and CT Scans), laboratory data, RT-PCR & viral genomics data and treatment outcomes data should be immediately open-sourced and made available to researchers. Once done, the process of developing AI can start immediately. The Government already has great examples of open-sourcing data, such as CSIR’s Indegene Project.

(B) Lead a Global Collaboration for Data Consolidation

AI systems rely on ‘learning’ from pre-existing data and India has a unique opportunity to use AI in the fight against COVID-19 since a full-blown outbreak seems a few weeks away. Since Indian data alone would be insufficient to create robust clinically applicable AI systems, we must use the time we have to take the lead in sparking a global collaboration for consolidation of patient-level COVID-19 data. Countries from around the world can contribute anonymised patient data to central databases which can be accessed by researchers across India and the world.

(C) Onboard Industry Partners

After making the data available, the Government can expect a huge response from all shapes and sizes of AI developers. There will have to be strong performance benchmarks and automated evaluation systems to first determine the best AI tools, and subsequently onboard them. Below are some examples of high-impact AI tools which can be built quickly:

(a) Predicting COVID-19 progression at the population level, to create detailed disease models that drive health policy and infrastructure-readiness decisions.

(b) Diagnosing COVID-19 using medical imaging-based AI techniques, making highly trained specialist doctors more effective and efficient.

(c) Predicting clinical outcomes in diagnosed patients, helping determine which patients will require critical care and improving resource allocation.


Given that our already overworked healthcare services industry will be strained to a great extent when COVID-19 spreads across the country, we must use all options available to fight. Partnering with industry to develop advanced data analytics and AI tools in a Public-Private-Partnership format provides an effective way to initiate the development of ‘first-in-the-world’ tools to predict, diagnose and prognosticate, and thereby fight COVID-19. But it will not be easy, since the development of such tools relies on access to patient-level data, enough of which is not available in India. Hence, the Government would need to devise collaborations focused on data sharing with countries that have more cases of COVID-19 than India, enabling Indian AI experts to create meaningful, accurate yet quick solutions to fight this war. My research focuses almost exclusively on the evaluation and subsequent deployment of AI systems, and all our tools/resources are at the country’s disposal.


Dr. Vidur Mahajan, MBBS, MBA
Head of R&D,
Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING),
New Delhi, India

The Right Choice: AI to drive the future of Indian healthcare according to GOI budget

Emphasis on artificial intelligence (AI) for healthcare proves that the Government is serious about deep technology, states Dr Vidur Mahajan, Head of Research & Development, Centre for Advanced Research in Imaging, Neurosciences & Genomics (CARING).

There is no debating the fact that India is grossly under-served as far as access, affordability and quality of healthcare services are concerned. With the Ayushman Bharat scheme kicking in last year, the Government attempted to solve the ‘affordability’ component, leaving the access and quality of healthcare services to be solved by those providing them. Not only did Nirmala Sitharaman, Finance Minister, Government of India, announce an increase in the health budget by 10 per cent (taking it to Rs 70,000 crore), she also stated that artificial intelligence will play a key part in helping India achieve its healthcare goals, especially for Ayushman Bharat. This is truly visionary, and the fact that such a statement was part of her speech proves that the Government is now willing to walk the talk.

Below is a series of applications from radiology that are low-hanging fruit, ready to be implemented today.

Tuberculosis screening using AI: Prime Minister Narendra Modi has set a strong goal of eradicating tuberculosis from India by 2025. This monumental goal is achievable only with a strong screening programme for TB that involves active case finding. A simple X-ray of the chest has proven to be highly effective in finding patients who might have active tuberculosis, aiding the active case-finding process. Unfortunately, since the radiologists who report these X-rays are few, it becomes impossible to get such X-rays reported on time, and patients are typically lost to follow-up. This is where AI comes in – deep learning based algorithms can not only detect signs of tuberculosis in chest X-rays with a high degree of confidence, but can do so in a few seconds! Patients can be instantly informed about the possibility of having TB, and appropriate further action can be taken immediately. The Government already has thousands of X-ray scanners installed across the country – AI can be deployed on them right away and TB screening initiated. This is already happening in some parts of the country – Rajasthan and Chennai are two immediate examples.
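The triage logic described above – score each chest X-ray with an AI model and immediately flag likely TB cases for follow-up – can be sketched in a few lines of Python. The study IDs, scores, and threshold below are purely illustrative assumptions, not part of any deployed system.

```python
def triage(scored_studies, threshold=0.5):
    """Flag studies whose AI TB score crosses the operating threshold.

    scored_studies -- list of (study_id, ai_tb_score) pairs, scores in [0, 1]
    threshold      -- assumed operating point; real programmes tune this
                      against sensitivity/specificity requirements
    """
    return [(sid, score) for sid, score in scored_studies if score >= threshold]

# Hypothetical scores from an AI model run at the point of acquisition:
flagged = triage([("CXR-001", 0.91), ("CXR-002", 0.12), ("CXR-003", 0.67)])
# Flagged patients can be informed immediately and sent for confirmatory testing.
```

In a real screening programme the threshold would be chosen from the model's validated operating characteristics, trading off missed cases against the burden of confirmatory tests.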

Automatic reporting of scans: Significant progress is being made in the domain of automatic report generation for radiology scans by artificial intelligence. In a paper our group presented at RSNA 2019 (which also won AuntMinnie’s Roadies Award for most viewed AI paper), we showed that AI-generated text reports for chest X-rays were on par with the reports written by radiologists. This was a milestone because such artificial intelligence algorithms combine the diagnosis of disease with the description of the findings themselves. In our country, where, unfortunately, even in the largest Government hospitals many scans go unreported, AI can step in and start reporting such scans, providing much-needed high-quality reports to patients in the Government sector.

Real-time quality monitoring: The dictum that ‘high quality comes at high cost’ is being challenged by the advent of AI. It is now possible for AI to ‘double check’ radiologists’ reports – whether for CT scans, X-rays or even MRI scans – and flag reports where there is a disconnect between what the AI and the radiologist say. Conventionally, radiologists would be required to review a certain subset of all reports, leading to incomplete or inadequate quality control – now, with AI, every scan can be double-read!
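The double-read idea above reduces to a simple concordance check: compare the AI’s call with the radiologist’s, and send only discordant cases for human re-review. A minimal sketch, with hypothetical study IDs and normal/abnormal labels standing in for real report parsing:

```python
def discordant_studies(reads):
    """Return study IDs where the AI and the radiologist disagree.

    reads -- list of (study_id, radiologist_label, ai_label) triples,
             each label being "normal" or "abnormal"
    """
    return [sid for sid, rad_label, ai_label in reads if rad_label != ai_label]

# Hypothetical double-read queue:
reads = [
    ("CXR-101", "normal",   "normal"),    # concordant, no action
    ("CXR-102", "normal",   "abnormal"),  # AI disagrees -> flag for review
    ("CXR-103", "abnormal", "abnormal"),  # concordant, no action
]
flags = discordant_studies(reads)
```

Instead of re-reading a random subset, the review effort concentrates on the (typically small) discordant fraction, which is what makes every-scan quality control feasible.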

Cancer screening: Whether it is automatically reading chest CT scans to screen for lung cancer, automatically reporting mammography scans to look for early signs of breast cancer, or even automatically analysing colposcopic images for cervical cancer screening – AI is transforming the entire landscape of cancer screening. As the algorithms improve and gain more traction, we will see an explosion in screening tests. This will be driven almost entirely by a massive reduction in cost, which will lead to increased access to these technologies for patients – most importantly, without any reduction in quality.

There are several other ways in which AI will come into diagnostics, especially imaging, and transform the care continuum, but as I mentioned, these are low-hanging fruit which can literally be implemented today. While the AI algorithms exist today, either as open source tools or as commercially available products, the limiting factor will be the availability of a technology platform – how does the X-ray machine in a primary health centre, which may not have internet, get access to these algorithms? That is the sweet spot in which we operate – last-mile connectivity between the AI provider and the AI user – and we believe that for the Government’s well-intended move of investing in AI for healthcare under Ayushman Bharat to be successful, it is essential to have a unifying connecting platform, such as CARPL – the CARING Analytics Platform.

Again, I laud the Government for having such foresight, and am eager to play a role in the adoption of advanced technologies for the advancement of medical care across the country.


What do spine surgeons want from radiology reports?


By Erik L. Ridley, AuntMinnie staff writer

In this presentation, researchers from India will describe how collaboration with spine surgeons can help radiologists produce more clinically relevant radiology reports for spine MRI exams.

The impetus for this study was an informal chat between their radiology group and five spine surgeons, who shared comments such as “radiology reports haven’t changed for more than 40 to 50 years” and “reports don’t reflect clinically relevant information,” according to co-author Dr. Sriram Rajan of Mahajan Imaging in New Delhi.

After reviewing spine MRI radiology reports across multiple radiology practices, the researchers found that report formats ranged from a “laundry list” approach that mentioned all levels to a clinically relevant format that correlated patient symptoms to an informal checklist. They also found variation in reporting nomenclature, including spinal canal dimensions, Rajan told AuntMinnie.com.

In hopes of ascertaining spine surgeon preferences for these reports, the researchers sent an online questionnaire to spine surgeons, querying them on their opinions on various clinically relevant topics, such as degenerative canal stenosis, nerve root impingement, nerve root anomalies, Modic changes, scoliosis, and choice of modality for preoperative evaluation.

After analyzing the 24 responses they received, the researchers determined that the report needs to include the clinically relevant information on effective spinal canal dimensions, the details of nerve root anomalies at the level of disk herniation, and the details of nerve root impingement. There was a lack of consensus, however, on Modic changes, the report format, and scoliosis assessment.

“The key implications of our study was that such two-way communication between radiologists and spine surgeons would help in improving reports and hopefully, clinical outcomes,” Rajan said.