2019-08-14

Deploying Deep Learning for Quality Control: An AI-assisted Review of Chest X-rays Reported as ‘Normal’ in Routine Clinical Practice

PURPOSE

Quality control in radiology has thus far been restricted to performing random double reads or collating information about clinical correlation, both tedious and expensive activities. We present a novel use case for AI: double reading chest X-rays (CXRs) and flagging cases where the radiologist may have erred.

METHOD AND MATERIALS

This study on the feasibility of deploying deep learning algorithms for quality control was conducted on pooled data from four out-patient imaging departments. The radiology workflow included a 'report approval' station where radiologists applied a simple, high-level, binary label: 'normal' or 'abnormal'. All adult CXRs marked 'normal' were prospectively analyzed by a deep learning algorithm (LUNIT Insight, S. Korea) tuned for automated normal-vs-abnormal classification. Notably, the algorithm was not trained on data from the institutes or country of testing. It produced an 'abnormality score' (range 0.00 - 1.00), and all images classified 'abnormal' at the high-sensitivity operating point (threshold = 0.16) were reviewed by a sub-specialist chest radiologist with 8 years' experience.
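In code, this triage step reduces to simple thresholding of the per-image abnormality score. The sketch below is illustrative only: `get_abnormality_score` is a hypothetical placeholder for the algorithm's inference call, not LUNIT's actual API.

```python
# Minimal sketch of the QC triage logic described above, assuming a
# callable that returns an abnormality score in [0.00, 1.00] per image.
# `get_abnormality_score` is a hypothetical stand-in for the deep
# learning algorithm's inference interface.

HIGH_SENSITIVITY_THRESHOLD = 0.16  # operating point used in the study

def triage_for_review(normal_cases, get_abnormality_score,
                      threshold=HIGH_SENSITIVITY_THRESHOLD):
    """Return the radiologist-'normal' CXRs that the algorithm flags
    as 'abnormal', i.e. the worklist sent for expert second read."""
    flagged = []
    for case in normal_cases:
        score = get_abnormality_score(case)  # range 0.00 - 1.00
        if score >= threshold:
            flagged.append((case, score))
    # Most suspicious studies first on the reviewer's worklist.
    flagged.sort(key=lambda item: item[1], reverse=True)
    return flagged
```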

RESULTS

A total of 708 CXRs were marked 'normal' by radiologists during the one-month study period. Of these, 46/708 (6.49%) were labelled 'abnormal' by the algorithm. On review of these 46 CXRs, 12 showed true abnormalities: four with lung opacities, three with significant blunting of the costophrenic angles, two with apical fibrosis, one with a cavity, one with a nodule, and one with cardiomegaly. Appropriate corrective and preventive actions were taken, and feedback was provided to the radiologists who reported these cases.
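As a quick arithmetic check (not part of the original analysis), the headline figures follow directly from the counts above:

```python
# Reproducing the reported rates from the one-month study period.
total_normal = 708   # CXRs marked 'normal' by radiologists
flagged = 46         # of those, flagged 'abnormal' by the algorithm
confirmed = 12       # true abnormalities found on expert review

flag_rate = flagged / total_normal   # ~0.065, i.e. ~6.5% of 'normal' CXRs
review_yield = confirmed / flagged   # ~0.26, i.e. ~1 in 4 flags confirmed

print(f"Flag rate: {flag_rate:.2%}")        # Flag rate: 6.50%
print(f"Review yield: {review_yield:.2%}")  # Review yield: 26.09%
```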

CONCLUSION

We demonstrate the ability of AI algorithms to quickly parse large datasets and help identify errors by radiologists. This is a fast and effective way to deploy AI algorithms in clinical practice with no risk (from AI) to patients and a clear, measurable positive impact.

CLINICAL RELEVANCE/APPLICATION

A radiologist workflow supported by a parallel, second-read AI would allow for faster reporting while helping reduce errors in radiology reports, improving patient care in the process. Importantly, this quality assurance study of CXR reporting demonstrates the potential for AI to both personalize and prioritize training modules for radiologists.
