Unboxing AI – Radiological Insights Into a Deep Neural Network for Lung Nodule Characterization

Rationale and Objectives: To explain the predictions of a deep residual convolutional network for lung nodule characterization by analyzing its heat maps.

Materials and Methods: A 20-layer deep residual CNN was trained on 1,245 chest CTs from the NLST trial to predict the malignancy risk of a nodule. We used occlusion to systematically block regions of a nodule and mapped the resulting drops in malignancy risk score to generate clinical attribution heat maps for 103 nodules from the LIDC-IDRI dataset, which were analyzed by a thoracic radiologist. The features were described as: heat inside nodule (IH), bright areas inside the nodule; peripheral heat (PH), continuous or interrupted bright areas along the nodule contour; heat in adjacent plane (AH), brightness in scan planes juxtaposed with the nodule; satellite heat (SH), a smaller bright spot in proximity to the nodule in the same scan plane; heat map larger than nodule (LH), bright areas corresponding to the shape of the nodule seen outside the nodule margins; and heat in calcification (CH).

Results: These six features were assigned binary values. The resulting feature vector was fed into a standard J48 decision tree with 10-fold cross-validation, which gave a weighted classification accuracy of 85%, with a 77.8% true-positive rate and an 8% false-positive rate for benign cases, and a 91.8% true-positive rate and a 22.2% false-positive rate for malignant cases. IH was more frequently observed in nodules classified as malignant, whereas PH, AH, and SH were more commonly seen in nodules classified as benign.

Conclusion: We discuss the potential ability of a radiologist to visually parse the deep learning algorithm-generated ‘heat map’ to identify features aiding classification.

For full paper: http://vixra.org/abs/1909.0398?ref=10792316
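
The occlusion procedure described above can be sketched in a few lines. Below is a minimal illustration assuming a `score_fn` callable that wraps the trained CNN and returns a malignancy risk score for a 2D nodule patch; the patch size, stride, and fill value are illustrative choices, not the authors' published settings (the paper works on CT volumes, so a faithful version would slide a 3D block).

```python
import numpy as np

def occlusion_heatmap(score_fn, image, patch=16, stride=8, fill=0.0):
    """Slide an occluding patch over the image, re-score each occluded
    copy, and accumulate the drop in malignancy risk score.
    Illustrative sketch only; parameters are assumptions."""
    base = score_fn(image)                         # unoccluded risk score
    heat = np.zeros(image.shape, dtype=np.float32)
    hits = np.zeros(image.shape, dtype=np.float32)
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - score_fn(occluded)       # big drop => region drove the score
            heat[y:y + patch, x:x + patch] += drop
            hits[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(hits, 1)              # average over overlapping patches
```

In the study, the radiologist-coded binary features (IH, PH, AH, SH, LH, CH) read from such maps were classified with a J48 decision tree (a Weka implementation of C4.5); in a Python workflow, scikit-learn's DecisionTreeClassifier would play the analogous role.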

The Algorithmic Audit: Working with Vendors to Validate Radiology-AI Algorithms – How We Do It

Abstract

There is a plethora of Artificial Intelligence (AI) tools that are being developed around the world aiming at either speeding up or improving the accuracy of radiologists. It is essential for radiologists to work with the developers of such algorithms to determine true clinical utility and risks associated with these algorithms. We present a framework, called an Algorithmic Audit, for working with the developers of such algorithms to test and improve the performance of the algorithms. The framework includes concepts of true independent validation on data that the algorithm has not seen before, curating datasets for such testing, deep examination of false positives and false negatives (to examine implications of such errors) and real-world deployment and testing of algorithms.

Link – http://vixra.org/abs/1909.0104
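
One concrete step in such an audit is scoring the algorithm on truly unseen data and pulling every false positive and false negative for radiologist review. The sketch below is a minimal illustration under the assumption that the vendor tool exposes a per-study probability; the function, field names, and threshold are placeholders, not part of the published framework.

```python
import numpy as np

def audit_metrics(study_ids, y_true, y_score, threshold=0.5):
    """Summarise an independent validation run and surface the
    discordant cases that need radiologist review (sketch only)."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.where((y_pred == 1) & (y_true == 0))[0]
    fn = np.where((y_pred == 0) & (y_true == 1))[0]
    return {
        "sensitivity": tp / max(tp + len(fn), 1),
        "specificity": tn / max(tn + len(fp), 1),
        # Each discordant study goes back to a radiologist to judge the
        # clinical implication of the error, per the audit framework.
        "false_positives": [study_ids[i] for i in fp],
        "false_negatives": [study_ids[i] for i in fn],
    }
```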

AI In Radiology: Where Are We Today?

Dr Mahajan began the panel discussion on ‘Artificial intelligence in radiology: Where are we today?’ by asking the panellists about the differences between radiomics, deep learning, machine learning and AI. He also briefed the audience about image acquisition and image post-processing in radiology.

While putting forth his views on radiomics, Dr Ganeshan elaborated on its features and quantifiable characteristics. He mentioned that radiomics relies more on handcrafted features and patterns in the images. Radiomics tries to emulate what a radiologist does but adds objectivity to it, independent of the reader's background, experience, and education. The idea is to bring objectivity, repeatability, reproducibility and so on.

According to Dr Ganeshan, AI takes a black-box approach: it is able to provide an answer but not able to establish where that answer came from. He also touched on the pros and cons of all these methodologies.

The panellists deliberated on deep learning and different aspects of image processing and quantification. They agreed that radiomics and AI, when combined, can be a win-win for the radiology sector.

Sinha gave an update on how Columbia Asia Hospitals are implementing the Qure.ai algorithm to interpret radiology images. She said that work on the algorithm started in 2016 and concluded in 2017. The hospital has evaluated how the algorithm performs on nine different abnormalities, and the results were satisfying. According to her, the hospital is already in the process of deploying the technique, with AI helping to simplify the work process.

Sinha further mentioned that the algorithm is touted to become an excellent audit tool for X-rays and has performed phenomenally well.

Gune mentioned that AI will be an added advantage for radiologists. He sees triage tools that work without human intervention as the future of radiology, allowing radiologists to prioritise the three or four X-rays that need immediate attention while the remaining X-rays are read later.

He cited the example that around two billion chest X-rays are acquired worldwide per annum and elaborated on how AI will play an important role in streamlining their reading.

Dr Kharat spoke about ways to deal with the ground realities and mentioned that once good data is available to train algorithms, the sector will surge ahead.

He urged start-up companies to invest heavily in collating data. He called this the year of innovation for the sector. He also said there is a need to take a long-term perspective: perhaps in a decade or so, once the product matures, it can be used in practice.

Dr Vasanth Venugopal elucidated the need to curate AI algorithms, control test settings, and work out payment models through which companies can sustain themselves. The panellists agreed that data needs to be anonymised to help the radiology sector become smarter and more agile. They also recommended looking for solutions to archive images intelligently.

Key highlights

  • Even though there is a lot of hype around AI, the radiology sector is at a crossroads where it is concerned. The applications of AI are evolving, but the fundamental aspects of AI may have reached a dead end; even so, there is still hope.
  • Radiomics is the high-throughput extraction of quantitative imaging features or textures from imaging to decode tissue pathology, creating a high-dimensional dataset for analysis. It tries to emulate what a radiologist does and brings that objectivity into everyday practice (see the sketch after this list).
  • In the future, AI is going to learn from radiologists. It will be radiologists who provide the biggest data feeds for AI.
  • It is important to note that the concept of one size fits all cannot be applied to radiology, AI, radiomics and the like.
  • There is a need for laws related to ownership of patient/medical data. This is crucial to protect against the increasing misuse of medical data.
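
As a toy illustration of the high-throughput quantitative features mentioned in the second bullet, the sketch below computes a handful of first-order radiomic features from a segmented region of interest. It is a minimal sketch assuming a NumPy image and a binary mask; production pipelines (e.g. pyradiomics) extract hundreds of standardised features.

```python
import numpy as np

def first_order_features(image, mask):
    """A few first-order radiomic features from the voxels inside a
    binary region-of-interest mask (toy example, not a full pipeline)."""
    voxels = image[mask > 0].astype(np.float64)
    counts, _ = np.histogram(voxels, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]                                    # drop empty bins
    std = voxels.std() or 1.0                       # guard against flat regions
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": np.mean((voxels - voxels.mean()) ** 3) / std ** 3,
        "entropy": float(-np.sum(p * np.log2(p))),  # histogram entropy
    }
```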

https://www.expresshealthcare.in/events/radiology-and-imaging-conclave/ai-in-radiology-where-are-we-today/414077/

Can AI Generate Clinically Appropriate X-Ray Reports? Judging the Accuracy and Clinical Validity of Deep Learning-generated Test Reports as Compared to Reports Generated by Radiologists: A Retrospective Comparative Study

PURPOSE

Implementations of deep learning algorithms in clinical practice are limited by the nature of the output the algorithms provide. We evaluate the accuracy, clinical validity, clarity, consistency and level of hedging of AI-generated chest X-ray (CXR) reports compared to radiologist-generated clinical reports.

METHOD AND MATERIALS

297 CXRs performed on a conventional X-ray system (GE Healthcare, USA) fitted with a retrofit DR system (Konica Minolta, Japan) were pulled from the PACS along with their corresponding reports. The anonymised CXRs were analysed by a CE-approved deep learning-based CXR analysis algorithm (ChestEye, Oxipit, Lithuania), which detects abnormalities and auto-generates clinical reports. The algorithm is an ensemble of multiple classification, detection and segmentation neural networks capable of identifying 75 different radiological findings and extracting their locations. The outputs from this model are used by a custom automatic text generator, tailored by multiple radiologists, to produce a structured and cohesive report. These models were trained on around 1 million chest X-rays from multiple data sources. The algorithm had not been trained or tested on CXRs from our institution. An informed review was performed by a radiologist with 9 years' experience to evaluate both sets of reports for accuracy as well as clinical appropriateness.
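
The vendor's pipeline is proprietary, but the "automatic text generator" step described above can be illustrated with a toy template lookup. Everything below, including the finding names, thresholds, and sentences, is hypothetical, meant only to show how per-finding outputs from an ensemble of networks might be mapped to radiologist-authored phrases.

```python
# Hypothetical sketch of a template-based report generator: per-finding
# probabilities from upstream networks are mapped to canned sentences.
# Finding names, thresholds, and wording are invented for illustration.
TEMPLATES = {
    "cardiomegaly":     "The cardiac silhouette is enlarged.",
    "pleural_effusion": "There is a pleural effusion on the {side}.",
    "consolidation":    "Air-space consolidation is seen in the {side} lung.",
}

def generate_report(findings, threshold=0.5):
    """findings: dict mapping finding name -> (probability, location)
    as produced by the classification/detection/segmentation ensemble."""
    lines = []
    for name, (prob, side) in findings.items():
        if prob >= threshold and name in TEMPLATES:
            lines.append(TEMPLATES[name].format(side=side))
    if not lines:
        return "No acute cardiopulmonary abnormality."  # normal template
    return " ".join(lines)

print(generate_report({"pleural_effusion": (0.91, "right"),
                       "cardiomegaly": (0.30, None)}))
```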

RESULTS

In 236 (79%) cases, algorithm-generated reports were found to be as accurate as the radiologists' reports. In 16 (5%) cases, algorithm-generated reports were found to be either more accurate or more clinically appropriate. In 18 (6%) cases, the algorithm made significant diagnostic errors, and in 27 (9%) cases, the algorithm-generated reports were found to be clinically inappropriate or insufficient even though the significant findings were correctly identified and localised.

CONCLUSION

We demonstrate, for the first time as of this date, a comparison between reports auto-generated by a deep learning algorithm and those written by practicing radiologists. We report good comparability in the clinical appropriateness of the reports generated by a DL network with high accuracy, paving the way for a potential new deployment strategy for AI in radiology.

CLINICAL RELEVANCE/APPLICATION

We report on an algorithm with the potential to produce standardized, accurate reports in a manner that is easily understandable and deployable in the clinical environment.