What do spine surgeons want from radiology reports?

By Erik L. Ridley, AuntMinnie staff writer

In this presentation, researchers from India will describe how collaboration with spine surgeons can help radiologists produce more clinically relevant radiology reports for spine MRI exams.

The impetus for this study was an informal chat between their radiology group and five spine surgeons, who shared comments such as “radiology reports haven’t changed for more than 40 to 50 years” and “reports don’t reflect clinically relevant information,” according to co-author Dr. Sriram Rajan of Mahajan Imaging in New Delhi.

After reviewing spine MRI radiology reports across multiple radiology practices, the researchers found that report formats ranged from a “laundry list” approach that mentioned all levels to a clinically relevant format that correlated patient symptoms to an informal checklist. They also found variation in reporting nomenclature, including spinal canal dimensions, Rajan told AuntMinnie.com.

In hopes of ascertaining spine surgeon preferences for these reports, the researchers sent an online questionnaire to spine surgeons, querying them on their opinions on various clinically relevant topics, such as degenerative canal stenosis, nerve root impingement, nerve root anomalies, Modic changes, scoliosis, and choice of modality for preoperative evaluation.

After analyzing the 24 responses they received, the researchers determined that reports should include clinically relevant information on effective spinal canal dimensions, details of nerve root anomalies at the level of disk herniation, and details of nerve root impingement. There was no consensus, however, on Modic changes, report format, or scoliosis assessment.
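The consensus items could feed directly into a structured reporting template. The sketch below is purely illustrative, assuming nothing about the researchers' actual checklist; all field names, units, and example values are assumptions.

```python
# Illustrative structured spine MRI report capturing the survey's consensus items:
# effective canal dimensions, nerve root impingement, and nerve root anomalies
# at the level of disk herniation. Field names and units are assumptions.
from dataclasses import dataclass, field

@dataclass
class LevelFinding:
    level: str                        # e.g. "L4-L5"
    canal_ap_diameter_mm: float       # effective spinal canal dimension
    nerve_root_impingement: str       # e.g. "left L5 root compressed"
    nerve_root_anomaly: str = "none"  # anomaly at the level of disk herniation

@dataclass
class SpineMRIReport:
    patient_symptoms: str                        # lets findings be correlated to symptoms
    findings: list = field(default_factory=list)

    def add_level(self, finding: LevelFinding) -> None:
        self.findings.append(finding)

    def summary(self) -> list:
        # One line per reported level, rather than a "laundry list" of all levels.
        return [
            f"{f.level}: canal {f.canal_ap_diameter_mm} mm; "
            f"impingement: {f.nerve_root_impingement}; anomaly: {f.nerve_root_anomaly}"
            for f in self.findings
        ]

report = SpineMRIReport(patient_symptoms="left L5 radiculopathy")
report.add_level(LevelFinding("L4-L5", 8.5, "left L5 root compressed"))
print(report.summary()[0])
# L4-L5: canal 8.5 mm; impingement: left L5 root compressed; anomaly: none
```

A template like this reports only symptomatic levels, mirroring the clinically relevant format the researchers contrasted with the all-levels approach.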

“The key implications of our study was that such two-way communication between radiologists and spine surgeons would help in improving reports and hopefully, clinical outcomes,” Rajan said.

https://www.auntminnie.com/index.aspx?sec=road&sub=def&pag=dis&ItemID=127283

Can AI generate clinically appropriate x-ray reports?

By Wayne Forrest, AuntMinnie.com staff writer

Artificial intelligence (AI) has the potential to produce standardized, accurate x-ray reports in a timely manner that are easy to comprehend in the clinical setting.

In this presentation, Dr. Vasantha Kumar Venugopal, a radiologist at the Institute of Brain and Spine in New Delhi, will show how he and fellow researchers evaluated the proficiency of AI-generated chest x-ray reports compared with radiologist-generated clinical reports.

Approximately 300 chest x-rays were performed on a conventional x-ray system retrofitted with digital radiography. The anonymized chest x-ray images were analyzed by a deep learning-based chest x-ray algorithm designed to detect abnormalities and automatically generate clinical reports.

The algorithm features a combination of multiple classification, detection, and segmentation neural networks designed to identify 75 different radiological findings and determine the location of the abnormalities. The networks themselves were trained using approximately 1 million chest x-rays from multiple data sources.
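A pipeline of this shape has to merge per-network outputs into report text. The sketch below is a generic, hypothetical illustration of that merging step; the finding labels, thresholds, and sentence templates are assumptions and do not reflect how the actual system works.

```python
# Hypothetical sketch of fusing classifier scores with detector/segmenter
# localizations into report sentences. All names (FINDINGS, threshold,
# templates) are illustrative assumptions, not details of the real system.

FINDINGS = ["cardiomegaly", "pleural effusion", "pneumothorax"]  # tiny subset of 75 labels

def generate_report(classification_scores, detections, threshold=0.5):
    """classification_scores: dict of finding -> probability from the classifier.
    detections: dict of finding -> anatomical location from the detector/segmenter."""
    lines = []
    for finding in FINDINGS:
        if classification_scores.get(finding, 0.0) >= threshold:
            location = detections.get(finding)
            if location:
                lines.append(f"{finding.capitalize()} noted in the {location}.")
            else:
                lines.append(f"{finding.capitalize()} noted.")
    if not lines:
        lines.append("No significant abnormality detected.")
    return " ".join(lines)

report = generate_report(
    {"pleural effusion": 0.91, "cardiomegaly": 0.12},
    {"pleural effusion": "right costophrenic angle"},
)
print(report)  # Pleural effusion noted in the right costophrenic angle.
```

Separating "is the finding present?" (classification) from "where is it?" (detection/segmentation) is what lets such a system both name an abnormality and localize it in the generated report.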

In the study, the algorithm-generated reports were deemed as accurate as the radiologists’ reports in 79% of cases. In 5% of the cases, the algorithm produced reports that were more accurate or more clinically appropriate than those of the radiologists.

Conversely, the algorithm-created reports had significant diagnostic errors in 6% of cases, while in 9%, the algorithm created reports that were found to be clinically inappropriate or insufficient, even though the significant findings were correctly identified and localized.

The reports automatically generated by the algorithm showed "good comparability" with "high accuracy, paving the way for a new potential deployment strategy of AI in radiology," the group concluded.

https://www.auntminnie.com/index.aspx?sec=road&sub=def&pag=dis&ItemID=127233

AI doesn’t hedge when reporting on chest x-rays

By Erik L. Ridley, AuntMinnie staff writer

In this presentation, researchers from India will describe how a deep-learning algorithm shows potential for producing accurate reports of chest radiographs.

The researchers set out to study the effect of artificial intelligence (AI) on reducing hedging in radiology reports, according to presenter Dr. Vasanth Venugopal of Mahajan Imaging in New Delhi.

“We choose to investigate this area as we are encountering serious deficiencies in patient management due to defensive practice where the clinical opinions are influenced by perceived legal threats,” Venugopal told AuntMinnie.com.

Using a commercial deep-learning algorithm (ChestEye, Oxipit) on nearly 300 chest radiographs acquired at their institution, the researchers found that the software generated reports that were as accurate as the radiologists’ reports in nearly 80% of the cases and were more accurate in 5% of the exams.

“We found that AI can help significantly in generating valid, clear reports without hedging and biases,” Venugopal said.

However, the researchers also found significant diagnostic errors in 6% of the AI-generated reports. What's more, in another 9% of exams, the AI-generated reports were deemed clinically inappropriate or insufficient even though they had correctly identified and localized the significant findings, according to the researchers.

Attend this Tuesday talk to learn more.

This paper received a Roadie 2019 award for the most popular abstract by page views in this Road to RSNA section.

https://www.auntminnie.com/index.aspx?sec=road&sub=def&pag=dis&ItemID=127013