A What, When and Where Guide on Open Source DICOM viewers for Radiologists.


LEARNING OBJECTIVES:


The learning objectives of this exhibit are:

1. To know the different open-source DICOM viewers for different platforms (mobile, desktop, and tablet).

2. To discuss the pros and cons of these viewers and identify the viewer with the most diverse features for each platform.



BACKGROUND:


Numerous DICOM viewers are available commercially, and many more are available free (open source) online. They differ in user interface, feature set, and platform support, and most can also be used as a mini-PACS. They are compatible with Windows, macOS, Android, iOS, and even web browsers. As choosing among them can be difficult for the average radiologist, we hope to help in identifying the DICOM viewer best suited to his or her needs.



FINDINGS AND PROCEDURE DETAILS:


The easily accessible and user-friendly viewers for each hardware platform are enumerated below:



Desktop applications


1. MicroDicom Viewer (Free) – It is equipped with the most common tools for viewing DICOM images. It is free and accessible to everyone, but for non-commercial use only. It supports DICOM images without compression as well as with JPEG lossy and lossless compression, and also opens ordinary images in JPEG, PNG, TIFF, etc. Studies can be opened by drag and drop. Encapsulated PDFs and structured reports are supported. The viewer can convert DICOM to multiple image and video formats such as JPEG, TIFF, PNG, WMV, and AVI, and it can be used to view DICOM images without installation. It has basic image manipulation tools such as zoom, pan, measurements, and brightness/contrast. It is supported on Windows only.



2. RadiAnt DICOM Viewer (Free Trial) – It is small in size but quick and powerful: it performs best on modern hardware but also works on machines with as little as 512 MB of RAM. It has all the essential tools a radiologist might want, such as zooming and panning, rotation, window presets such as lung and bone, brightness and contrast, multiplanar reconstructions, 3D reconstructions, etc. It also has annotation tools such as freehand ROI. It supports almost all modalities (DX, CR, MR, CT, PT, MG, etc.) and can open DICOM images directly from patient CDs or any folder. It has tools to convert DICOM images to JPEG or WMV, and it can directly query/retrieve from a PACS. On Windows devices with touch support, it offers multi-touch gestures. It is only available on the Windows platform.



3. Sante DICOM Viewer (Free Trial) – It has almost all the features of the two viewers above and is available on both Windows and macOS. It also features a mini-PACS system that comes in two versions with different database options, SQLite3 and PostgreSQL. It supports dual-monitor setups and reporting, has a built-in data anonymizer, and offers a DICOM print function. It needs a minimum of 2 GB of RAM to run well.



4. Navegatium DICOM Viewer (Free) – A viewer with full touch support, available in the Windows Store. It has features such as 3D reconstructions and MPR, and also supports 3D printing. It needs a minimum of 4 GB of RAM to work, and includes a mini-PACS system and a reporting module.



5. OsiriX Lite / Horos DICOM Viewer (Free, with limited features) – Created only for macOS, this is an open-source viewer and PACS software. It offers many segmentation tools in addition to the features stated above, though some of these tools require a paid license of the software. It can still be used for non-commercial purposes.



Web browsers


1. Orthanc (Free, Open Source) – An open-source PACS and DICOM viewer with multiple image storage options: it can store its data in databases such as SQLite3, PostgreSQL, MongoDB, and MySQL. It offers full PACS capabilities, including query/retrieve against other PACS systems. It has its own built-in viewer and another by the third-party company Osimis.
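
A minimal sketch of scripted access to Orthanc's REST API, assuming a default installation listening at http://localhost:8042 (the host, port, and patient-name pattern are illustrative assumptions):

```python
# Minimal sketch: querying a local Orthanc server through its REST API.
import requests

ORTHANC = "http://localhost:8042"  # assumed default Orthanc endpoint

# List the Orthanc identifiers of all stored studies.
studies = requests.get(f"{ORTHANC}/studies").json()
print(f"{len(studies)} studies stored")

# Find studies matching a patient-name wildcard.
query = {"Level": "Study", "Query": {"PatientName": "DOE*"}}
for study_id in requests.post(f"{ORTHANC}/tools/find", json=query).json():
    tags = requests.get(f"{ORTHANC}/studies/{study_id}").json()["MainDicomTags"]
    print(tags.get("StudyDescription", "(no description)"))
```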



2. OHIF (Free, Open Source) – Also an open-source PACS and DICOM viewer. It has its own database and can also be integrated with other open-source PACS systems such as Orthanc. Its viewer is built on the Cornerstone toolkit, is written in React (a JavaScript library), and can be customized for your own use.



3. DWV DICOM Viewer (Free, Open Source) – A lightweight viewer built with JavaScript and HTML5. It can be integrated with other programming languages as well, and has basic features such as zooming, panning, windowing, and annotation.



Mobile Applications



1. mRay DICOM Viewer (Free) – This viewer works on Android and iOS phones and is available in the respective app stores. It is CE-certified. It can be connected to any PACS system, although this requires a paid service. It has a user-friendly interface with MPR functionality and supports all major modalities.



2. DroidRender (Free) – A 3D viewer for Android used to view locally stored DICOM files, with features such as 2D/3D reconstructions.



3. Medfilm (Free) – This viewer works on iOS devices. It is a basic DICOM viewer with the ability to connect to any type of PACS system.



4. simplyDICOM Viewer (Free) – This viewer can be used to view locally stored DICOM images on an Android phone. It has features such as zoom, brightness, and contrast, but no annotation, MPR, or PACS connectivity.



5. Port-Ray DICOM Viewer (Free) – This viewer provides fast, reliable connections to PACS. It includes tools such as zooming, panning, brightness, contrast, annotation, and MPR. An option to encrypt the data is available to meet HIPAA or GDPR requirements. It is available in the Google Play Store.



CONCLUSION: 


We discuss the pros and cons of the different DICOM viewers in this poster.



The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14260

HIPAA, GDPR and Best Practice Guidelines for preserving data security and privacy – What Radiologists should know.


LEARNING OBJECTIVES:


At the end of this exhibit, the reader is expected to learn the following:

  • To understand the key aspects of HIPAA and GDPR compliance from a radiology perspective.
  • To understand the key differences between these regulations.
  • To know the best-practice principles to be followed while handling and using medical data from radiology images.


BACKGROUND:


With the recent advances in deep learning and the boom of AI applications focusing on radiology, medical imaging data has become a key resource for scientific progress. Large hospitals and imaging clinics hold a plethora of such data, albeit unstructured, together with accompanying clinical and other healthcare information. As crucial as it is to use these data to build robust algorithms, it is equally important to be mindful of the serious privacy risks associated with sharing them. Against this background, it is important to understand the key features of the regulations governing these data.



FINDINGS AND PROCEDURE DETAILS:


1. HIPAA Compliance


The Health Insurance Portability and Accountability Act (HIPAA) sets the basic standards for the protection of sensitive patient data. The rule applies to individuals and organizations that receive health information in the course of normal healthcare practice. Covered entities include hospitals, health plans, and other healthcare providers such as radiology centers. Health plans are organizations that provide medical care or at least pay for it, such as insurers. The rule protects all personally identifiable information of a patient, including demographic information, past health records, etc. It does not apply if the information has been de-identified according to the rule.

The privacy rule allows disclosure of patient data without the patient's authorization only when certain conditions are met:

1. To agencies for health oversight activities such as audits.

2. To law enforcement agencies.

3. For any court proceedings, if requested.

4. To business associates, if a proper agreement is signed stating that the associate will not disclose the data.

2. GDPR Compliance


The General Data Protection Regulation (GDPR) was adopted by the European Union in 2016, and since May 25, 2018, all organizations must be compliant. It is a new framework for data protection, replacing laws that dated back to 1995, and it changes how companies must store and transfer the personal data of EU citizens and residents. It covers personal data that can serve as an identifier, such as name, identification number, and location data. The rule applies to all organizations (not only healthcare entities) that process or hold data of EU citizens, regardless of where the organization is located. Individuals are given the right to restrict further processing of their data and to request data deletion (the right to be forgotten). The rule also entitles individuals to easy access to the data that companies hold about them. The companies covered by the rule are responsible for processing and handling people's data. In recent years there have been massive data breaches, including of social media accounts and healthcare data stores (PACS). Under this rule, any data breach, loss, or destruction has to be reported to the country's data regulator within 72 hours of the incident. An organization that does not handle data correctly can be fined; fines normally go up to 10 million euros or 2 percent of the organization's turnover, and in serious cases up to 20 million euros or 4 percent of turnover.



3. HIPAA vs GDPR


The major differences between HIPAA and GDPR are:



a) Under HIPAA, organizations can disclose patient data to another provider in certain circumstances without consent. Under GDPR, no patient data can leave the organization's premises without the consent of the EU citizen or resident.



b) GDPR gives EU citizens and residents the right, under specific circumstances, to require a healthcare provider to erase their data; HIPAA grants no such right.



c) In the event of a data breach, healthcare providers following HIPAA are required to notify the affected subjects, and if more than 500 subjects are affected, the Department of Health and Human Services must also be informed. Under GDPR, there is a 72-hour window to report the breach to the supervisory authority.



d) Both rules permit disclosure or processing of PHI (protected health information) whenever necessary for an individual who is unable to give consent due to incapacity.



e) GDPR permits processing of data by a not-for-profit organization only if the processing relates to the individual personally and the data is not disclosed to any third party. HIPAA has no provision of this type.



f) Both rules permit disclosure of data when needed by any court acting in its judicial capacity.



4. Tags that need to be anonymized before use for research purposes



To use imaging data outside the healthcare provider's premises for any purpose such as research and deep learning, patient data needs to be removed or anonymized from the images. Tags containing patient and location data that need to be anonymized are shown in the figure.



It is recommended that, before these images are used, consent be obtained stating that the de-identified images may be used for research.



Almost all major PACS vendors, and several open-source and proprietary tools, offer anonymization with customizable tag lists. Anonymization can also be scripted in a programming language such as Python, as sketched below.
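
As a minimal illustration of the scripted approach, the sketch below uses the pydicom library to blank a few common identifying tags and strip private tags; the tag list is a starting point that should be extended to match local policy, and the file names are hypothetical:

```python
# Minimal sketch: blanking identifying DICOM tags with pydicom.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "InstitutionAddress",
]

ds = pydicom.dcmread("study.dcm")            # hypothetical input file
for keyword in IDENTIFYING_TAGS:
    if keyword in ds:
        ds.data_element(keyword).value = ""  # blank the identifying value
ds.remove_private_tags()                     # drop vendor-specific private tags
ds.save_as("study_anon.dcm")
```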



CONCLUSION:


The major regulations that set the rules for securing patient data have been discussed here, along with how imaging data can be used for research and AI.



The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-13220

Spleen or liver? – Prospective study to evaluate the role of Splenic and Hepatic Shear Wave elastography for evaluation of portal hyperdynamic circulation.


PURPOSE:

In advanced fibrosis, liver stiffness measurements have been shown to have no predictive value for the severity of portal hypertension. We propose to test whether an additional evaluation of splenic stiffness can improve the non-invasive prediction of portal hyperdynamic circulation; to compare the value of liver stiffness (LSM) and splenic stiffness measurements (SSM) obtained by shear wave elastography (SWE) in predicting the presence of esophagogastric varices (EGV) in patients with portal hypertension; and to determine the correlation between SSM and the endoscopic grade of EGV.



METHODS AND MATERIALS:

  • This study included 40 patients with chronic liver disease being evaluated for portal hypertension and planned for esophagogastroduodenoscopy.
  • To measure liver stiffness, the region of interest (ROI) was positioned in an area of the right lobe free of vessels and bile ducts, at least 1.5 cm below the liver capsule. For spleen stiffness, the ROI was placed in the parenchyma of the lower pole, at least 1 cm below the splenic capsule.
  • The shear wave stiffness (in kPa) was recorded at ten locations and the median values calculated. Endoscopic findings were interpreted with reference to the presence and grade of varices. Correlation between SSM, LSM, and variceal grade was analyzed with the Pearson correlation coefficient. Multiclass receiver operating characteristic (ROC) curves were constructed, and the area under the ROC curve (AUC) was calculated to determine the discriminating power between variceal grades (a sketch of this analysis follows below).
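
A minimal sketch of the correlation and ROC analysis, assuming the stiffness values and endoscopic grades have been collated into arrays (the values shown are illustrative, not study data):

```python
# Minimal sketch: correlation and ROC analysis of stiffness vs. varices.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

lsm = np.array([12.1, 18.4, 25.0, 9.8, 31.2])  # liver stiffness, kPa (illustrative)
grade = np.array([0, 1, 2, 0, 3])              # endoscopic variceal grade

r, p = pearsonr(lsm, grade)
print(f"LSM vs grade: R = {r:.2f}, P = {p:.4f}")

# AUC for detecting the presence of varices (grade > 0).
print("AUC:", roc_auc_score(grade > 0, lsm))
```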


RESULTS:

LSM and variceal grade showed a weak positive linear correlation (R = 0.36, P < .0001), whereas SSM and variceal grade showed an even weaker positive linear correlation (R = 0.20, P < .0001). The AUC for the detection of varices was 0.77 for LSM and 0.63 for SSM.



CONCLUSION:

Our results run counter to the conventional understanding of a significant positive correlation between LSM, SSM, and variceal grade.


The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14820

Acceleration of cerebrospinal fluid flow quantification using Compressed-SENSE: A quantitative comparison with standard acceleration techniques


PURPOSE:


CSF flow quantification studies are typically useful in the pediatric and elderly populations, for example in normal pressure hydrocephalus (NPH). In these populations, scan time reduction is particularly valuable for patient cooperation and comfort. The potential of CS to accelerate MRI acquisition without hampering image quality could therefore increase patient comfort and compliance in CSF quantification. The purpose of this study is to quantitatively evaluate the impact of Compressed-SENSE (CS), the latest image acceleration technique, which combines compressed sensing with parallel imaging (SENSE), on acquisition time and image quality in CSF quantification MRI.



METHODS AND MATERIALS:


The standard in-practice CSF quantification study includes a 2D gradient echo sequence for flow visualization and a 2D gradient echo T1-weighted phase-contrast sequence for flow quantification. Both sequences were pulse-gated using PPU triggering and planned perpendicular to the mid-aqueduct, and both were modified to obtain higher acceleration with CS (Table 1). Ten volunteers were scanned both with and without CS on a 3.0 T wide-bore MRI (Ingenia, Philips Health Systems). The study was approved by the IRB. Flow quantification was done using the IntelliSpace Portal, version 9, Q-Flow analysis package (Philips Health Systems). Absolute stroke volume, mean velocity, and regurgitant fraction were calculated for the flow quantification sequence with and without CS. The correlation between these three parameters for the CS and non-CS protocols was statistically evaluated using Spearman's rank correlation test, as sketched below.
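
The correlation step could be scripted as below, a sketch assuming paired per-volunteer measurements (the values are illustrative):

```python
# Minimal sketch: Spearman's rank correlation between non-CS and CS measurements.
from scipy.stats import spearmanr

stroke_vol_std = [35.2, 42.1, 28.9, 50.3, 38.7]  # stroke volume without CS (illustrative)
stroke_vol_cs = [34.8, 41.5, 29.4, 49.1, 39.0]   # stroke volume with Compressed-SENSE

rho, p = spearmanr(stroke_vol_std, stroke_vol_cs)
print(f"Spearman rho = {rho:.2f}, P = {p:.4f}")
```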



CONCLUSION:


There is no significant difference in image quality between the current standard of care and CS-accelerated CSF quantification MRI scans. Compressed-SENSE in this setting can reliably replace the existing, slower scan protocol without loss of image quality or quantification accuracy, while significantly reducing scan time. The Compressed-SENSE technique was originally designed for scan time acceleration of qualitative MRI; in this work, CS proves to have the potential to extend to quantitative MRI without significant information loss and with a 44% scan time reduction.



The EPOS can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-05874

Automated Vertebral labeling and Quantification of Spondylotic Metrics of MR Lumbar Spine using neural networks – A Retrospective Validation Study on Advanced Analytics Platform.

PURPOSE:


MRI of the spine is one of the commonest studies performed in clinical practice, usually to investigate the cause of back pain. Reading spine MR studies involves identifying the vertebral levels, estimating stenosis of the spinal canal, detecting reduction in vertebral height, and identifying spondylolisthesis. Several studies have shown that deep learning can assist by automating some of these tasks. In this study, we propose a platform-based approach for quick validation of a multi-modular set of custom CNN-based neural networks that can automatically label the vertebral levels and measure the central canal diameter, vertebral height, alignment, and disc height.



METHODS AND MATERIALS:


The algorithm "Spindle" is a set of neural networks (developed by Synapsica Technologies Pvt Ltd) trained on 11,321 spinal MRIs acquired at different field strengths. Independent validation was done on 157 lumbar spine MRI cases from 1.5 T and 3 T scanners. The results of the algorithm were loaded on CARPL (CARING Analytics PLatform), and the spinal canal diameters at all five lumbar levels were plotted on dot plots with an adjustable threshold. Expert validation was done by a neuroradiologist with 9 years' experience, who measured all levels where the network output a central canal diameter below 11 mm (a sketch of this selection and error computation follows below). The accuracy of vertebral labeling and listhesis detection was also verified.
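
The threshold-based selection and error computation reduce to a few lines; the sketch below is illustrative, with hypothetical per-level predictions and measurements:

```python
# Minimal sketch: flag levels with predicted canal diameter below 11 mm and
# compute the mean percentage error against the radiologist's measurements.
predicted = {"L3-L4": 10.2, "L4-L5": 8.7, "L5-S1": 12.5}  # mm, AI output (hypothetical)
measured = {"L3-L4": 10.9, "L4-L5": 9.5}                  # mm, radiologist measurements

flagged = {lvl: d for lvl, d in predicted.items() if d < 11.0}
errors = [abs(flagged[lvl] - measured[lvl]) / measured[lvl] * 100
          for lvl in flagged if lvl in measured]
print(f"Mean percentage error: {sum(errors) / len(errors):.1f}%")
```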



RESULTS:


Twenty-eight slices with rotated images were excluded from the study. In the remaining cases, there were a total of 49 slices from 40 cases meeting the threshold, in which the radiologist measured the spinal canal diameter. The mean percentage error of the AI-predicted spinal canal diameter was 7.9%. There were a total of 16 spondylolisthetic levels, of which 12 were graded accurately; in two cases the level was correctly identified but the defect was overgraded. Vertebral labeling was accurate at all levels in these 40 cases.



CONCLUSION:


The "Spindle" algorithm shows high accuracy in automatically labeling vertebrae and quantifying several metrics of spondylosis in this retrospective study. Such automated solutions can enable faster reading and better quantification of imaging findings on MR scans.



The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14356

Establishing Normative Liver Volume and Attenuation for the Population of a Large Developing Country Using Deep Learning – A Retrospective Study of 1,800 CT Scans


PURPOSE:


Deep learning has enabled the analysis of large datasets that previously required significant manual labour. We used a deep learning algorithm to study the distribution of liver volumes and attenuation in a large dataset of ~1,800 non-contrast CTs (NCCTs) of the abdomen. Specifically, we aim to establish normative values of hepatic volume and attenuation in patients with no known pathologies and to understand their correlations with age and sex. Using hepatic attenuation as an imaging biomarker, we also investigate the prevalence of fatty liver disease at the study site and compare it with known prevalence rates.



METHODS AND MATERIALS:


Abdominal CTs acquired over the preceding 3 years were retrospectively used for the study. Natural language processing (NLP) algorithms were developed to identify patients whose radiology reports did not indicate any pathology of the liver. The non-contrast abdominal CTs of these patients were extracted from the PACS and processed using deep learning models to obtain the liver volume (LV) and mean liver attenuation (MLA).



LV and MLA were estimated using a deep learning-based segmentation model that identifies liver voxels on the CT scan and subsequently calculates LV and MLA. The algorithm used a multi-stage 3D U-Net architecture (Fig 1) and was trained on 527 patient images manually annotated by an expert radiologist. By leveraging two resizing parameters, the multi-stage architecture first extracts the region of interest around the liver, which is then used for fine boundary delineation by the subsequent model. This approach helps reduce false positives from neighbouring regions such as the spleen and stomach. The algorithm was tested independently on 130 CTs from the LiTS challenge, giving a Dice score of 95% and a mean volume error of 3.8%. Representative segmentations are shown in Fig 2.
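
For reference, both evaluation metrics can be computed from binary masks as in the sketch below (a generic implementation, not the authors' code; the masks are illustrative):

```python
# Minimal sketch: Dice score and relative volume error between binary masks.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def volume_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Relative volume error between two binary masks."""
    return abs(float(pred.sum()) - float(truth.sum())) / float(truth.sum())

# Illustrative 3D masks; real masks come from the liver segmentation model.
pred = np.zeros((64, 64, 64), dtype=bool); pred[10:40, 10:40, 10:40] = True
truth = np.zeros_like(pred); truth[12:42, 10:40, 10:40] = True
print(f"Dice: {dice_score(pred, truth):.3f}, volume error: {volume_error(pred, truth):.3f}")
```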



The patient images were anonymized and processed on workstations equipped with Nvidia GeForce GTX 1070 GPUs with 8 GB of graphics memory. Each study took 7-10 minutes to process given the large size of the imaging data, and the entire process was completed in 7 days using multiple workstations.



Additional patient information such as sex and age was obtained from the clinical records and collated with the obtained LV and MLA for the final analysis. Appropriate statistical analyses (correlations, histograms, etc.) were performed on LV and MLA, and the estimated prevalence of fatty liver was calculated using a cut-off of 40 HU as the reference standard, as sketched below.
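
A sketch of this final analysis step, assuming the collated data sit in a table with hypothetical column names:

```python
# Minimal sketch: normative statistics and fatty-liver prevalence (40 HU cut-off).
import pandas as pd

# Hypothetical collated file with columns: age, sex, volume_ml, attenuation_hu.
df = pd.read_csv("liver_metrics.csv")

print(df[["volume_ml", "attenuation_hu"]].describe())   # mean, SD, range
print(df.groupby("sex")["volume_ml"].mean())            # sex-stratified volumes
print("age-volume R^2:", df["age"].corr(df["volume_ml"]) ** 2)

fatty = df["attenuation_hu"] < 40                       # imaging-biomarker cut-off
print(f"Fatty liver prevalence: {100 * fatty.mean():.1f}%")
```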



RESULTS:


1,823 NCCTs of the abdomen with no liver or related abnormality on the clinical report were extracted from the PACS. 107 (6%) failed the algorithm's quality check and were excluded, leaving 1,715 NCCTs for analysis. Age and gender were available for 1,626 patients: 775 males and 851 females, with a mean age of 44.4 years. The average liver volume was 1,389 mL (standard deviation: 473 mL; range: 201-3,946 mL), while the average mean liver attenuation was 59.2 HU (standard deviation: 15.9 HU; range: 24.2-125.6 HU) (Fig 3). There was no strong correlation between volume and age for either men (R²: 0.002) or women (R²: 0.0001). 122 of 1,715 patients (59% male, 41% female) had fatty liver, defined as mean liver attenuation below 40 HU. Over 80% of patients with mean liver attenuation below 40 HU were in the 35-75 age group, with 27.2% aged between 55 and 65 years (Fig 4).



CONCLUSION:


Automated analysis using deep learning algorithms can parse massive datasets and shed light on important clinical questions such as the establishment of age- and sex-correlated normative values. We establish new normative values for LV and MLA and quantify the prevalence of fatty liver.



The EPOS can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-07653

Six must-have features for advanced analytics and visualization platform for validating new-age AI algorithms


LEARNING OBJECTIVES:

In this exhibit, we discuss six key must-have features of any analytics platform intended for the validation of AI algorithms. With the recent developments in machine learning, and especially deep learning, many companies are developing solutions to assist radiologists in medical imaging. We have developed a system that combines statistics with medical input to provide insights and validate deep learning algorithms at scale. One of the key challenges is the variety of outputs from these algorithms: the output may be a binary variable if the model predicts whether or not the patient has a disease; some algorithms, for example those predicting nodule size, output a continuous variable; others have even more complex outputs, such as the 3D boundary of an intracranial hemorrhage. Our system presents the data to data scientists and medical practitioners in the simplest form possible with all the important insights. In addition, an integrated arbitration tool helps in validating the output with just a few clicks.



BACKGROUND:

We believe that the use of such tools will decrease the time required to validate deep learning algorithms in a healthcare setup and, at the same time, will provide useful insights to the companies, helping them improve their algorithms further.



FINDINGS AND PROCEDURE DETAILS:

Ability to fetch data from PACS: To conduct a study, the hospital or clinic should be able to easily search for and extract cases. The tool should have features to filter cases on the basis of modality, disease, and other related fields. In addition, advanced features such as semantic search can be very useful for capturing the diversity of diseases and modalities. Our system provides features to include or exclude modalities and diseases.



Client-side anonymization: Data privacy and security are essential in any validation study. The system should anonymize DICOM images on the browser side, before upload; uploading files to a cloud system and anonymizing them afterwards could have serious implications for data privacy.



Cloud-based computation: To conduct studies at scale, it is essential to have a cloud-based deployment whose capacity can be increased in real time depending on usage. Apart from computational flexibility, this also provides wider accessibility: anyone with an internet connection can access the system and conduct a study. As soon as the arbitrator uploads DICOM images, processing of the cases starts automatically on our cloud-based system.



Visualization: Presenting the data in the simplest and most meaningful form is the biggest challenge of any study. With a wide variety of outputs (for example, binary variables, continuous variables, masked areas, etc.), it is essential to present the data in a form that helps the arbitrator gauge model accuracy in the best possible way. For chest X-ray algorithms, for example, our system provides a scatter plot in which different colors and counts for each case type (abnormal, normal, mismatch, and not reported) let the arbitrator grasp the crux at a single glance. In addition, the ability to change the threshold in real time lets the arbitrator test the algorithm at different operating points; the ROC curve and AUC play a key role in choosing the threshold for algorithms with binary output (a sketch of this computation follows below).
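
For binary-output algorithms, the threshold sweep behind such a plot is a standard ROC computation, sketched here with illustrative labels and scores:

```python
# Minimal sketch: ROC curve and AUC for a binary (normal/abnormal) algorithm.
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # ground truth (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.6, 0.55]  # model probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```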



Arbitration: For cases where there is a mismatch between the radiologists (ground truth) and the algorithm's output, it is essential to have a third eye look at the data to minimize human error. The tool should have an interface where the arbitrator can see both the ground truth and the algorithm's output and act as a moderator. Ideally, the system should have an integrated DICOM viewer with the ability to record the arbitrator's feedback.

Summary report: Once the arbitration process is done, the system should generate a summary of the algorithm's performance on a set of parameters. These parameters can vary depending on the type of modality and the study.



CONCLUSION:

An analytics platform with these six features can substantially decrease the time required to validate deep learning algorithms in a healthcare setup while providing useful insights that help developers improve their algorithms further.


The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14851

DICE Score vs Radiologist – Visual quantification of Virtual Diffusion Sequences – pitfalls of lesion segmentation-based approach as compared to clinical relevance-based qualitative assessment


PURPOSE:

The performance of image segmentation and translation algorithms is typically evaluated with image similarity metrics such as the Dice score or SSIM. In some instances, this approach may be counter-productive. In this study, we compare such an approach with a qualitative assessment method focused on clinical relevance for estimating the accuracy of virtually generated diffusion-weighted (DW) sequences produced with generative adversarial networks (GANs).
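
For reference, the Dice score between a predicted region A and a ground-truth region B is defined as

$$\mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|},$$

so a score of 1 indicates perfect overlap and 0 indicates no overlap at all.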



METHODS AND MATERIALS:

We used the previously described Virtual Imaging Using Generative Adversarial Networks for Image Translation (VIGANIT) network, which comprises a 15-layer deep convolutional neural network (CNN) used in conjunction with a GAN to improve the clarity of the output image. VIGANIT was used to predict B1000 diffusion-weighted images from input T2W images in 24 cases (12 each of acute and chronic infarcts). The ground truth B1000 DW and predicted B1000 images were blinded and randomized. A radiologist with 9 years' experience in MRI performed pixel-level annotations of the bright and dark areas in ITK-SNAP, and Dice similarity coefficients (DSC) for the annotated areas were calculated. Another radiologist with 16 years' experience reviewed the scans to determine the scan-level presence or absence of restriction-like signal. In positive cases, a slice-level analysis of the number and location of discretely visible ischemic foci larger than 2 mm was also performed.



RESULTS:

The Dice scores for the bright areas in cases with acute infarcts ranged from 0 to 0.85 (average 0.43), and for the dark areas from 0.27 to 0.81 (average 0.46). Qualitative assessment revealed that eight of the 12 acute infarct cases had positive scan-level predictions of restricted diffusion, and none of the 12 chronic infarct cases had false predictions of restricted diffusion. Comparable predictions were absent in 4 of the 12 acute infarct cases; two of these four patients had some degree of movement artifact in their T2W images. The overall accuracy of the predictions was 72%.



CONCLUSION:

Despite the low Dice similarity coefficients for image translation, the scan-level accuracy for the clinical classification of presence or absence of acute infarct was reasonably good. This study makes the case for additionally employing the clinical significance of lesions as an indicator of model performance: applying a clinically relevant assessment method produced a significantly different acceptability score for the image translation network than in-silico mathematical methods alone.



The EPOS can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-06645

Are orthopaedic surgeons as good as radiologists at detecting glenoid labral lesions? A comparative study


PURPOSE:

In the current information age, medical education and training have undergone a rapid shift in style and structure. Radiology training, owing to the complete digitization of its information, is at the forefront of this shift. This change offers a unique opportunity to educate radiologists as well as clinicians in a practical way that enhances their practice. In this work, we study whether it is possible to impart capsule training to general radiologists and clinicians to improve their interpretation skills for specific pathologies such as glenoid labral tears. We also compare the performance of a general radiologist with that of a shoulder surgeon in detecting glenoid labral tears.



METHODS AND MATERIALS:

Seventy-two shoulder MRI cases (axial, coronal and sagittal FSPD, sagittal T1) with a diagnosis of glenoid labral tear on the clinical report were extracted from PACS, anonymised, and randomised. A sub-specialist musculoskeletal radiologist independently read the scans to establish the ground truth for the comparative study. A shoulder surgeon and a general radiologist, each with 12 years' experience in their respective fields, were given a capsule course of training on the imaging findings of glenoid labral tears, with cases covering all common variants; the training cases did not include any cases from the test set. After the training, they independently read the test scans and reported the presence or absence of labral tears in four quadrants (anterosuperior, posterosuperior, anteroinferior, posteroinferior). The results were compared with the ground truth, and the percentage observed agreement with the ground truth was calculated (a sketch of this computation follows below).
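
Percentage observed agreement here is simply the fraction of quadrant-level calls that match the ground truth, as in this illustrative sketch:

```python
# Minimal sketch: percentage observed agreement per reader (illustrative data).
reader = ["tear", "no tear", "tear", "tear", "no tear"]    # reader's calls
truth = ["tear", "no tear", "no tear", "tear", "no tear"]  # ground truth

agreement = sum(r == t for r, t in zip(reader, truth)) / len(truth)
print(f"Percentage observed agreement: {100 * agreement:.0f}%")
```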



RESULTS:

Of the 72 cases, 5 had 270º tears, 24 had anteroinferior glenoid labral tears, 14 had posteroinferior tears, 25 had posterosuperior tears, and two had anterosuperior tears. The percentage observed agreement for SLAP (posterosuperior) and Bankart (anteroinferior) lesions was 58% and 72% for the radiologist versus 46% and 58% for the orthopaedic surgeon. The percentage agreement for the anterosuperior and posteroinferior labrum was similar for the radiologist (61% and 69%) and the orthopaedic surgeon (60% and 69%).



CONCLUSION:

Surprisingly, the general radiologist performed better than the sub-specialist orthopaedic surgeon, paving the way for training programs in MRI reading for orthopaedic surgeons. Note that our sample of readers was very small, and the study should be replicated with a larger number of readers. Many orthopaedic surgeons, especially in the developing world where there is a paucity of radiologists, choose to read their scans themselves; we demonstrate that, as far as possible, an opinion from a radiologist should be obtained.



The EPOS can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-13084

Evaluating radiologists’ knowledge of MRI safety – a questionnaire-based survey of practicing radiologists


PURPOSE:

There is considerable confusion about whether to scan patients with various types of metal in their bodies, to the extent that patients are often refused MRI scans. To assess the scope of the problem and allow targeted design of educational programs, we conducted a survey on MRI safety amongst radiologists from varying geographies.



METHODS AND MATERIALS:

An anonymous questionnaire with 11 clinical situations was circulated digitally amongst ~5,000 radiologists. The questions comprised MRI scanning dilemmas we have faced in real practice, in which an MRI was eventually performed after an extensive literature search. The situations included total knee replacements (TKR), VP shunts, bullet injuries, shrapnel injuries, tattoos, baclofen pumps, intra-uterine devices (IUD), sternal wires, coronary stents, and cardiac valves. Responses were scored and appropriate analyses performed.



CONCLUSION:

We note a trend wherein radiologists adopt a conservative approach and avoid MRI in situations where it can be safely performed, denying patients optimum care. Radiologists need to be systematically educated about the situations in which MRI can and cannot be done in a clinical setting, situational education being one approach. We demonstrate the need for dedicated training programs on MRI safety for practicing radiologists; many patients who are currently denied critical MRI scans could benefit from such programs.



The EPOS can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-04729