Enhanced separation of brain tumors and edema via diffusion tensor distribution imaging: Illustration with lymphoma cases



To investigate the clinical potential of diffusion tensor distribution imaging (DTD) for visually differentiating brain tumors and edema from healthy tissue non-invasively.


Multidimensional diffusion (MDD) MRI images were acquired in 2 lymphoma patients on a 3T Discovery 750w system (GE Healthcare) with a 32-channel head coil. Prototype spin-echo EPI sequences were performed with the following parameters: TR/TE = 3298/121 ms, in-plane resolution = 3×3 mm². The MDD acquisition consisted of 43 linear and 37 spherical b-tensors at b = 100, 700, 1400, and 2000 s/mm². Total scan time was ~5 min. Post-processing of the data was done using dVIEWR powered by MICE Toolkit (www.dviewr.com). The main features related to average cell density (mean diffusivity, MD) and cell elongation (microscopic anisotropy) can be computed within “bins” corresponding to specific tissue types, i.e., “thin” for elongated cells (e.g., white matter), “thick” for densely packed round cells (e.g., grey matter), “sparse” for low-cell-density diffusion environments (e.g., edema), and “big” for free water (e.g., ventricles).
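Of these bin-level metrics, mean diffusivity has a simple closed form: it is one third of the trace of the diffusion tensor. A minimal sketch of that relationship (the tensor values below are illustrative, not patient data):

```python
# Mean diffusivity (MD) from a 3x3 diffusion tensor: MD = trace(D) / 3.
# The tensor below is illustrative (units of mm^2/s), not patient data.

def mean_diffusivity(D):
    """Return the mean diffusivity (trace/3) of a 3x3 diffusion tensor."""
    return (D[0][0] + D[1][1] + D[2][2]) / 3.0

# A roughly isotropic tensor of the order seen in grey matter (~0.8e-3 mm^2/s):
D = [
    [0.8e-3, 0.0,    0.0],
    [0.0,    0.8e-3, 0.0],
    [0.0,    0.0,    0.8e-3],
]
md = mean_diffusivity(D)
```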


Bin-resolved segmentation maps (SegM) facilitate the identification of edematous regions, captured by the sparse bin (red areas in SegM). These regions surround the investigated lymphomas, themselves mostly captured by the thin bin (green in SegM), indicating that they consist of elongated cells. These cells are randomly oriented, as they appear white (red+green+blue) in the thin-bin mean-orientation maps (see Figure 1). The bin-resolved MD maps’ colors highlight the inverse relationship between MD and average cell density across different tissue types. In particular, the sparse bin exhibits an intermediate MD characteristic of edema.

Figure 1. Diffusion Tensor Distribution (DTD) parameter maps of lymphoma cases.


DTD could provide enhanced visualization tools for radiologists aiming to better separate/characterize healthy and pathological tissues non-invasively.


This pilot study was limited by its small sample size.

Implementation Of Fast Echo-planar Imaging (EPImix) MRI Sequence For Scan Time Reduction In Critical And Unco-operative Patients



To detail how a fast multi-contrast echo-planar image mix (EPIMix) MRI sequence can reduce scan time in critical and uncooperative patients compared with routine clinical brain imaging, without compromising image quality or diagnostic accuracy.


A prospective pilot study was conducted on 29 patients requiring emergent brain imaging for concerns of stroke (3), tremors (2), slurring of speech (3), headache (6), memory loss (4), imbalance (2), limb weakness (6), aphasia (1), dementia (1), and Parkinson’s disease (1), using the EPIMix brain imaging sequence on a Discovery 750w 3T MR system (GE Healthcare). EPIMix brain MRI, consisting of six contrasts (T2*, T1-FLAIR, T2-FLAIR, T2, DWI, ADC), was acquired in 72–75 seconds. Routine T1w/T2w axial, coronal FLAIR, and T2w sagittal images were also concurrently acquired and were correlated with EPIMix images for all patients. Qualitative analysis of the EPIMix scans was performed by two experienced radiologists to assess diagnostic accuracy, artifacts, and image quality.


The image quality was diagnostic in all cases (100%), and diagnostic performance was comparable between EPIMix and routine clinical MRI with no significant difference, indicating the preservation of adequate image quality on fast EPIMix scans (see Fig. 1).

Fig 1. (i) A 74-year-old male presented with a history of slurred speech. There is a chronic infarct with gliosis in the right parietal region. The internal content shows hyperintensity on T2WI (A) and T2-FLAIR (B), and hypointensity on T1-FLAIR (D) (arrows); no diffusion restriction is seen on DWI (F) (arrows). (ii) A 71-year-old male presented with a history of upper-limb tremors. Hyperintensity in the right frontal periventricular white matter is seen on T2WI (G) and T2-FLAIR (H), and hypointensity on T1-FLAIR (J) (arrows), with reduced size of the frontal horn, possibly due to ependymitis granularis; no diffusion restriction is seen on DWI (L) to suggest acute ischaemia (arrows). (iii) An 82-year-old male presented with a clinical profile of stroke. Cortical and subcortical gliosis is seen in the left middle frontal gyrus. The internal content shows hyperintensity on T2WI (M) and T2-FLAIR (N), and hypointensity on T1-FLAIR (P) (arrows); no diffusion restriction is seen on DWI (R) (arrows).


The pilot study reveals that the EPIMix sequence with rapid scanning can minimize motion artifacts and can be used in unstable patients to evaluate a wide range of brain pathologies without compromising diagnostic image quality.


EPIMix produces six weighted MRI contrasts in a short time, albeit with some image artifacts, such as geometric distortion at the skull base and susceptibility artifacts, which were noticed in almost all EPIMix scans. Image degradation from these artifacts is the result of an inherent trade-off between scan-time reduction and image quality.

Move Away HIPAA And GDPR, Here Comes CrypTFlow – Secure AI Inferencing Without Data Sharing



  • Currently, running Artificial Intelligence (AI) algorithms on medical images requires either the sharing of medical images with developers of the algorithms, or sharing of the algorithms with the hospitals. Both these options are sub-optimal since there is always a real risk of patient privacy breach or of intellectual property theft.
  • Encryption is the process of converting data into a “secret code” using a “key” making the data meaningless for anyone without the key. The challenge is that, with current technology, the key needs to be shared with the AI developer, so that the data can be converted to its meaningful form, thereby compromising the security and privacy of the data.
  • We propose using CrypTFlow, which uses Multi-Party Computation and encryption to run AI algorithms on medical images without sharing the encryption key described above. This means that the images remain in the hospital network, the AI algorithm remains in the AI developer’s network, but the AI is still able to run on the images.
  • We will present the results of our experiments of running CheXpert, an AI algorithm, on Chest X-Rays.
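The multi-party computation idea above can be illustrated with additive secret sharing, in which a value is split into random shares so that no single party sees it, yet arithmetic can still be carried out on the shares. The sketch below only conveys the principle; CrypTFlow's actual protocols are far more sophisticated:

```python
# Toy additive secret sharing: a value is split into random shares so
# that no single party learns it, yet sums can be computed on the shares.
# This illustrates the MPC principle only, not CrypTFlow's protocol.
import random

MOD = 2**32  # all arithmetic is done modulo a fixed ring size

def share(value, n_parties=2):
    """Split `value` into n random shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine shares into the original value."""
    return sum(shares) % MOD

# Each party adds its shares of two secrets locally; the reconstructed
# result equals the true sum, though no party ever saw either input.
a_shares, b_shares = share(123), share(456)
sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]
```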


  • Current privacy and intellectual property concerns with deploying AI algorithms in clinical practice
  • What is encryption?
  • What is multi-party computation (MPC)?
  • What is CrypTFlow and how can it help run AI algorithms without requiring data to be shared with AI developers?
  • Results of running the CheXpert AI algorithm using CrypTFlow – accuracy, time, and computation
  • The future of secure AI deployment

The poster can be viewed here: CrypTFlow-Secure-AI-Inferencing

Clinical Experience Using Novel Multidimensional Diffusion Magnetic Resonance Imaging For Characterization Of Tissue Microstructure In Various Brain Pathologies



Multidimensional diffusion (MDD) MRI is a novel imaging technique that provides information enabling better discrimination of the average rate, microscopic anisotropy, and orientation of diffusion within microscopic tissue environments. We share our experience in the evaluation of MDD’s clinical feasibility in various brain pathologies, where we employed Diffusion Tensor Distribution (DTD) imaging to retrieve nonparametric intravoxel DTDs. DTD allows separation of tissue-specific diffusion profiles of the main brain components, e.g., white matter, grey matter, cerebrospinal fluid, and pathological tissue environments such as edema, through so-called ‘bins’, namely the ‘thin’, ‘thick’, ‘big’, and the new fourth bin, ‘sparse’. Unlike conventional fractional anisotropy, microscopic anisotropy is not confounded by cell alignment over the voxel scale. Long processing times (a few hours) are needed to generate DTD maps. Current MDD sequences, albeit optimized, feature a longer TE compared to conventional diffusion sequences. This imposes a lower image resolution (3×3 mm²) in order to maintain a reasonable signal-to-noise ratio. Distortion artefacts can be corrected upon acquisition of a reverse phase-encoding b0 image (for ‘topup’ processing).


1. Basic physics underlying MDD MRI
2. Pros and cons of the sequence
3. Highlight key differential diagnostic points in different brain indications: infections (tuberculomas and cysticercosis), sudden-onset loss of balance, fits, radiation damage, and seizures.

The poster can be viewed here: MDD_EE_poster

Can AI Help Read Pediatric Chest X-rays? An independent Evaluation on 3,000+ Scans



To evaluate the performance of a commercially available deep learning-based AI algorithm on pediatric chest X-rays (CXRs).


3,319 frontal (PA and AP) CXRs of patients aged 6 to 18 years were pulled from PACS and anonymised at a tertiary care pediatric hospital in Brazil. Labels (normal, abnormal) were ascertained from the radiology reports. The data was loaded onto the CARPL AI Research platform (CARING Research, India) for AI inference and validation-related statistical analysis. The algorithm under test was QXR Version 3.0 (Qure.ai, India). The algorithmic output consisted of three categories – “normal”, “abnormal”, and “to be read”. The “to be read” scans, which refer to cases meant to be read directly by a radiologist, were excluded from the calculation of summary statistics. False-negative scans were re-read by a specialized pediatric radiologist with 6 years of experience.


Of the 3,319 cases, 1,802 were labeled “to be read” and excluded from analysis. On the remaining 1,517 cases the algorithm achieved a sensitivity of 91% and a specificity of 96%. The 38 false negatives were reviewed, and only 9 contained truly missed findings: 7 cases of consolidation, 1 of atelectasis, and 1 of vascular engorgement.
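For reference, the summary statistics above follow the standard definitions. In the sketch below the true-positive and true-negative counts are chosen purely for illustration so the ratios land near the reported figures; they are not taken from the study's confusion matrix:

```python
# Standard definitions behind the reported summary statistics:
#   sensitivity = TP / (TP + FN),  specificity = TN / (TN + FP).
# Counts below are illustrative, not the study's actual confusion matrix.

def sensitivity(tp, fn):
    """Fraction of abnormal cases the algorithm flagged as abnormal."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of normal cases the algorithm flagged as normal."""
    return tn / (tn + fp)

# Example: 38 false negatives alongside 384 true positives gives ~91%.
sens = sensitivity(tp=384, fn=38)
spec = specificity(tn=96, fp=4)
```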

Figure 1


Our independent evaluation provides evidence of AI’s ability to accurately read and triage normal pediatric CXRs, thereby saving significant time and effort on the part of radiologists.


Most AI algorithms are trained on adult data and hence perform poorly on pediatric cases, for which a lack of trained radiologists is a constant problem, especially in developing and underdeveloped regions.

Initial Clinical Experiences with EPIMIX Sequence in Multiple Brain Pathologies



A new multi-contrast echo-planar imaging sequence called EPIMIX has been described: a 72–75-second sequence providing a range of contrasts, including T1-FLAIR, T2-weighted, T2-FLAIR, GRE T2*, diffusion, and ADC images. We share our experience in a variety of brain conditions, where we employed EPIMIX in addition to standard-of-care imaging. The best indications for EPIMIX are sick or uncooperative patients needing faster scan acquisition. The sequence runs out of the box, without any modifications necessary, with the capability to increase the number of slices. Inbuilt MOCO (motion correction) helps improve image quality in uncooperative patients. Longer processing times are needed, ranging from 6–10 minutes after the scan. A lower signal-to-noise ratio leads to increased image grain and poorer visualization of the interfaces between lesions and normal brain parenchyma.


1. Basic physics behind the sequence
2. Contrasts generated from the sequence
3. Pros and Cons of the sequence
4. Clinical experience in different indications: infarcts, neoplasms, headache with normal scans, white matter lesions, and infections such as tuberculomas or cysticercosis.

The presentation can be viewed here: Initial Clinical Experiences with EPIMIX

A What, When and Where Guide on Open Source DICOM viewers for Radiologists.



The learning objectives of this exhibit are:

1. To know the different open-source DICOM viewers for different platforms (mobile, desktop, and tablet)

2. To discuss the pros and cons of these viewers and identify the viewer with the most diverse features for each platform.


There are many DICOM viewers on the market, and many are available free (open source) online. They have different types of user interfaces, features, etc. Most of them can also be used as a mini PACS. They are compatible with Windows, macOS, Android, iOS, and even web browsers. As choosing among them can be difficult for the average radiologist, we hope to help identify the DICOM viewer best suited to their needs.


The easily accessible and user-friendly viewers across different hardware platforms are as enumerated below:

Desktop application

1. MicroDicom Viewer (Free) – It is equipped with the most common tools for viewing DICOM images. It is free and accessible to everyone, but for non-commercial use only. It supports uncompressed DICOM images as well as JPEG Lossy, JPEG Lossless, etc., and also opens images in JPEG, PNG, TIFF, etc. Studies can be opened by drag and drop. Encapsulated PDFs and structured reports are supported. The viewer can convert DICOM to multiple image and video formats such as JPEG, TIFF, PNG, WMV, and AVI. It can be used to view DICOM images without installation. It has basic image-manipulation tools such as zoom, pan, measurements, and brightness/contrast. It is supported on Windows only.

2. RadiAnt DICOM Viewer (Free Trial) – It is small in size but very quick and powerful; it runs best on capable hardware but also works in an environment with only 512 MB of RAM. It has all the necessary tools a radiologist might want, such as zooming and panning, rotation, windowing presets (e.g., lung and bone), brightness and contrast, multiplanar reconstructions, and 3D reconstructions. It also has annotation tools such as freehand ROI. It supports almost all modalities, e.g., DX, CR, MR, CT, PT, and MG. It can open DICOM images directly from patient CDs or any folder, and has tools to convert DICOM images to JPEG or WMV. It can directly Query/Retrieve PACS from within the viewer. On Windows devices with touch support, it offers multi-touch gestures. It is only available on the Windows platform.

3. Sante DICOM Viewer (Free Trial) – It has almost all the features the above two possess. It is available on both Windows and macOS. It also features a mini PACS system that comes in two versions with different database options, i.e., SQLite3 and PostgreSQL. It supports dual-monitor setups and reporting. It has a built-in data anonymizer and also a DICOM print function. It needs a minimum of 2 GB of RAM to run well.

4. Navegatium DICOM Viewer (Free) – It is a viewer with full touch support, available in the Windows Store. It has features such as 3D reconstructions and MPR, and also supports 3D printing. It needs a minimum of 4 GB of RAM to work. It has a mini PACS system and a reporting module.

5. OsiriX Lite or Horos DICOM Viewer (Free but with limited features) – Created only for macOS, it is an open-source viewer and PACS software. It has many segmentation tools in addition to the features stated above. Some of these tools require a paid license of the software. It can still be used for non-commercial purposes.

Web browsers

1. Orthanc (Free, Open Source) – It is an open-source PACS and DICOM viewer with multiple image-storage options. It can store its data in databases such as SQLite3, PostgreSQL, MongoDB, and MySQL. It offers all PACS abilities, including Query/Retrieve of other PACS systems. It has its own viewer and one by the third-party company “OSIMIS”.

2. OHIF (Free, Open Source) – It is also an open-source PACS and DICOM viewer. It has its own database and can also be integrated with other open-source PACS systems such as Orthanc. Its viewer is based on the Cornerstone toolkit, is built with the React JavaScript framework, and can be customized for your use.

3. DWV DICOM Viewer (Free, Open Source) – It is a lightweight viewer based on JavaScript and HTML5. It can also be integrated with other programming languages. It has basic features such as zooming, panning, windowing, and annotation.

Mobile Applications

1. mRay DICOM Viewer (Free) – This viewer works on Android and iOS phones and is available in the respective stores. It is CE-certified. It can be connected to any PACS system, though this requires a paid service. It has a user-friendly interface with MPR functionality. It supports all major modalities.

2. DroidRender (Free) – A 3D viewer for locally stored DICOM files on Android. It also offers 2D/3D reconstruction features.

3. MedFilm (Free) – This viewer works on iOS devices. It is a basic DICOM viewer with the ability to connect to any type of PACS system.

4. simplyDICOM Viewer (Free) – This viewer can be used to view locally stored DICOM images on an Android phone. It has features such as zoom, brightness, and contrast, but no annotation, MPR, or PACS connectivity.

5. Port-Ray DICOM Viewer (Free) – This viewer provides high speed and a good connection to PACS. It includes tools for zooming, panning, brightness, contrast, annotation, MPR, etc. An option to encrypt the data is available to meet HIPAA or GDPR compliance. It is also available in the Google Play Store.


We discuss the pros and cons of the different DICOM viewers in this poster.

The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14260

HIPAA, GDPR and Best Practice Guidelines for preserving data security and privacy – What Radiologists should know.



At the end of this exhibit, the reader is expected to learn the following things:

  • To understand the key aspects of HIPAA and GDPR Compliance from radiology perspective.
  • To understand the key differences between these regulations
  • To know the best practice principles to be followed while handling and using medical data from radiology images


    With the recent advances in deep learning and the boom of AI applications focusing on radiology, medical imaging data has become a key resource for scientific progress. Large hospitals and imaging clinics have a plethora of such data, albeit unstructured, with accompanying clinical and other healthcare information. As much as it is crucial to use these data to build robust algorithms, it is also important to be mindful of the critical privacy risks associated with sharing such data. Against this background, it is important to understand the key features of the regulations governing these data.


    1. HIPAA Compliance

    The Health Insurance Portability and Accountability Act (HIPAA) sets the basic standards for the protection of sensitive patient data. The rule applies to individuals or organizations that receive health information in the course of normal health practices. The covered entities include hospitals, health plans, and other healthcare providers such as radiology centers. Health plans are organizations that provide medical care or at least pay for it, such as insurers. The rule protects all personally identifiable information of a patient, including demographic information, past health records, etc. The rule does not apply if the information is de-identified in accordance with it.

    The privacy rule allows disclosure of patient data without the patient’s authorization only when certain conditions are met:

    1. To agencies for health oversight activities like audits, etc.

    2. To law enforcement agencies.

    3. For any court proceedings, if requested.

    4. To business associates, if a proper agreement is signed stating that they will not reveal the data.

    2. GDPR Compliance

    The General Data Protection Regulation (GDPR) was adopted by the European Union in 2016, and since May 25, 2018, all organizations must be compliant. It is a new framework for data protection, replacing laws that dated from 1995. It changes the way companies must store and transfer the personal data of EU citizens and residents. Personal data includes anything that can be used as an identifier, such as name, identification number, or location data. The rule applies to all organizations (not only healthcare entities) processing or holding the data of EU citizens, regardless of the organization’s location. Individuals are given the right to restrict further processing of their data and can request data deletion (the “right to be forgotten”). The rule also gives individuals easy access to the data that companies hold about them. The companies covered by the rule are responsible for processing and handling people’s data. In recent years there have been massive data breaches, including of social media accounts and healthcare data storage (PACS). Under the rule, any data breach, loss, or destruction must be reported to the country’s data regulator within 72 hours of the incident. If an organization does not handle data correctly, it can be fined. These fines normally go up to 10 million euros or 2 percent of the organization’s turnover; in severe cases they can reach 20 million euros or 4 percent of turnover.

    3. HIPAA vs GDPR

    The major differences between HIPAA and GDPR are:

    a) Under HIPAA, organizations can disclose patient data to another provider in some circumstances without consent. Under GDPR, no patient data can leave the organization’s premises without the consent of the EU citizen or resident.

    b) GDPR gives EU citizens and residents the right, under certain circumstances, to require a healthcare provider to erase their data; HIPAA does not grant this right.

    c) In the event of a data breach, healthcare providers following HIPAA are required to notify the affected subjects, and if more than 500 subjects are affected, the Department of Health and Human Services must be informed. Under GDPR, there is a 72-hour window to report the breach to the supervisory authority.

    d) Both rules permit disclosure or processing of PHI (Personal Health Information) whenever necessary for an individual who is unable to give consent due to incapacity.

    e) GDPR permits the processing of data by a not-for-profit organization only if the processing relates to the individuals personally connected to it and not to any third party. HIPAA does not have this type of provision.

    f) Both rules permit disclosure of data when needed in any court acting in their judicial capacity.

    4. Tags that need to be anonymized before use for research purposes

    To use imaging data outside the healthcare provider’s premises for any purpose such as research and deep learning, patient data needs to be removed or anonymized from the images. Some tags containing patient and location data that need to be anonymized are shown in the figure.

    It is recommended that, before these images are used, consent be obtained stating that the de-identified images can be used for research.

    Almost all major PACS companies, along with some open-source and proprietary tools, offer anonymization tools with customizable tags. Anonymization scripts can also be written in programming languages such as Python.
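    As a rough illustration of scripted anonymization: in practice a DICOM library such as pydicom would parse the files themselves; in the sketch below the header is assumed to be already parsed into a dictionary of tag keywords, and the tag list is illustrative, not a complete de-identification profile.

```python
# Minimal sketch of tag-based anonymization. The header is assumed to be
# a dict of DICOM keyword -> value (in practice a library such as pydicom
# would read the file); the tag set below is illustrative, not exhaustive.

PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "InstitutionAddress", "ReferringPhysicianName",
}

def anonymize(header: dict) -> dict:
    """Return a copy of the header with identifying tags blanked."""
    return {k: ("" if k in PHI_TAGS else v) for k, v in header.items()}
```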


    The major regulations that set the rules for securing patient data have been discussed here, along with how imaging data can be used for research and AI.

    The poster can be viewed here: 


Spleen or liver? – Prospective study to evaluate the role of Splenic and Hepatic Shear Wave elastography for evaluation of portal hyperdynamic circulation.



In advanced fibrosis, liver stiffness measurements have been shown to have no predictive potential for the severity of portal hypertension. We propose to test whether an additional evaluation of splenic stiffness could improve the non-invasive prediction of portal hyperdynamic circulation; to compare the roles of liver (LSM) and splenic (SSM) stiffness measurements obtained by shear wave elastography (SWE) in predicting the presence of esophagogastric varices (EGV) in patients with portal hypertension; and to determine the correlation between SSM and the endoscopic grade of EGV.


  • This study included 40 patients with chronic liver disease being evaluated for portal hypertension and planned for esophagogastroduodenoscopy.
  • To measure liver stiffness, the region of interest (ROI) is positioned in an area of the right lobe free of vessels and bile ducts, at least 1.5 cm below the liver capsule. For spleen stiffness, the ROI is placed in the parenchyma of the lower pole, at least 1 cm below the spleen capsule.
  • The shear wave liver stiffness (in kPa) was recorded at ten locations and the median values were calculated. Endoscopic findings were interpreted with reference to the presence and grade of varices. Correlations between SSM, LSM, and variceal grade were analyzed with the Pearson correlation coefficient. Multiclass receiver operating characteristic (ROC) curves were constructed, and the area under the ROC curve (AUC) was calculated to determine the discriminating power between the grades of varices.


    LSM and variceal grade showed a weak positive linear correlation (R = 0.36, P < .0001), whereas SSM and variceal grade showed an even weaker positive linear correlation (R = 0.20, P < .0001). The AUC for the detection of varices was 0.77 for LSM and 0.63 for SSM, respectively.


    Our results run counter to the conventional understanding of a significant positive correlation between LSM, SSM, and variceal grading.

    The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14820

    Six must-have features for advanced analytics and visualization platform for validating new-age AI algorithms



    In this exhibit, we discuss the six key must-have features of any analytics platform intended for the validation of AI algorithms. With the recent developments in machine learning, and especially deep learning, many companies are trying to develop solutions to assist radiologists in medical imaging. We have developed a system that combines statistics with medical input to provide insights and validate deep learning algorithms at scale. One of the key challenges is the variety of outputs from these algorithms. The output could be a binary variable if the model predicts whether or not the patient has a disease. Some algorithms, for example those predicting nodule size, have a continuous variable as the output. Other algorithms can have even more complex outputs, such as the 3D boundary of an intracranial hemorrhage. Our system presents the data to data scientists and medical practitioners in the simplest form possible with all the important insights. In addition, an integrated arbitration tool helps validate the output with just a few clicks.


    We believe that the use of such tools will decrease the time required to validate deep learning algorithms in a healthcare setting and, at the same time, will provide companies with useful insights that help them improve their algorithms further.


    Ability to fetch data from PACS: To conduct a study, the hospital/clinic should be able to easily search and extract cases. The tool should have features to filter cases on the basis of modalities, diseases, and other related fields. In addition to these, advanced features like semantic search can be really useful to capture the diversity of diseases and modalities. Our system provides features to include/exclude modalities and diseases.

    Client-side anonymization: Data privacy and security are important aspects of any validation study. The system should be able to anonymize DICOM images on the browser side. Uploading files to the cloud and anonymizing them afterwards could have serious implications for data privacy.

    Cloud-based computation: To conduct studies at scale, it is essential to have a cloud-based deployment whose configuration can be scaled in real time depending on usage. Apart from computational flexibility, it also provides wider accessibility: anyone with internet access can use the system and conduct a study. As soon as the arbitrator uploads DICOM images, processing of the cases starts automatically on our cloud-based system.

    Visualization: Presenting the data in the simplest and most meaningful form is the biggest challenge of any study. With a wide variety of outputs (for example, a binary variable, a continuous variable, masked areas, etc.), it is essential to present the data in a form that helps the arbitrator gauge model accuracy in the best possible way. For example, for chest X-ray algorithms our system provides a scatter plot: different colors and the number of cases of each type (Abnormal, Normal, Mismatch, and Not Reported) help the arbitrator understand the crux at a single glance. Apart from the plot, the ability to change the threshold allows the arbitrator to test the algorithm at different thresholds in real time. The ROC/AUC curve plays a key role in deciding the threshold for algorithms with binary output.
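    Real-time threshold adjustment of this kind amounts to a simple sweep over model scores; a minimal sketch (the scores and labels below are illustrative, not platform output):

```python
# Sweep a decision threshold over model scores and report the
# sensitivity/specificity trade-off at each point (illustrative data).

def sens_spec_at(threshold, scores, labels):
    """labels: 1 = abnormal, 0 = normal; score >= threshold => abnormal."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]

# Lower thresholds favor sensitivity; higher thresholds favor specificity.
curve = {t: sens_spec_at(t, scores, labels) for t in (0.2, 0.5, 0.8)}
```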

    Arbitration: For the cases where there is a mismatch between radiologists (ground truth) and algorithm’s output, it is essential to have a third-eye looking at the data to minimize any human error. The tool should have an interface where the arbitrator can see the ground truth and algorithm’s output and can act as a moderator. Ideally, a system should have a DICOM viewer integrated with an ability to input the arbitrator’s feedback.
    Summary Report: Once the arbitration process is done, the system should generate a summary of the algorithm’s performance on a set of parameters. These parameters can vary depending on the type of modality and the study.



    The poster can be viewed here: http://dx.doi.org/10.26044/ecr2020/C-14851