E-Bulletin, Issue 4 from July 7, 2020

ESR begins online conference 15 July; ACR and RSNA call for halt in regulatory pathways for autonomous AI

Welcome to our fourth monthly newsletter on neuroradiology and AI. We hope you find it helpful. Please feel free to share it with others.

If you would like to be informed when the new issue is out, please sign up here.

ACR and RSNA Call for a Halt in Regulatory Pathways for Autonomous AI: FDA regulations should focus on supporting physicians with AI, not replacing them

The American College of Radiology (ACR) and the Radiological Society of North America (RSNA) called on the U.S. Food and Drug Administration (FDA) to delay developing regulatory pathways for autonomous artificial intelligence (AI) in radiology. They argued that there is a lack of research-based criteria showing that algorithms are generalizable across clinical locations. They also said that published research is heavily weighted toward studies showing that algorithms perform poorly across heterogeneous patient populations. Instead, the ACR and RSNA called for spending regulatory resources on algorithms that assist clinicians in patient care, rather than removing physicians from the process.

ECR 2020 begins 15 July: Online program replaces in-person conference

The European Congress of Radiology (ECR) holds ECR 2020: The Summer Edition online from 15 July through 19 July. The program includes more than 1,000 recorded abstracts and electronic poster presentations, and more than 30 live online sessions. Highlight weeks from July through November cover specific subjects; artificial intelligence will be featured 20 July through 23 July.

We are happy to announce that Cercare Medical is LIVE! at ECR 2020. You can learn more about what to expect at our stand here.

AI-POWERED IMAGING SOLUTIONS AT ECR 2020

Visit our virtual stand to learn more about AI-powered decision support and perfusion analysis for CT and MRI

Neuroradiology and AI in the news

Challenges and opportunities in making artificial intelligence (AI) interpretable in radiology. This review and opinion piece in Radiology: Artificial Intelligence argues that AI systems entering clinical radiology practice must earn the trust of those using them. That means the approaches should be interpretable, so that those relying on them understand how the complex algorithms work. The black-box problem with AI, which masks how conclusions are reached, makes it difficult for users to trust the results. The authors describe interpretability methods in development that explain AI systems through visualization, semantics, or counterexamples.

Implementation of model explainability for a basic brain tumor detection using convolutional neural networks on MRI slices. This study in Neuroradiology focused on why explaining convolutional neural network (CNN) models matters in the early stages of training a CNN for medical use. The authors suggest that doing so offers long-term time savings and helps clinicians better integrate CNN-based predictions into clinical decision making. To demonstrate the point, the study trained a CNN on MRI slices to differentiate vestibular schwannoma, glioblastoma, or no tumor.
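
For readers curious what a gradient-based explanation of a CNN can look like in practice, here is a minimal, hypothetical sketch in PyTorch, not the study's code: a toy three-class slice classifier plus an input-gradient saliency map. The names SliceCNN and saliency_map, the layer sizes, and the 128x128 input shape are our own illustrative assumptions.

```python
# Hypothetical sketch (not the study's code): a small CNN for three-class MRI-slice
# classification plus a gradient-based saliency map as a simple explainability check.
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # vestibular schwannoma / glioblastoma / no tumor
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, n_classes)  # assumes 128x128 input slices

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def saliency_map(model: nn.Module, mri_slice: torch.Tensor) -> torch.Tensor:
    """Return |d(score)/d(pixel)| for the predicted class: a crude 'where did the CNN look' map."""
    model.eval()
    x = mri_slice.clone().requires_grad_(True)   # shape (1, 1, 128, 128)
    scores = model(x)
    scores[0, scores.argmax()].backward()        # gradient of the winning class score
    return x.grad.abs().squeeze()                # per-pixel importance

# Usage with a random stand-in for a preprocessed MRI slice:
model = SliceCNN()
dummy_slice = torch.randn(1, 1, 128, 128)
print(saliency_map(model, dummy_slice).shape)    # torch.Size([128, 128])
```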

Machine learning predictions on neoplastic and non-neoplastic acute intracerebral hemorrhage in CT brain scans, using radiomic image features. Differentiating neoplastic from non-neoplastic intracerebral hemorrhage (ICH) early with imaging studies is difficult, especially for extensive ICHs. This study in Frontiers in Neurology evaluates machine learning-based prediction of acute ICH etiology, based on quantitative radiomic image features extracted from initial non-contrast-enhanced computed tomography (NECT) brain scans. The study included NECT brain scans from 77 patients with acute ICH (n=50 non-neoplastic, n=27 neoplastic). The researchers extracted radiomic features such as shape, histogram, and texture markers from non-, wavelet-, and log-sigma-filtered images, using regions of interest covering the ICH and the perihematomal edema (PHE). They concluded that the approach provided high discriminatory power for predicting non-neoplastic versus neoplastic ICH and, in clinical practice, could improve patient care at low risk and cost.
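
As a rough illustration of that kind of workflow, and emphatically not the study's own pipeline, the sketch below computes a few hand-rolled shape, histogram, and texture-style descriptors from an ROI and cross-validates a scikit-learn classifier. The roi_features function, the feature choices, and all data here are synthetic assumptions.

```python
# Illustrative sketch only (not the study's pipeline): simple "radiomic-style"
# features from an ICH region of interest, fed to a scikit-learn classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def roi_features(ct_slice: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Crude shape/histogram/texture-like descriptors of the masked region."""
    voxels = ct_slice[roi_mask > 0]
    grad_y, grad_x = np.gradient(ct_slice)
    edge = np.hypot(grad_x, grad_y)[roi_mask > 0]
    return np.array([
        roi_mask.sum(),                 # shape feature: lesion area
        voxels.mean(), voxels.std(),    # histogram features: mean / spread of density
        np.percentile(voxels, 90),      # high-density tail
        edge.mean(),                    # texture-like feature: edge strength inside the ROI
    ])

# Synthetic stand-ins for NECT slices and ROI masks (n=77 as in the paper; values random).
rng = np.random.default_rng(0)
X = np.stack([roi_features(rng.normal(40, 10, (64, 64)),
                           (rng.random((64, 64)) > 0.7).astype(int)) for _ in range(77)])
y = np.array([0] * 50 + [1] * 27)       # 0 = non-neoplastic, 1 = neoplastic

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```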

Brain Volume: An Important Determinant of Functional Outcome After Acute Ischemic Stroke. This study in Mayo Clinic Proceedings examined whether brain volume is associated with functional outcome after acute ischemic stroke (AIS). Using data from 912 patients who experienced AIS, researchers analyzed clinical brain MRI obtained at admission for the index stroke and assessed functional outcome with the modified Rankin Scale score between 60 and 190 days poststroke. They concluded that a larger brain volume quantified on MRI at the time of stroke was associated with better functional outcome, suggesting that brain volume is a prognostic, protective biomarker.
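
For illustration only, here is a minimal sketch of how such an association might be modeled with logistic regression in Python using statsmodels. The cohort size matches the paper, but every value, coefficient, and the dichotomized outcome are simulated assumptions, not the study's data or analysis.

```python
# Hypothetical illustration (not the study's analysis): logistic regression of a
# dichotomized modified Rankin Scale outcome on MRI-derived brain volume plus age.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 912                                          # same cohort size as the paper, synthetic values
age = rng.normal(68, 12, n)
brain_volume = rng.normal(1200, 100, n) - 3 * (age - 68) / 12    # mL, shrinks slightly with age
logit = -0.5 + 0.03 * (age - 68) - 0.01 * (brain_volume - 1200)  # larger volume -> better odds
poor_outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))         # 1 = poor poststroke outcome

X = sm.add_constant(np.column_stack([brain_volume, age]))
fit = sm.Logit(poor_outcome, X).fit(disp=0)
print(fit.params)   # a negative brain-volume coefficient would mirror the 'protective' finding
```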

SIIM 2020: How to create robust radiology AI algorithms. At the Society for Imaging Informatics in Medicine (SIIM) virtual annual meeting in June, radiologist Daniel Rubin, MD, of Stanford University, and Jayashree Kalpathy-Cramer, PhD, of the Massachusetts General Hospital and Brigham and Women's Hospital Center for Clinical Data Science, discussed ways to improve the robustness of AI models. Challenges to AI model robustness include leveraging data from multiple sites, adhering to annotation standards, and the lack of tools to evaluate AI in clinical practice. The speakers offer solutions to these challenges in this AuntMinnie.com presentation recap.

Urban-rural inequities in acute stroke care and in-hospital mortality. Patients experiencing a stroke in a rural area are less likely to receive the most advanced treatments and are more likely to die than patients with similar diagnoses in urban areas, according to a new study in Stroke. The study covered cases from 2012 to 2017, with no change in the annual mortality gap during that time. Access to specialists, including neurologists, interventional neurologists, neurosurgeons, and radiologists, is a limiting factor in rural care.

Studies continue to show that AI algorithms are gaining scientific validation in neuroradiology. Especially in rural care settings, where specialists are less available, AI algorithms can quickly and accurately provide expert guidance on imaging studies. These algorithms not only supplement the work of radiologists at all experience levels but also provide high-quality data, alerting physicians to prioritize certain patients in the workflow. Increasingly, it is important to understand AI capabilities and begin incorporating these tools into the healthcare environment.

Would you like to learn more about how AI works and how it can be implemented in your department? Reach out to our experts