ORCID Profile
0000-0003-0435-0879
Current Organisations
Massachusetts General Hospital, University of Oxford
Publisher: Elsevier BV
Date: 03-2023
Publisher: Wiley
Date: 28-06-2008
Publisher: Cold Spring Harbor Laboratory
Date: 18-01-2023
DOI: 10.1101/2023.01.16.23284632
Abstract: Acute ischemic stroke can be subtle to detect on non-contrast computed tomography imaging. We show that a novel artificial intelligence model significantly improves the performance of physicians, including ED physicians, neurologists and radiologists, in identifying and quantifying the volume of acute ischemic stroke lesions. This model may lead to improved clinical decision-making for stroke patients.
Publisher: Georg Thieme Verlag KG
Date: 02-2022
Abstract: Artificial intelligence is already innovating in the provision of neurologic care. This review explores key artificial intelligence concepts, their application to neurologic diagnosis, prognosis, and treatment, and challenges that await their broader adoption. The development of new diagnostic biomarkers, individualization of prognostic information, and improved access to treatment are among the plethora of possibilities. These advances, however, reflect only the tip of the iceberg for the ways in which artificial intelligence may transform neurologic care in the future.
Publisher: MDPI AG
Date: 30-07-2022
DOI: 10.3390/DIAGNOSTICS12081844
Abstract: (1) Background: Optimal anatomic coverage is important for radiation-dose optimization. We trained and tested two deep learning (DL) algorithms on a machine vision tool library platform (Cognex Vision Pro Deep Learning software) to recognize anatomic landmarks and classify chest CT as those with optimum, under-scanned, or over-scanned scan length. (2) Methods: To test our hypothesis, we performed a study with 428 consecutive chest CT examinations (mean age 70 ± 14 years; male:female 190:238) performed at one of four hospitals. CT examinations from two hospitals were used to train the DL classification algorithms to identify lung apices and bases. The developed algorithms were then tested on the data from the remaining two hospitals. For each CT, we recorded the scan lengths above and below the lung apices and bases. Model performance was assessed with receiver operating characteristics (ROC) analysis. (3) Results: The two DL models for lung apices and bases had high sensitivity, specificity, accuracy, and areas under the curve (AUC) for identifying under-scanning (100%, 99%, 99%, and 0.999 (95% CI 0.996–1.000)) and over-scanning (99%, 99%, 99%, and 0.998 (95% CI 0.992–1.000)). (4) Conclusions: Our DL models can accurately identify markers for missing anatomic coverage and over-scanning in chest CTs.
Publisher: Springer Science and Business Media LLC
Date: 05-01-2023
DOI: 10.1038/S41598-023-27496-5
Abstract: Non-contrast head CT (NCCT) is extremely insensitive for early (<3–6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative, 90 stroke-positive middle cerebral artery territory only infarcts), with sensitivity 96% (specificity 72%) for the model versus 61–66% (specificity 90–92%) for the experts. Model infarct volume estimates also strongly correlated with those of diffusion MRI (r² = 0.98). When this 150 CT test set was expanded to include a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative, 270 stroke-positive mixed territory infarcts), model sensitivity was 97%, specificity 99%, for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.
Publisher: Springer Science and Business Media LLC
Date: 22-08-2016
Publisher: Georg Thieme Verlag KG
Date: 04-2018
Abstract: Neurology training is essential for providing neurologic care globally. Large disparities in availability of neurology training exist between higher- and lower-income countries. This review explores the worldwide distribution of neurology training programs and trainees, the characteristics of training programs in different parts of the world, and initiatives aimed at increasing access to neurology training in under-resourced regions.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 07-05-2018
Publisher: Cold Spring Harbor Laboratory
Date: 23-06-2022
DOI: 10.1101/2022.06.23.22276818
Abstract: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model for identifying substantial motion artifacts on CT pulmonary angiography (CTPA) that have a negative impact on diagnostic interpretation. With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 for the following terms: “motion artifacts,” “respiratory motion,” “technically inadequate,” and “suboptimal” or “limited exam.” All CTPA reports belonged to two quaternary (Site A, n = 335; Site B, n = 259) and one community (Site C, n = 199) healthcare sites. A thoracic radiologist reviewed CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images belonging to 793 CTPA exams were de-identified and exported offline into an AI model building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model to perform two-class classification (“motion” or “no motion”) with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Site A and Site C were used for training and validation; testing was performed on the Site B CTPA exams. A 5-fold repeated cross-validation was performed to evaluate the model performance with accuracy and receiver operating characteristics (ROC) analysis. Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts, and 421 had substantial motion artifacts. The statistics for the average performance of the AI model after 5-fold repeated cross-validation for the two-class classification included 94% sensitivity, 91% specificity, 93% accuracy, and 0.93 area under the ROC curve (AUC: 95% CI 0.89–0.97).
The AI model used in this study can successfully identify CTPA exams with motion artifacts that limit diagnostic interpretation in multicenter training and test datasets. The model can help alert technologists to the presence of substantial motion artifacts on CTPA, where a repeat image acquisition can help salvage diagnostic information.
Publisher: Elsevier BV
Date: 05-2019
DOI: 10.1016/J.JNEUROIM.2019.03.008
Abstract: We describe the case of a 53-year-old woman who undergoes total splenectomy and later presents with aquaporin-4 antibody positive neuromyelitis optica (NMO). The occurrence of NMO after acquired immunosuppression raises the possibility of NMO as a form of secondary autoimmunity.
Publisher: Wiley
Date: 20-04-2012
Publisher: Springer Science and Business Media LLC
Date: 09-02-2022
DOI: 10.1038/S41598-022-06021-0
Abstract: Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify the volume on magnetic resonance imaging (MRI) has important treatment implications. We developed a machine learning model that used the apparent diffusion coefficient and diffusion weighted imaging series. It was trained on 6,657 MRI studies from Massachusetts General Hospital (MGH; Boston, USA). All studies were labelled positive or negative for infarct (classification annotation), with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while not requiring the extensive time to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at MGH, consecutive stroke team activations for 6 months at a hospital that did not provide training data (Brigham and Women’s Hospital [BWH]; Boston, USA), and an international site (Diagnósticos da América SA [DASA]; Brazil). The model results were compared to radiologist ground truth interpretations. The model performed better when trained on classification and segmentation annotations (area under the receiver operating curve [AUROC] 0.995 [95% CI 0.992–0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR 0.642–0.861]) compared to segmentation annotations alone (AUROC 0.982 [95% CI 0.972–0.990] and Dice coefficient 0.776 [IQR 0.584–0.857]). The model accurately identified infarcts for MGH stroke team activations (AUROC 0.964 [95% CI 0.943–0.982], 381 studies), BWH stroke team activations (AUROC 0.981 [95% CI 0.966–0.993], 247 studies), and at DASA (AUROC 0.998 [95% CI 0.993–1.000], 171 studies).
The model accurately segmented infarcts with Pearson correlation comparing model output and ground truth volumes between 0.968 and 0.986 for the three scenarios. Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 05-01-2021
DOI: 10.1212/NXI.0000000000000936
Abstract: To determine whether studying patients with strictly unilateral relapsing primary angiitis of the CNS (UR-PACNS) can support hemispheric differences in immune response mechanisms, we reviewed characteristics of a group of such patients. We surveilled our institution for patients with UR-PACNS, after characterizing one such case. We defined UR-PACNS as PACNS with clinical and radiographic relapses strictly recurring in 1 brain hemisphere, with or without hemiatrophy. PACNS must have been biopsy proven. Three total cases were identified at our institution. A literature search for similar reports yielded 4 additional cases. The combined 7 cases were reviewed for demographic, clinical, imaging, and pathologic trends. The median age at time of clinical onset among the 7 cases was 26 years (range 10–49 years); 5 were male (71%). All 7 patients presented with seizures. The mean follow-up duration was 7.5 years (4–14.1 years). The annualized relapse rate ranged between 0.2 and 1. UR-PACNS involved the left cerebral hemisphere in 5 of the 7 patients. There was no consistent relationship between the patient's dominant hand and the diseased side. When performed (5 cases), conventional angiogram was nondiagnostic. CSF examination showed nucleated cells and protein levels in normal range in 3 cases, and these values ranged from 6 to 11 cells/μL and 49 to 110 mg/dL, respectively, in the other 4 cases. All cases were diagnosed with lesional biopsy, showing lymphocytic type of vasculitis of the small- and medium-sized vessels. Patients treated with steroids alone showed progression. Induction therapy with cyclophosphamide or rituximab followed by a steroid sparing agent resulted in the most consistent disease remission. Combining our 3 cases with others reported in the literature allows better clinical understanding about this rare and extremely puzzling disease entity.
We hypothesize that a functional difference in immune responses, caused by such discrepancies as basal levels of cytokines, asymmetric distribution of microglia, and differences in modulation of the systemic immune functions, rather than a structural antigenic difference, between the right and left brain may explain this phenomenon, but this is speculative.
Publisher: Wiley
Date: 04-09-2015
DOI: 10.1002/GLIA.22906
Publisher: Research Square Platform LLC
Date: 29-06-2021
DOI: 10.21203/RS.3.RS-647830/V1
Abstract: Background: Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify the volume on magnetic resonance imaging (MRI) has important treatment implications. Methods: We developed a machine learning model that used the apparent diffusion coefficient and diffusion weighted imaging series. It was trained on 6,657 MRI studies. All studies were labelled positive or negative for infarct (classification annotation), with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while not requiring the extensive time to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at the hospital that provided training data, consecutive stroke team activations for 6 months at a hospital that did not provide training data, and an international site. The model results were compared to radiologist ground truth interpretations. Results: The model performed better when trained on classification and segmentation annotations (area under the receiver operating curve [AUROC] 0.995 [95% CI, 0.992-0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR, 0.642-0.861]) compared to segmentation annotations alone (AUROC 0.982 [95% CI, 0.972-0.990] and Dice coefficient 0.776 [IQR, 0.584-0.857]). The model accurately identified infarcts for training hospital stroke team activations (AUROC 0.964 [95% CI, 0.943-0.982], 381 studies), non-training hospital stroke team activations (AUROC 0.981 [95% CI, 0.966-0.993], 247 studies), and at the international site (AUROC 0.998 [95% CI, 0.993-1.000], 171 studies). The model accurately segmented infarcts, with Pearson correlation comparing model output and ground truth volumes between 0.968 and 0.986 for the three scenarios.
Conclusions: Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
Publisher: Georg Thieme Verlag KG
Date: 08-2018
Abstract: The neurological examination remains the essence of neurology. It allows symptoms to be assessed, diagnoses to be made, and dynamic functions to be followed. Skill in the neurological examination has faced increasing challenges from the encroachment of diagnostic imaging, but has maintained its clinical utility. It has also encountered the battle for the precious time within a medical curriculum. This review considers how the neurological examination can best be taught into the future. It does so by considering factors related to the examination, the learner, the teacher, and the modern clinical environment.
Publisher: MDPI AG
Date: 05-11-2021
Abstract: Galectin-3 (Gal-3) is an evolutionarily conserved and multifunctional protein that drives inflammation in disease. Gal-3’s role in the central nervous system has been less studied than in the immune system. However, recent studies show it exacerbates Alzheimer’s disease and is upregulated in a large variety of brain injuries, while loss of Gal-3 function can diminish symptoms of neurodegenerative diseases such as Alzheimer’s. Several novel molecular pathways for Gal-3 were recently uncovered. It is a natural ligand for TREM2 (triggering receptor expressed on myeloid cells), TLR4 (Toll-like receptor 4), and IR (insulin receptor). Gal-3 regulates a number of pathways including stimulation of bone morphogenetic protein (BMP) signaling and modulating Wnt signalling in a context-dependent manner. Gal-3 typically acts in pathology but is now known to affect subventricular zone (SVZ) neurogenesis and gliogenesis in the healthy brain. Despite its myriad interactors, Gal-3 has surprisingly specific and important functions in regulating SVZ neurogenesis in disease. Gal-1, a similar lectin often co-expressed with Gal-3, also has profound effects on brain pathology and adult neurogenesis. Remarkably, Gal-3’s carbohydrate recognition domain bears structural similarity to the SARS-CoV-2 virus spike protein necessary for cell entry. Gal-3 can be targeted pharmacologically and is a valid target for several diseases involving brain inflammation. The wealth of molecular pathways now known further suggest its modulation could be therapeutically useful.
Publisher: MDPI AG
Date: 18-02-2023
DOI: 10.3390/DIAGNOSTICS13040778
Abstract: Purpose: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model for identifying substantial motion artifacts on CT pulmonary angiography (CTPA) that have a negative impact on diagnostic interpretation. Methods: With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 for the following terms: “motion artifacts”, “respiratory motion”, “technically inadequate”, and “suboptimal” or “limited exam”. All CTPA reports were from two quaternary (Site A, n = 335; Site B, n = 259) and one community (Site C, n = 199) healthcare sites. A thoracic radiologist reviewed CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images from 793 CTPA exams were de-identified and exported offline into an AI model building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model to perform two-class classification (“motion” or “no motion”) with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Site A and Site C were used for training and validation; testing was performed on the Site B CTPA exams. A five-fold repeated cross-validation was performed to evaluate the model performance with accuracy and receiver operating characteristics (ROC) analysis. Results: Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts, and 421 had substantial motion artifacts. The statistics for the average performance of the AI model after five-fold repeated cross-validation for the two-class classification included 94% sensitivity, 91% specificity, 93% accuracy, and 0.93 area under the ROC curve (AUC: 95% CI 0.89–0.97).
Conclusion: The AI model used in this study can successfully identify CTPA exams with motion artifacts that limit diagnostic interpretation in multicenter training and test datasets. Clinical relevance: The AI model used in the study can help alert technologists to the presence of substantial motion artifacts on CTPA, where a repeat image acquisition can help salvage diagnostic information.
Publisher: Cold Spring Harbor Laboratory
Date: 08-09-2023
Publisher: American Medical Association (AMA)
Date: 15-12-2022
DOI: 10.1001/JAMANETWORKOPEN.2022.47172
Abstract: Early detection of pneumothorax, most often via chest radiography, can help determine need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care. To compare the accuracy of an AI model vs consensus thoracic radiologist interpretations in detecting any pneumothorax (incorporating both nontension and tension pneumothorax) and tension pneumothorax. This diagnostic study was a retrospective standalone performance assessment using a data set of 1000 chest radiographs captured between June 1, 2015, and May 31, 2021. The radiographs were obtained from patients aged at least 18 years at 4 hospitals in the Mass General Brigham hospital network in the United States. Included radiographs were selected using 2 strategies from all chest radiography performed at the hospitals, including inpatient and outpatient. The first strategy identified consecutive radiographs with pneumothorax through a manual review of radiology reports, and the second strategy identified consecutive radiographs with tension pneumothorax using natural language processing. For both strategies, negative radiographs were selected by taking the next negative radiograph acquired from the same radiography machine as each positive radiograph. The final data set was an amalgamation of these processes. Each radiograph was interpreted independently by up to 3 radiologists to establish consensus ground-truth interpretations. Each radiograph was then interpreted by the AI model for the presence of pneumothorax and tension pneumothorax. This study was conducted between July and October 2021, with the primary analysis performed between October and November 2021. The primary end points were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. 
The secondary end points were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax. The final analysis included radiographs from 985 patients (mean [SD] age, 60.8 [19.0] years; 436 [44.3%] female patients), including 307 patients with nontension pneumothorax, 128 patients with tension pneumothorax, and 550 patients without pneumothorax. The AI model detected any pneumothorax with an AUC of 0.979 (95% CI, 0.970-0.987), sensitivity of 94.3% (95% CI, 92.0%-96.3%), and specificity of 92.0% (95% CI, 89.6%-94.2%), and tension pneumothorax with an AUC of 0.987 (95% CI, 0.980-0.992), sensitivity of 94.5% (95% CI, 90.6%-97.7%), and specificity of 95.3% (95% CI, 93.9%-96.6%). These findings suggest that the assessed AI model accurately detected pneumothorax and tension pneumothorax in this chest radiograph data set. The model’s use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax.
Publisher: BMJ
Date: 07-2021
Abstract: Expanding the US Food and Drug Administration–approved indications for immune checkpoint inhibitors in patients with cancer has resulted in therapeutic success and immune-related adverse events (irAEs). Neurologic irAEs (irAE-Ns) have an incidence of 1%–12% and a high fatality rate relative to other irAEs. Lack of standardized disease definitions and accurate phenotyping leads to syndrome misclassification and impedes development of evidence-based treatments and translational research. The objective of this study was to develop consensus guidance for an approach to irAE-Ns including disease definitions and severity grading. A working group of four neurologists drafted irAE-N consensus guidance and definitions, which were reviewed by the multidisciplinary Neuro irAE Disease Definition Panel including oncologists and irAE experts. A modified Delphi consensus process was used, with two rounds of anonymous ratings by panelists and two meetings to discuss areas of controversy. Panelists rated content for usability, appropriateness and accuracy on 9-point scales in electronic surveys and provided free text comments. Aggregated survey responses were incorporated into revised definitions. Consensus was based on numeric ratings using the RAND/University of California Los Angeles (UCLA) Appropriateness Method with prespecified definitions. 27 panelists from 15 academic medical centers voted on a total of 53 rating scales (6 general guidance, 24 central and 18 peripheral nervous system disease definition components, 3 severity criteria and 2 clinical trial adjudication statements); of these, 77% (41/53) received first round consensus. After revisions, all items received second round consensus. Consensus definitions were achieved for seven core disorders: irMeningitis, irEncephalitis, irDemyelinating disease, irVasculitis, irNeuropathy, irNeuromuscular junction disorders and irMyopathy.
For each disorder, six descriptors of diagnostic components are used: disease subtype, diagnostic certainty, severity, autoantibody association, exacerbation of pre-existing disease or de novo presentation, and presence or absence of concurrent irAE(s). These disease definitions standardize irAE-N classification. Diagnostic certainty is not always directly linked to certainty to treat as an irAE-N (ie, one might treat events in the probable or possible category). Given consensus on accuracy and usability from a representative panel group, we anticipate that the definitions will be used broadly across clinical and research settings.
Publisher: Cold Spring Harbor Laboratory
Date: 10-07-2022
DOI: 10.1101/2022.07.07.22277305
Abstract: Early detection of pneumothorax, most often on chest radiograph (CXR), can help determine need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care. This study aimed to compare the accuracy of an AI model (Annalise Enterprise) to consensus thoracic radiologist interpretations in detecting (1) pneumothorax (incorporating both non-tension and tension pneumothorax) and (2) tension pneumothorax. A retrospective standalone performance assessment was conducted on a dataset of 1,000 CXR cases. The cases were obtained from four hospitals in the United States. The cases were obtained from patients aged 18 years or older. They were selected using two strategies from all CXRs performed at the hospitals including inpatients and outpatients. The first strategy identified consecutive pneumothorax cases through a manual review of radiology reports and the second strategy identified consecutive tension pneumothorax cases using natural language processing. For both strategies, negative cases were selected by taking the next negative case acquired from the same x-ray machine. The final dataset was an amalgamation of these processes. Each case was interpreted independently by up to three radiologists to establish consensus ground truth interpretations. Each case was then interpreted by the AI model for the presence of pneumothorax and tension pneumothorax. The primary endpoints were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. The secondary endpoints were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax at predefined operating points. Model inference was successfully performed in 307 non-tension pneumothorax, 128 tension pneumothorax and 550 negative cases. 
The AI model detected pneumothorax with AUC of 0.979 (94.3% sensitivity, 92.0% specificity) and tension pneumothorax with AUC of 0.987 (94.5% sensitivity, 95.3% specificity). The assessed AI model accurately detected pneumothorax and tension pneumothorax on this CXR dataset. Its use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax. Does a commercial artificial intelligence model accurately detect simple and tension pneumothorax on chest x-ray? This retrospective study used 1,000 chest x-rays from four hospitals in the United States to compare artificial intelligence model outputs to consensus thoracic radiologist interpretations. The model detected pneumothorax (incorporating both simple and tension pneumothorax) with area under the curve (AUC) of 0.979 and tension pneumothorax with AUC of 0.987. The sensitivity and specificity were 94.3% and 92.0% respectively for pneumothorax, and 94.5% and 95.3% for tension pneumothorax. This artificial intelligence model could assist radiologists through its accurate detection of pneumothorax.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for James Hillis.