ORCID Profile
0000-0002-3161-1395
Current Organisation
Charité Universitätsmedizin Berlin
Publisher: Center for Open Science
Date: 03-05-2023
Abstract: Biomedical research is experiencing a data explosion, yet the accumulation of vast quantities of data alone does not guarantee a primary objective for science: building upon existing knowledge. Data collected that lack appropriate metadata cannot be fully interrogated or integrated into new research projects, leading to wasted resources and missed opportunities for data repurposing. This issue is particularly acute for research using animals, where concerns around data reproducibility and ensuring animal welfare are paramount. To address this, we propose a minimal metadata set (MNMS) designed to enable repurposing of in vivo data. MNMS builds on an existing validated guideline for reporting in vivo data (ARRIVE 2.0) and contributes to making in vivo data FAIR compliant. Scenarios where MNMS can be deployed in diverse research environments are presented, highlighting opportunities and challenges for data repurposing at different scales. We conclude with a ‘call for action’ to key stakeholders in biomedical research to adopt and deploy MNMS to accelerate both the advancement of knowledge and the betterment of animal welfare.
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: Elsevier BV
Date: 09-2023
Publisher: Portico
Date: 05-2019
Publisher: Portico
Date: 02-2019
Publisher: Center for Open Science
Date: 05-12-2022
Abstract: Introduction: The gut may play an important role in Major Depressive Disorder, and interventions targeting the gut microbiota may serve as potential treatments. Prebiotics have been reported to reduce anxiety- and depressive-like phenotypes in mice, rats, and humans. Bimuno®, a commercially available beta-galactooligosaccharide, has been shown to increase gut microbiota diversity. Aim: To investigate the effect of Bimuno® on anxiety- and depressive-like behaviour in Flinders Sensitive Line (FSL) and Flinders Resistant Line (FRL) rats. Methods: Guided by our published protocol, 64 male rats aged 8–14 weeks (32 FSL and 32 FRL) were randomised to receive Bimuno® or control (4 g/kg) daily for 4 weeks. We assessed despair-like behaviour using the Forced Swim Test, anxiety-like behaviour using the Elevated Plus Maze, and locomotion using the Open Field Test. All behavioural tests were conducted and assessed blinded to rat line and treatment allocation. We recorded animals’ weight and food and water intake weekly. We used two-way ANOVA to investigate the effects of treatment (control or prebiotic) and strain (FSL or FRL) on despair- and anxiety-like behaviours. Results: Treatment with Bimuno® had no effect on performance in the Forced Swim Test or the Elevated Plus Maze. We observed the expected behavioural differences between the FSL and FRL rats. Discussion: We used only male animals, and our sample size calculation was informed by published data. While confirmatory studies are needed, the present study questions the role of the microbiome reported in the literature.
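The two-way ANOVA described in this abstract can be sketched with simulated data. All cell means, standard deviations, and group sizes below are invented for illustration and do not come from the study; the interaction term is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced 2x2 design (strain x treatment) loosely mirroring the
# FSL/FRL comparison; every number here is made up for illustration.
n = 8  # rats per cell
cells = {
    ("FSL", "control"):   rng.normal(120, 15, n),  # immobility time (s)
    ("FSL", "prebiotic"): rng.normal(118, 15, n),
    ("FRL", "control"):   rng.normal(60, 15, n),
    ("FRL", "prebiotic"): rng.normal(62, 15, n),
}
data = np.array([cells[(s, t)] for s in ("FSL", "FRL")
                 for t in ("control", "prebiotic")])  # shape (4, n)

grand = data.mean()
strain_means = np.array([data[:2].mean(), data[2:].mean()])    # FSL, FRL
treat_means = np.array([data[::2].mean(), data[1::2].mean()])  # control, prebiotic

# Balanced-design sums of squares for the two main effects
# (interaction term omitted for brevity).
ss_strain = 2 * n * ((strain_means - grand) ** 2).sum()
ss_treat = 2 * n * ((treat_means - grand) ** 2).sum()
ss_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
df_within = 4 * (n - 1)

f_strain = ss_strain / (ss_within / df_within)  # df = 1 per main effect
f_treat = ss_treat / (ss_within / df_within)
print(f"F(strain) = {f_strain:.1f}, F(treatment) = {f_treat:.2f}")
```

With these invented cell means, the strain main effect dominates while the treatment effect is negligible, matching the pattern of results the abstract reports.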
Publisher: Springer Science and Business Media LLC
Date: 23-03-2020
Publisher: Elsevier BV
Date: 05-2020
Publisher: Cold Spring Harbor Laboratory
Date: 30-01-2018
DOI: 10.1101/256776
Abstract: Meta-analysis is increasingly used to summarise the findings identified in systematic reviews of animal studies modelling human disease. Such reviews typically identify a large number of individually small studies, testing efficacy under a variety of conditions. This leads to substantial heterogeneity, and identifying potential sources of this heterogeneity is an important function of such analyses. However, the statistical performance of different approaches (normalised compared with standardised mean difference estimates of effect size; stratified meta-analysis compared with meta-regression) is not known. We used data from 3116 experiments in focal cerebral ischaemia to construct a linear model predicting observed improvement in outcome contingent on 25 independent variables. We used stochastic simulation to attribute these variables to simulated studies according to their prevalence. To ascertain the ability to detect an effect of a given variable, we additionally introduced a “variable of interest” of given prevalence and effect. To establish any impact of a latent variable on the apparent influence of the variable of interest, we also introduced a “latent confounding variable” with given prevalence and effect, and allowed the prevalence of the variable of interest to differ in the presence and absence of the latent variable. Generally, the normalised mean difference (NMD) approach had higher statistical power than the standardised mean difference (SMD) approach. Even when the effect size and the number of studies contributing to the meta-analysis were small, there was good statistical power to detect the overall effect, with a low false positive rate. For detecting an effect of the variable of interest, stratified meta-analysis was associated with a substantial false positive rate with NMD estimates of effect size, while using an SMD estimate of effect size had very low statistical power.
Univariate and multivariable meta-regression performed substantially better, with a low false positive rate for both NMD and SMD approaches; power was higher for NMD than for SMD. The presence or absence of a latent confounding variable only introduced an apparent effect of the variable of interest when there was substantial asymmetry in the prevalence of the variable of interest in the presence or absence of the confounding variable. In meta-analysis of data from animal studies, NMD estimates of effect size should be used in preference to SMD estimates, and meta-regression should, where possible, be chosen over stratified meta-analysis. The power to detect the influence of the variable of interest depends on the effect of the variable of interest and its prevalence, but unless effects are very large, adequate power is only achieved once at least 100 experiments are included in the meta-analysis.
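The two effect-size metrics compared in this abstract can be sketched as follows. The normalisation convention shown (improvement expressed relative to the control group, optionally anchored to a sham baseline) is one common formulation in preclinical stroke meta-analysis, not necessarily the exact one used in the paper, and the example numbers are invented:

```python
import math

def nmd(mean_treat, mean_ctrl, mean_sham=0.0):
    """Normalised mean difference: improvement as a fraction of the control
    group's outcome (illustrative formulation; published analyses sometimes
    normalise to a sham/untreated baseline instead of zero)."""
    return (mean_ctrl - mean_treat) / (mean_ctrl - mean_sham)

def smd(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardised mean difference (Cohen's d with pooled SD); sign chosen
    so that a reduction in the outcome (e.g. infarct volume) is positive."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_ctrl - mean_treat) / pooled_sd

# Invented example: infarct volume 100 (control) vs 70 (treated), SD 15, n=10
print(nmd(70, 100))                              # 0.3, i.e. 30% improvement
print(smd(70, 100, 15, 15, 10, 10))              # 2.0 pooled-SD units
```

The key practical difference is visible here: NMD is expressed on the outcome's own scale (a fraction of the control value), whereas SMD is expressed in units of pooled standard deviation, which is why the two approaches behave differently in the simulations.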
Publisher: European Association for Health Information and Libraries EAHIL
Date: 24-06-2021
DOI: 10.32384/JEAHIL17465
Abstract: Throughout the global coronavirus pandemic, we have seen an unprecedented volume of COVID-19 research publications. This vast body of evidence continues to grow, making it difficult for research users to keep up with the pace of evolving research findings. To enable the synthesis of this evidence for timely use by researchers, policymakers, and other stakeholders, we developed an automated workflow to collect, categorise, and visualise the evidence from primary COVID-19 research studies. We trained a crowd of volunteer reviewers to annotate studies by relevance to COVID-19, study objectives, and methodological approaches. Using these human decisions, we are training machine learning classifiers and applying text-mining tools to continually categorise the findings and evaluate the quality of COVID-19 evidence.
Publisher: Springer Science and Business Media LLC
Date: 15-01-2019
Publisher: Portico
Date: 07-2021
Publisher: Springer Science and Business Media LLC
Date: 03-06-2022
DOI: 10.1186/S13643-022-01985-6
Abstract: Rigorous evidence is vital in all disciplines to ensure efficient, appropriate, and fit-for-purpose decision-making with minimised risk of unintended harm. To date, however, disciplines have been slow to share evidence synthesis frameworks, best practices, and tools amongst one another. Recent progress in collaborative digital and programmatic frameworks, such as the free and open-source software R, has significantly expanded the opportunities for development of free-to-use, incrementally improvable, community-driven tools to support evidence synthesis (e.g. EviAtlas, robvis, PRISMA2020 flow diagrams and metadat). Despite this, evidence synthesis (and meta-analysis) practitioners and methodologists who make use of R remain relatively disconnected from one another. Here, we report on a new virtual conference for evidence synthesis and meta-analysis in the R programming environment (ESMARConf) that aims to connect these communities. By designing an entirely free and online conference from scratch, we have been able to focus efforts on maximising accessibility and equity, making these core missions for our new community of practice. As a community of practice, ESMARConf builds on the success and groundwork of the broader R community and systematic review coordinating bodies (e.g. Cochrane), but fills an important niche. ESMARConf aims to maximise accessibility and equity of participants across regions, contexts, and social backgrounds, forging a level playing field in a digital, connected, and online future of evidence synthesis. We believe that everyone should have the same access to participation and involvement, and we believe ESMARConf provides a vital opportunity to push for equity across disciplines, regions, and personal situations.
Publisher: Center for Open Science
Date: 08-07-2019
Abstract: Systematic review and meta-analysis are powerful tools to provide an unbiased overview of all available literature addressing a specific research question. However, systematic reviews are resource-intensive. To address this, automation tools to aid systematic review research are increasingly being developed. Despite this, recent research suggests that uptake of these tools among evidence synthesis researchers is slow; potential barriers include a steep learning curve, mismatched workflows, and lack of support. Here we propose a set of standards for automation tools and platforms built to aid the systematic review community. The aim of these standards is to improve the integration of different tools into the research process and to increase transparency in the field of automation tools for evidence synthesis. The technical standards set out a minimum level and format of documentation required for publishing and disseminating automation tools. Further, we present an orchestrator platform, the Integration Interface, a system to bring compliant automation tools together, independent of programming language, into a succinct workflow. The Integration Interface aims to reduce the barriers associated with using single or multiple automation tools in the evidence synthesis research process.
Publisher: Portland Press Ltd.
Date: 05-2023
DOI: 10.1042/CS20220494
Abstract: Systematic reviews and meta-analysis are the cornerstones of evidence-based decision making and priority setting. However, traditional systematic reviews are time and labour intensive, limiting their feasibility to comprehensively evaluate the latest evidence in research-intensive areas. Recent developments in automation, machine learning and systematic review technologies have enabled efficiency gains. Building upon these advances, we developed Systematic Online Living Evidence Summaries (SOLES) to accelerate evidence synthesis. In this approach, we integrate automated processes to continuously gather, synthesise and summarise all existing evidence from a research domain, and report the resulting current curated content as interrogatable databases via interactive web applications. SOLES can benefit various stakeholders by (i) providing a systematic overview of current evidence to identify knowledge gaps, (ii) providing an accelerated starting point for a more detailed systematic review, and (iii) facilitating collaboration and coordination in evidence synthesis.
Publisher: Portland Press Ltd.
Date: 13-10-2017
DOI: 10.1042/CS20160722
Abstract: Background: Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. Methods: We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Results: Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation), and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). Discussion: There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported.
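Automated annotation of risk-of-bias reporting can be sketched in its simplest form as dictionary matching over full text. The published tool relied on trained text-analytic models, so the regular expressions and example text below are illustrative placeholders only:

```python
import re

# Illustrative patterns for three risk-of-bias reporting items; a production
# tool would use trained classifiers rather than hand-written regexes.
PATTERNS = {
    "randomization": re.compile(r"\brandomi[sz](ed|ation)\b", re.I),
    "blinding": re.compile(r"\bblind(ed|ing)?\b", re.I),
    "sample_size_calculation": re.compile(
        r"\b(sample size|power) calculation\b", re.I),
}

def annotate(full_text):
    """Return, per reporting item, whether the text mentions it."""
    return {item: bool(rx.search(full_text)) for item, rx in PATTERNS.items()}

# Invented methods-section snippet for demonstration
methods = ("Animals were randomised to MCAO or sham surgery; outcome "
           "assessment was performed blinded to group allocation.")
print(annotate(methods))
```

Such keyword matching illustrates why accuracy differs by item, as reported above: blinding is usually signalled by a small set of distinctive terms, whereas randomization and sample size calculations are described more variably.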
Publisher: Springer Science and Business Media LLC
Date: 2021
Publisher: EMBO
Date: 20-12-2022
Publisher: Cold Spring Harbor Laboratory
Date: 31-01-2018
DOI: 10.1101/255760
Abstract: Here we outline a method of applying existing machine learning (ML) approaches to aid citation screening in an on-going broad and shallow systematic review of preclinical animal studies, with the aim of achieving a high-performing algorithm comparable to human screening. We applied ML approaches to a broad systematic review of animal models of depression at the citation screening stage. We tested two independently developed ML approaches which used different classification models and feature sets. We recorded the performance of the ML approaches on an unseen validation set of papers using sensitivity, specificity and accuracy. We aimed to achieve 95% sensitivity and to maximise specificity. The classification model providing the most accurate predictions was applied to the remaining unseen records in the dataset and will be used in the next stage of the preclinical biomedical sciences systematic review. We used a cross validation technique to assign ML inclusion likelihood scores to the human-screened records, to identify potential errors made during the human screening process (error analysis). ML approaches reached 98.7% sensitivity based on learning from a training set of 5749 records, with an inclusion prevalence of 13.2%. The highest level of specificity reached was 86%. Performance was assessed on an independent validation dataset. Human errors in the training and validation sets were successfully identified using the inclusion likelihood assigned by the ML model to highlight discrepancies. Training the ML algorithm on the corrected dataset improved the specificity of the algorithm without compromising sensitivity. Error analysis correction led to a 3% improvement in sensitivity and specificity, which increases precision and accuracy of the ML algorithm. This work has confirmed the performance and application of ML algorithms for screening in systematic reviews of preclinical animal studies.
It has highlighted the novel use of ML algorithms to identify human error. This needs to be confirmed in other reviews, but represents a promising approach to integrating human decisions and automation in systematic review methodology.
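The screening metrics reported in this abstract (sensitivity and specificity against human decisions) can be illustrated with a deliberately simple keyword classifier. The actual study used trained ML models with richer feature sets; the keywords and records below are invented for demonstration:

```python
# Toy screening classifier: include a record if it mentions any
# depression-model keyword. Purely illustrative; not the study's method.
KEYWORDS = {"depression", "depressive", "forced swim"}

def predict(text):
    """Return True if the record should be included (toy rule)."""
    text = text.lower()
    return any(k in text for k in KEYWORDS)

def sensitivity_specificity(records):
    """Compare predictions against human include/exclude decisions."""
    tp = fp = tn = fn = 0
    for text, included in records:
        pred = predict(text)
        if pred and included:
            tp += 1
        elif pred and not included:
            fp += 1
        elif not pred and included:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Invented validation records: (title, human inclusion decision)
validation = [
    ("Chronic stress induces depressive-like behaviour in mice", True),
    ("Forced swim test outcomes after ketamine treatment", True),
    ("Cardiac output in hypertensive rats", False),
    ("Depression model phenotyping in Flinders Sensitive Line rats", True),
]
sens, spec = sensitivity_specificity(validation)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

In a real review, sensitivity is prioritised (the abstract's 95% target) because a missed relevant record is costlier than an extra record passed to human screening; specificity is then maximised subject to that constraint.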
Publisher: Wiley
Date: 12-2016
DOI: 10.1002/EBM2.24
Publisher: Portico
Date: 03-2021
Publisher: Pensoft Publishers
Date: 08-12-2022
DOI: 10.3897/RIO.8.E98457
Abstract: Lack of reproducibility of research results has become a major theme in recent years. As we emerge from the COVID-19 pandemic, economic pressures and exposed consequences of lack of societal trust in science make addressing reproducibility of urgent importance. TIER2 is a new international project funded by the European Commission under their Horizon Europe programme. Covering three broad research areas (social, life and computer sciences) and two cross-disciplinary stakeholder groups (research publishers and funders) to systematically investigate reproducibility across contexts, TIER2 will significantly boost knowledge on reproducibility, create tools, engage communities, and implement interventions and policy across different contexts to increase re-use and overall quality of research results in the European Research Area and global R&I, and consequently increase trust, integrity and efficiency in research.
Publisher: Center for Open Science
Date: 18-08-2022
Publisher: Elsevier BV
Date: 05-2020
Publisher: Center for Open Science
Date: 28-05-2023
Abstract: Across disciplines, researchers increasingly recognize that open science and reproducible research practices may accelerate scientific progress by allowing others to reuse research outputs and by promoting rigorous research that is more likely to yield trustworthy results. While initiatives, training programs, and funder policies encourage researchers to adopt reproducible research and open science practices, these practices are uncommon in many fields. Researchers need training to integrate these practices into their daily work. We organized a virtual brainstorming event, in collaboration with the German Reproducibility Network, to discuss strategies for making reproducible research and open science training the norm at research institutions. Here, we outline eleven strategies, concentrated in three areas: (1) offering training, (2) adapting research assessment criteria and program requirements, and (3) building communities. We provide a brief overview of each strategy, offer tips for implementation, and provide links to resources. Our goal is to encourage members of the research community to think creatively about the many ways they can contribute and collaborate to build communities, and make reproducible research and open science training the norm. Researchers may act in their roles as scientists, supervisors, mentors, instructors, and members of curriculum, hiring or evaluation committees. Institutional leadership and research administration and support staff can accelerate progress by implementing change across their institutions.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Alexandra Bannach-Brown.