ORCID Profile
0000-0002-0018-5573
Current Organisations: University of Amsterdam (Universiteit van Amsterdam)
Publisher: Center for Open Science
Date: 11-05-2023
Abstract: We demonstrate that all conventional meta-analyses of correlation coefficients are biased, explain why, and offer solutions. Because the standard error of the correlation coefficient depends on the size of the coefficient, inverse-variance weighted averages will be biased even under ideal meta-analytical conditions (i.e., absence of publication bias, p-hacking, or other biases). Transformation to Fisher’s z often greatly reduces these biases but still does not mitigate them entirely. Although all are small-sample biases (n < 200), they will often have practical consequences in psychology, where the typical sample size of correlational studies is 86. We offer several solutions: a newly developed estimator, UWLS+3, and two small-sample adjustments. UWLS+3 is the unrestricted weighted least squares weighted average (UWLS) that adjusts the degrees of freedom used to calculate correlations and thereby renders any remaining bias scientifically trivial. We also offer a simple small-sample correction, (n-2)/(n-1), for random effects that works nearly as well as these other adjustments in most applications, and a small-sample adjustment, (4n-2)/(4n-1), to Fisher’s z-transformation that is better still.
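The correction factors named in this abstract are plain arithmetic, so they can be sketched in a few lines. This is an illustration only, not the paper's implementation: the abstract does not spell out exactly how each factor enters the estimation, so multiplying the Fisher z value by (4n-2)/(4n-1), and exposing (n-2)/(n-1) as a bare factor, are assumptions.

```python
import math

def fisher_z(r: float) -> float:
    """Standard Fisher z-transformation of a correlation coefficient r,
    i.e. 0.5 * log((1 + r) / (1 - r))."""
    return math.atanh(r)

def fisher_z_adjusted(r: float, n: int) -> float:
    """Small-sample-adjusted Fisher z. Applying the (4n-2)/(4n-1) factor
    multiplicatively is an assumption made for illustration."""
    return (4 * n - 2) / (4 * n - 1) * fisher_z(r)

def re_small_sample_factor(n: int) -> float:
    """The simple (n-2)/(n-1) small-sample correction factor mentioned
    for random-effects estimates; how it is applied is not specified
    in the abstract, so only the factor itself is computed here."""
    return (n - 2) / (n - 1)
```

For the typical correlational sample size of 86 cited above, both factors are very close to 1, which matches the abstract's framing of these as small (but practically relevant) biases.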
Publisher: Wiley
Date: 27-10-2021
DOI: 10.1002/SIM.9170
Abstract: We outline a Bayesian model‐averaged (BMA) meta‐analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness and across‐study heterogeneity. We construct four competing models by orthogonally combining two present‐absent assumptions, one for the treatment effect and one for across‐study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival prior distributions for the effect size and heterogeneity parameters. The relative predictive performance of the competing models and rival prior distributions was assessed using the remaining 50% of the Cochrane Database. On average, the model that assumes the presence of a treatment effect as well as across‐study heterogeneity outpredicted the other models, but not by a large margin. Within that model, predictive adequacy was relatively constant across the rival prior distributions. We propose specific empirical prior distributions, both for the field in general and for each of 46 specific medical subdisciplines. An example from oral health demonstrates how the proposed prior distributions can be used to conduct a BMA meta‐analysis in the open‐source software R and JASP. The preregistered analysis plan is available at osf.io/zs3df/.
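The four-model structure described above (treatment effect present/absent crossed with heterogeneity present/absent) lends itself to a short sketch of how Bayesian model averaging combines models. This is not the paper's implementation: the marginal likelihood values are invented, and the function names are hypothetical.

```python
def posterior_model_probs(marginal_likelihoods, prior_probs=None):
    """Convert per-model marginal likelihoods into posterior model
    probabilities, using a uniform model prior unless one is given."""
    k = len(marginal_likelihoods)
    if prior_probs is None:
        prior_probs = [1.0 / k] * k
    unnorm = [ml * p for ml, p in zip(marginal_likelihoods, prior_probs)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Four models from orthogonally combining effect (yes/no) and
# heterogeneity (yes/no). The numbers are purely illustrative.
models = ["no effect, fixed", "effect, fixed", "no effect, random", "effect, random"]
ml = [0.5, 1.2, 0.8, 2.0]
probs = posterior_model_probs(ml)

# Inclusion Bayes factor for the treatment effect: posterior odds of the
# effect-present models over the effect-absent models, divided by the
# prior odds (1 under the uniform model prior).
post_effect = probs[1] + probs[3]
bf_inclusion = post_effect / (1 - post_effect)
```

Under the illustrative numbers, the effect-present models jointly receive most of the posterior mass, so the inclusion Bayes factor favours the presence of a treatment effect.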
Publisher: Center for Open Science
Date: 16-10-2020
Abstract: Meta-analyses are essential for cumulative science, but their validity can be compromised by publication bias. In order to mitigate the impact of publication bias, one may apply publication bias adjustment techniques such as PET-PEESE and selection models. Implemented in JASP and R, these methods allow researchers without programming experience to conduct state-of-the-art publication bias adjusted meta-analysis. In this tutorial, we demonstrate how to conduct a publication bias adjusted meta-analysis in JASP and R and interpret the results. First, we explain two frequentist bias correction methods: PET-PEESE and selection models. Second, we introduce robust Bayesian meta-analysis (RoBMA), a Bayesian approach that simultaneously considers both PET-PEESE and selection models. We illustrate the methodology on an example data set, provide an instructional video (bit.ly/pubbias) and an R-markdown script (osf.io/uhaew/), and discuss the interpretation of the results. Finally, we include concrete guidance on reporting the meta-analytic results in an academic article.
Publisher: Center for Open Science
Date: 12-09-2022
Abstract: How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. Following provision of historical trend data on the domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1: N=86 teams, 359 forecasts), with an opportunity to update forecasts based on new data six months later (Tournament 2: N=120 teams, 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than simple statistical models (historical means, random walk, or linear regressions) or the aggregate forecasts of a sample from the general public (N=802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models, and based predictions on prior data.
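The three statistical benchmarks named in the abstract (historical mean, random walk, linear regression) are simple enough to sketch end to end. This is a self-contained illustration, not the tournament's actual scoring code; the choice of mean absolute error as the accuracy metric is an assumption.

```python
def historical_mean_forecast(history, horizon):
    """Forecast every future point as the mean of the observed series."""
    m = sum(history) / len(history)
    return [m] * horizon

def random_walk_forecast(history, horizon):
    """Forecast every future point as the last observed value."""
    return [history[-1]] * horizon

def linear_trend_forecast(history, horizon):
    """Ordinary least squares on the time index, extrapolated forward."""
    n = len(history)
    xbar = (n - 1) / 2
    ybar = sum(history) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (y - ybar) for x, y in enumerate(history))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return [intercept + slope * (n + h) for h in range(horizon)]

def mae(forecast, actual):
    """Mean absolute error between forecasts and realized values."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)
```

Comparing a submitted forecast's `mae` against the `mae` of these baselines on the same realized data reproduces the kind of benchmarking the abstract describes.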
Publisher: Center for Open Science
Date: 12-10-2023
Publisher: Wiley
Date: 07-08-2022
DOI: 10.1002/JRSM.1594
Abstract: Publication bias is a ubiquitous threat to the validity of meta‐analysis and the accumulation of scientific evidence. In order to estimate and counteract the impact of publication bias, multiple methods have been developed; however, recent simulation studies have shown the methods' performance to depend on the true data generating process, and no method consistently outperforms the others across a wide range of conditions. Unfortunately, when different methods lead to contradicting conclusions, researchers can choose those methods that lead to a desired outcome. To avoid the condition‐dependent, all‐or‐none choice between competing methods and conflicting results, we extend robust Bayesian meta‐analysis and model‐average across two prominent approaches of adjusting for publication bias: (1) selection models of p‐values and (2) models adjusting for small‐study effects. The resulting model ensemble weights the estimates and the evidence for the absence/presence of the effect from the competing approaches with the support they receive from the data. Applications, simulations, and comparisons to preregistered, multi‐lab replications demonstrate the benefits of Bayesian model‐averaging of complementary publication bias adjustment methods.
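The ensemble idea described above, weighting the estimates from competing adjustment approaches by the support they receive from the data, can be sketched in one line of arithmetic. The numbers below are invented for illustration and the helper name is hypothetical; this is not the RoBMA implementation.

```python
def model_averaged_estimate(estimates, post_probs):
    """Ensemble point estimate: per-model effect estimates weighted by
    the posterior probability each model receives from the data."""
    return sum(est * p for est, p in zip(estimates, post_probs))

# Purely illustrative: a selection-model estimate and a small-study-effects
# (PET-PEESE-style) estimate, averaged under hypothetical posterior weights.
estimates = [0.30, 0.10]
weights = [0.6, 0.4]
averaged = model_averaged_estimate(estimates, weights)
```

The averaged estimate lies between the two method-specific estimates, closer to whichever approach the data support more, which is the behaviour the abstract motivates as an alternative to an all-or-none choice between methods.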
Publisher: Informa UK Limited
Date: 06-07-2022
Publisher: Center for Open Science
Date: 10-05-2023
Abstract: In their book "Nudge: Improving Decisions About Health, Wealth and Happiness", Thaler and Sunstein argue that choice architectures are promising public policy interventions. This research programme motivated the creation of so-called "nudge units", which aim to apply insights from behavioural science to improve public policy. We take a close look at a meta-analysis of the evidence gathered by two of the largest and most influential nudge units, using statistical techniques to detect reporting biases. We find evidence suggestive of selective reporting on the full dataset, although this pattern is not robust in subgroup analyses. We therefore additionally evaluate the public pre-analysis plans from the Office of Evaluation Sciences. We find that the analysis plans and reporting usually lack sufficient detail to evaluate (unintentional) reporting biases. Nevertheless, we identify several instances of excellent practice. We additionally highlight several improvements that would enhance the effectiveness of the pre-analysis plans and reports as a means to combat reporting biases. We believe our findings and suggestions can further improve the evidence base on which policy decisions are made.
Publisher: SAGE Publications
Date: 07-2022
DOI: 10.1177/25152459221109259
Abstract: Meta-analyses are essential for cumulative science, but their validity can be compromised by publication bias. To mitigate the impact of publication bias, one may apply publication-bias-adjustment techniques such as precision-effect test and precision-effect estimate with standard errors (PET-PEESE) and selection models. These methods, implemented in JASP and R, allow researchers without programming experience to conduct state-of-the-art publication-bias-adjusted meta-analysis. In this tutorial, we demonstrate how to conduct a publication-bias-adjusted meta-analysis in JASP and R and interpret the results. First, we explain two frequentist bias-correction methods: PET-PEESE and selection models. Second, we introduce robust Bayesian meta-analysis, a Bayesian approach that simultaneously considers both PET-PEESE and selection models. We illustrate the methodology on an example data set, provide an instructional video (bit.ly/pubbias) and an R-markdown script (osf.io/uhaew/), and discuss the interpretation of the results. Finally, we include concrete guidance on reporting the meta-analytic results in an academic article.
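PET regresses effect sizes on their standard errors, PEESE on their variances, in both cases taking the intercept (the predicted effect at zero standard error) as the bias-adjusted estimate. A minimal inverse-variance-weighted least-squares sketch follows; it is an illustration under simplifying assumptions, not the tutorial's code, and the conditional test that normally decides between PET and PEESE is left to the caller.

```python
def wls_fit(y, x, w):
    """Weighted least squares of y on [1, x]; returns (intercept, slope)."""
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx * swx
    intercept = (swxx * swy - swx * swxy) / det
    slope = (sw * swxy - swx * swy) / det
    return intercept, slope

def pet_peese(effects, ses, use_peese):
    """PET uses the standard error as the predictor, PEESE the variance.
    The bias-adjusted estimate is the intercept: the effect a study with
    zero standard error would be predicted to show. In practice a test
    of the PET intercept chooses between the two; here the caller
    decides (a deliberate simplification)."""
    weights = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    predictor = [se ** 2 if use_peese else se for se in ses]
    intercept, _ = wls_fit(effects, predictor, weights)
    return intercept
```

On data where observed effects grow linearly with the standard error (the classic small-study pattern), the PET intercept recovers the effect that remains once that dependence is projected away.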
Publisher: BMJ
Date: 09-2022
DOI: 10.1136/BMJOPEN-2021-059202
Abstract: Physical activity among children and adolescents remains insufficient, despite the substantial efforts made by researchers and policymakers. Identifying and furthering our understanding of potential modifiable determinants of physical activity behaviour (PAB) and sedentary behaviour (SB) is crucial for the development of interventions that promote a shift from SB to PAB. The current protocol details the process through which a series of systematic literature reviews and meta-analyses (MAs) will be conducted to produce a best-evidence statement (BESt) and inform policymakers. The overall aim is to identify modifiable determinants that are associated with changes in PAB and SB in children and adolescents (aged 5–19 years) and to quantify their effect on, or association with, PAB/SB. A search will be performed in MEDLINE, SportDiscus, Web of Science, PsycINFO and the Cochrane Central Register of Controlled Trials. Randomised controlled trials (RCTs) and controlled trials (CTs) that investigate the effect of interventions on PAB/SB, and longitudinal studies that investigate the associations between modifiable determinants and PAB/SB at multiple time points, will be sought. Risk of bias assessments will be performed using adapted versions of Cochrane’s RoB 2.0 and ROBINS-I tools for RCTs and CTs, respectively, and an adapted version of the National Institutes of Health’s tool for longitudinal studies. Data will be synthesised narratively and, where possible, MAs will be performed using frequentist and Bayesian statistics. Modifiable determinants will be discussed considering the settings in which they were investigated and the PAB/SB measurement methods used. No ethical approval is needed, as no primary data will be collected. The findings will be disseminated in peer-reviewed publications and at academic conferences where possible. The BESt will also be shared with policymakers within the DE-PASS consortium in the first instance. PROSPERO registration number: CRD42021282874.
Publisher: Springer Science and Business Media LLC
Date: 09-02-2023
No related grants have been discovered for František Bartoš.