ORCID Profile
0000-0003-3925-3833
Current Organisations
University of Amsterdam, CSIRO Clayton, Centrum Wiskunde en Informatica
Publisher: Wiley
Date: 27-10-2021
DOI: 10.1002/SIM.9170
Abstract: We outline a Bayesian model‐averaged (BMA) meta‐analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness and across‐study heterogeneity. We construct four competing models by orthogonally combining two present‐absent assumptions, one for the treatment effect and one for across‐study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival prior distributions for the effect size and heterogeneity parameters. The relative predictive performance of the competing models and rival prior distributions was assessed using the remaining 50% of the Cochrane Database. On average, the model that assumes the presence of a treatment effect as well as across‐study heterogeneity outpredicted the other models, but not by a large margin. Within this model, predictive adequacy was relatively constant across the rival prior distributions. We propose specific empirical prior distributions, both for the field in general and for each of 46 specific medical subdisciplines. An example from oral health demonstrates how the proposed prior distributions can be used to conduct a BMA meta‐analysis in the open-source software R and JASP. The preregistered analysis plan is available at osf.io/zs3df/.
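The model-averaging step described in this abstract can be sketched in a few lines: given (log) marginal likelihoods for the four models, Bayes' rule yields posterior model probabilities, and an inclusion Bayes factor for the treatment effect follows from the odds of the two effect-present models against the two effect-absent models. The log marginal likelihood values below are placeholders for illustration only (in practice they would come from the fitted models), and the model names are hypothetical labels, not the paper's notation.

```python
import math

def posterior_model_probs(log_marglik, prior_probs=None):
    """Posterior model probabilities from log marginal likelihoods,
    using the log-sum-exp trick for numerical stability."""
    k = len(log_marglik)
    if prior_probs is None:
        prior_probs = [1.0 / k] * k  # equal prior model probabilities
    log_w = [lm + math.log(p) for lm, p in zip(log_marglik, prior_probs)]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    total = sum(w)
    return [wi / total for wi in w]

# Four models from crossing effect (absent/present) with
# heterogeneity (absent/present); values are illustrative placeholders.
names = ["effect-absent/fixed", "effect-present/fixed",
         "effect-absent/random", "effect-present/random"]
logml = [-14.2, -11.9, -13.5, -10.8]
probs = posterior_model_probs(logml)

# Inclusion Bayes factor for the treatment effect: posterior odds of the
# effect-present models over the effect-absent models, divided by the
# matching prior odds (which equal 1 under equal model priors).
p_effect = probs[1] + probs[3]
bf_incl = (p_effect / (1 - p_effect)) / 1.0
```

With equal model priors the prior inclusion odds are 1, so the inclusion Bayes factor reduces to the posterior odds; unequal priors would simply rescale it.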
Publisher: Foundation for Open Access Statistics
Date: 2019
Publisher: Frontiers Media SA
Date: 24-04-2015
Publisher: Elsevier BV
Date: 09-2016
Publisher: University of California Press
Date: 2017
DOI: 10.1525/COLLABRA.78
Abstract: Whenever parameter estimates are uncertain or observations are contaminated by measurement error, the Pearson correlation coefficient can severely underestimate the true strength of an association. Various approaches exist for inferring the correlation in the presence of estimation uncertainty and measurement error, but none are routinely applied in psychological research. Here we focus on a Bayesian hierarchical model proposed by Behseta, Berdyyeva, Olson, and Kass (2009) that allows researchers to infer the underlying correlation between error-contaminated observations. We show that this approach may also be applied to obtain the underlying correlation between uncertain parameter estimates, as well as the correlation between uncertain parameter estimates and noisy observations. We illustrate the Bayesian modeling of correlations with two empirical data sets. In each data set, we first infer the posterior distribution of the underlying correlation and then compute Bayes factors to quantify the evidence that the data provide for the presence of an association.
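The attenuation the abstract describes is easy to demonstrate by simulation. The sketch below uses the classical Spearman disattenuation formula (r divided by the square root of the product of the reliabilities), a simpler non-hierarchical counterpart of the Bayesian model by Behseta et al. discussed above; the true correlation, sample size, and noise variances are assumed values chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000
rho_true = 0.8  # assumed latent correlation for this demo

# Latent (error-free) scores with the true correlation
cov = [[1.0, rho_true], [rho_true, 1.0]]
x_true, y_true = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Add measurement error with variance 1; since the latent variance is
# also 1, the reliability of each observed variable is 1 / (1 + 1) = 0.5.
x_obs = x_true + rng.normal(0.0, 1.0, n)
y_obs = y_true + rng.normal(0.0, 1.0, n)

# Observed correlation is attenuated toward rho_true * 0.5 = 0.4
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]

# Classical disattenuation recovers roughly the true correlation
rel_x = rel_y = 0.5
r_corrected = r_obs / np.sqrt(rel_x * rel_y)
```

The point estimate is corrected here, but unlike the hierarchical Bayesian approach, this formula provides no uncertainty quantification and can yield corrected correlations above 1 in small samples.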
Publisher: Springer Science and Business Media LLC
Date: 10-2015
DOI: 10.3758/S13428-014-0517-4
Abstract: The power fallacy refers to the misconception that what holds on average (across an ensemble of hypothetical experiments) also holds for each case individually. According to the fallacy, high-power experiments always yield more informative data than do low-power experiments. Here we expose the fallacy with concrete examples, demonstrating that a particular outcome from a high-power experiment can be completely uninformative, whereas a particular outcome from a low-power experiment can be highly informative. Although power is useful in planning an experiment, it is less useful, and sometimes even misleading, for making inferences from observed data. To make inferences from data, we recommend the use of likelihood ratios or Bayes factors, which are the extension of likelihood ratios beyond point hypotheses. These methods of inference do not average over hypothetical replications of an experiment, but instead condition on the data that have actually been observed. In this way, likelihood ratios and Bayes factors rationally quantify the evidence that a particular data set provides for or against the null or any other hypothesis.
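The distinction the abstract draws, likelihood ratios for two point hypotheses versus Bayes factors that integrate over a prior, can be illustrated with a minimal binomial example. The data (10 successes in 20 trials) and the hypothesized rates are arbitrary values chosen for the sketch, not from the paper.

```python
from math import comb

n, k = 20, 10  # hypothetical data: 10 successes in 20 trials

def binom_lik(theta, n, k):
    """Binomial likelihood of k successes in n trials at rate theta."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Likelihood ratio for two point hypotheses: theta = 0.5 vs theta = 0.7.
# Both condition on the observed data; no averaging over replications.
lr = binom_lik(0.5, n, k) / binom_lik(0.7, n, k)

# Bayes factor for H0: theta = 0.5 against H1: theta ~ Uniform(0, 1).
# Under the uniform prior the marginal likelihood of the data is exactly
# 1 / (n + 1), so BF01 has a closed form here.
bf01 = binom_lik(0.5, n, k) / (1 / (n + 1))
```

With 10 successes in 20 trials, both quantities favor theta = 0.5: the likelihood ratio against the point alternative 0.7, and the Bayes factor (about 3.7) against the composite uniform alternative.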
Publisher: Springer Science and Business Media LLC
Date: 06-07-2018
Publisher: Elsevier BV
Date: 08-2017
Publisher: Elsevier BV
Date: 12-2023
Publisher: American Psychological Association (APA)
Date: 06-2023
DOI: 10.1037/MET0000454
Abstract: The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Informa UK Limited
Date: 28-05-2020
Publisher: Springer Science and Business Media LLC
Date: 11-10-2016
Abstract: We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory’s research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.
Publisher: American Psychological Association (APA)
Date: 04-2023
DOI: 10.1037/MET0000411
Abstract: Hypotheses concerning the distribution of multinomial proportions typically entail exact equality constraints that can be evaluated using standard tests. Whenever researchers formulate inequality constrained hypotheses, however, they must rely on sampling-based methods that are relatively inefficient and computationally expensive. To address this problem we developed a bridge sampling routine that allows an efficient evaluation of multinomial inequality constraints. An empirical application showcases that bridge sampling outperforms current Bayesian methods, especially when relatively little posterior mass falls in the restricted parameter space. The method is extended to mixtures between equality and inequality constrained hypotheses. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
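The baseline that bridge sampling improves upon can be sketched directly: the standard encompassing-prior approach estimates the Bayes factor for an inequality-constrained multinomial hypothesis as the ratio of posterior to prior mass in the constrained region, via Monte Carlo sampling. The counts and the ordering constraint below are hypothetical, and this is the simple estimator whose inefficiency (when the posterior mass in the restricted region is small) motivates the paper's method, not the bridge sampling routine itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical counts for three categories, and the inequality
# constraint theta1 < theta2 < theta3.
counts = np.array([5, 20, 60])
alpha_prior = np.ones(3)  # symmetric Dirichlet(1, 1, 1) prior

# Sample from the conjugate Dirichlet posterior and estimate the
# posterior probability that the ordering constraint holds.
post = rng.dirichlet(alpha_prior + counts, size=100_000)
post_frac = np.mean((post[:, 0] < post[:, 1]) & (post[:, 1] < post[:, 2]))

# Under the symmetric prior, each of the 3! = 6 orderings is equally
# likely, so the prior mass of the constrained region is exactly 1/6.
prior_frac = 1 / 6

# Bayes factor: constrained vs encompassing (unconstrained) model
bf = post_frac / prior_frac
```

When the constraint is strongly supported, as with these counts, the estimator works well; its Monte Carlo error becomes problematic precisely when post_frac is tiny, which is the regime the abstract highlights.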
Publisher: Springer Science and Business Media LLC
Date: 04-08-2018
Publisher: SAGE Publications
Date: 13-08-2018
Abstract: Across the social sciences, researchers have overwhelmingly used the classical statistical paradigm to draw conclusions from data, often focusing heavily on a single number: p. Recent years, however, have witnessed a surge of interest in an alternative statistical paradigm: Bayesian inference, in which probabilities are attached to parameters and models. We feel it is informative to provide statistical conclusions that go beyond a single number, and—regardless of one’s statistical preference—it can be prudent to report the results from both the classical and the Bayesian paradigms. In order to promote a more inclusive and insightful approach to statistical inference, we show how the Summary Stats module in the open-source software program JASP (jasp-stats.org) can provide comprehensive Bayesian reanalyses from just a few commonly reported summary statistics, such as t and N. These Bayesian reanalyses allow researchers—and also editors, reviewers, readers, and reporters—to (a) quantify evidence on a continuous scale using Bayes factors, (b) assess the robustness of that evidence to changes in the prior distribution, and (c) gauge which posterior parameter ranges are more credible than others by examining the posterior distribution of the effect size. The procedure is illustrated using Festinger and Carlsmith’s (1959) seminal study on cognitive dissonance.
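A reanalysis from only t and N, as described above, is possible because the default (JZS) Bayes factor for a one-sample t-test depends on the data solely through those two numbers (Rouder et al., 2009): BF10 is a one-dimensional integral over the scaling parameter g of the Zellner–Siow prior. The sketch below is a minimal standalone implementation of that formula, not the JASP Summary Stats module; the t and N values at the bottom are arbitrary illustrations.

```python
import math
from scipy import integrate

def jzs_bf10(t, n):
    """Default (JZS) Bayes factor BF10 for a one-sample t-test,
    computed from only the t statistic and the sample size n."""
    v = n - 1  # degrees of freedom

    def integrand(g):
        # g ~ Inverse-Gamma(1/2, 1/2), i.e. a unit-scale Cauchy prior
        # on the standardized effect size.
        if g <= 0:
            return 0.0
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * (2 * math.pi) ** -0.5 * g ** -1.5
                * math.exp(-1 / (2 * g)))

    numerator, _ = integrate.quad(integrand, 0, math.inf)
    denominator = (1 + t**2 / v) ** (-(v + 1) / 2)
    return numerator / denominator

bf_strong = jzs_bf10(t=5.0, n=30)  # large t: evidence for H1
bf_null = jzs_bf10(t=0.1, n=30)    # t near 0: evidence for H0
```

Because the integral is one-dimensional and smooth, ordinary quadrature suffices; robustness checks across prior scales, as JASP offers, amount to rescaling the prior on g and recomputing.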
Publisher: Elsevier BV
Date: 12-2017
No related grants have been discovered for Alexander Ly.