ORCID Profile
0000-0001-5510-6943
Current Organisation
University of Newcastle Australia
Publisher: Foundation for Open Access Statistics
Date: 2019
Publisher: Springer Science and Business Media LLC
Date: 04-04-2022
DOI: 10.3758/S13423-022-02074-4
Abstract: Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice. Here, we investigate the effect of interpersonal variation in elicited prior distributions on the Bayes factor hypothesis test. We elicited prior distributions from six academic experts with a background in different fields of psychology and applied the elicited prior distributions as well as commonly used default priors in a re-analysis of 1710 studies in psychology. The degree to which the Bayes factors vary as a function of the different prior distributions is quantified by three measures of concordance of evidence: We assess whether the prior distributions change the Bayes factor direction, whether they cause a switch in the category of evidence strength, and how much influence they have on the value of the Bayes factor. Our results show that although the Bayes factor is sensitive to changes in the prior distribution, these changes do not necessarily affect the qualitative conclusions of a hypothesis test. We hope that these results help researchers gauge the influence of interpersonal variation in elicited prior distributions in future psychological studies. Additionally, our sensitivity analyses can be used as a template for Bayesian robustness analyses that involve prior elicitation from multiple experts.
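A minimal sketch of this kind of Bayes factor sensitivity analysis, assuming the BayesFactor R package and made-up summary statistics; it varies the scale of the default Cauchy prior on effect size, whereas the paper uses expert-elicited prior distributions:

```r
# Hypothetical prior-sensitivity check for a two-sample Bayesian t-test.
# The paper uses elicited expert priors; here we vary the scale of the
# default Cauchy prior instead, which illustrates the same logic.
library(BayesFactor)

t_obs <- 2.30; n1 <- 40; n2 <- 40          # made-up summary statistics
scales <- c(0.5, 1 / sqrt(2), 1, sqrt(2))  # candidate prior widths

bf10 <- sapply(scales, function(r)
  ttest.tstat(t = t_obs, n1 = n1, n2 = n2, rscale = r, simple = TRUE))

# Concordance checks: does the direction (BF10 > 1 or < 1) or the
# evidence category (e.g., 1-3 anecdotal, 3-10 moderate, >10 strong) flip?
data.frame(rscale = round(scales, 3), BF10 = round(bf10, 2))
```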
Publisher: Springer Science and Business Media LLC
Date: 09-10-2020
DOI: 10.3758/S13423-020-01798-5
Abstract: Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.
Publisher: Informa UK Limited
Date: 20-03-2019
Publisher: Elsevier BV
Date: 12-2023
Publisher: Springer Science and Business Media LLC
Date: 28-06-2018
DOI: 10.3758/S13423-017-1317-5
Abstract: In this guide, we present a reading list to serve as a concise introduction to Bayesian data analysis. The introduction is geared toward reviewers, editors, and interested researchers who are new to Bayesian statistics. We provide commentary for eight recommended sources, which together cover the theoretical and practical cornerstones of Bayesian statistics in psychology and related sciences. The resources are presented in an incremental order, starting with theoretical foundations and moving on to applied issues. In addition, we outline an additional 32 articles and books that can be consulted to gain background knowledge about various theoretical specifics and Bayesian approaches to frequently used models. Our goal is to offer researchers a starting point for understanding the core tenets of Bayesian analysis, while requiring a low level of time commitment. After consulting our guide, the reader should understand how and why Bayesian methods work, and feel able to evaluate their use in the behavioral and social sciences.
Publisher: American Psychological Association (APA)
Date: 04-2023
DOI: 10.1037/MET0000411
Abstract: Hypotheses concerning the distribution of multinomial proportions typically entail exact equality constraints that can be evaluated using standard tests. Whenever researchers formulate inequality constrained hypotheses, however, they must rely on sampling-based methods that are relatively inefficient and computationally expensive. To address this problem we developed a bridge sampling routine that allows an efficient evaluation of multinomial inequality constraints. An empirical application showcases that bridge sampling outperforms current Bayesian methods, especially when relatively little posterior mass falls in the restricted parameter space. The method is extended to mixtures between equality and inequality constrained hypotheses.
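The bridge sampling routine itself is not reproduced here (an implementation is available in the multibridge R package); the sketch below shows the standard sampling-based, encompassing-prior estimator that the paper improves upon, with a hypothetical order constraint and made-up counts:

```r
# Sketch of the classical sampling-based (encompassing-prior) approach
# that the bridge sampling routine improves upon. Hypothetical example:
# test the order constraint theta1 > theta2 > theta3 for multinomial
# proportions.
set.seed(1)
x <- c(25, 18, 7)   # made-up category counts
a <- c(1, 1, 1)     # Dirichlet(1, 1, 1) prior

rdirichlet <- function(n, alpha) {
  g <- matrix(rgamma(n * length(alpha), shape = alpha),
              nrow = n, byrow = TRUE)
  g / rowSums(g)
}

n_draws <- 1e5
prior_draws <- rdirichlet(n_draws, a)      # draws from the prior
post_draws  <- rdirichlet(n_draws, a + x)  # conjugate posterior draws

in_region <- function(th) th[, 1] > th[, 2] & th[, 2] > th[, 3]

# BF of the inequality-constrained vs. the unconstrained hypothesis:
# ratio of posterior to prior mass inside the restricted region. The
# estimate gets noisy when few draws land in the region, which is the
# inefficiency that motivates the bridge sampling routine.
bf_re <- mean(in_region(post_draws)) / mean(in_region(prior_draws))
```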
Publisher: Springer Science and Business Media LLC
Date: 16-02-2023
DOI: 10.1007/S42113-022-00160-3
Abstract: van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.
Publisher: Informa UK Limited
Date: 05-01-2018
Publisher: Springer Science and Business Media LLC
Date: 21-11-2020
DOI: 10.3758/S13428-019-01290-6
Abstract: Over the last decade, the Bayesian estimation of evidence-accumulation models has gained popularity, largely due to the advantages afforded by the Bayesian hierarchical framework. Despite recent advances in the Bayesian estimation of evidence-accumulation models, model comparison continues to rely on suboptimal procedures, such as posterior parameter inference and model selection criteria known to favor overly complex models. In this paper, we advocate model comparison for evidence-accumulation models based on the Bayes factor obtained via Warp-III bridge sampling. We demonstrate, using the linear ballistic accumulator (LBA), that Warp-III sampling provides a powerful and flexible approach that can be applied to both nested and non-nested model comparisons, even in complex and high-dimensional hierarchical instantiations of the LBA. We provide an easy-to-use software implementation of the Warp-III sampler and outline a series of recommendations aimed at facilitating the use of Warp-III sampling in practical applications.
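A minimal sketch of Warp-III bridge sampling via the bridgesampling R package, applied to a toy normal-mean model instead of a hierarchical LBA (whose model setup is too long to reproduce here); data and priors are made up:

```r
# Warp-III bridge sampling on a toy model. H1: mu ~ N(0, 1) prior,
# known unit variance; H0: mu = 0. Data are simulated for illustration.
library(bridgesampling)
set.seed(1)
y <- rnorm(50, mean = 0.3, sd = 1)
n <- length(y)

# Conjugate posterior for mu under H1, sampled directly for simplicity
post_var  <- 1 / (n + 1)
post_mean <- sum(y) * post_var
samples <- matrix(rnorm(2e4, post_mean, sqrt(post_var)),
                  ncol = 1, dimnames = list(NULL, "mu"))

log_posterior <- function(pars, data) {
  mu <- pars["mu"]
  sum(dnorm(data$y, mu, 1, log = TRUE)) + dnorm(mu, 0, 1, log = TRUE)
}

H1 <- bridge_sampler(samples = samples, log_posterior = log_posterior,
                     data = list(y = y),
                     lb = c(mu = -Inf), ub = c(mu = Inf),
                     method = "warp3", silent = TRUE)

logml_H0 <- sum(dnorm(y, 0, 1, log = TRUE))  # H0 has no free parameters
bf10 <- exp(H1$logml - logml_H0)
```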
Publisher: SAGE Publications
Date: 13-08-2018
Abstract: Across the social sciences, researchers have overwhelmingly used the classical statistical paradigm to draw conclusions from data, often focusing heavily on a single number: p. Recent years, however, have witnessed a surge of interest in an alternative statistical paradigm: Bayesian inference, in which probabilities are attached to parameters and models. We feel it is informative to provide statistical conclusions that go beyond a single number, and—regardless of one’s statistical preference—it can be prudent to report the results from both the classical and the Bayesian paradigms. In order to promote a more inclusive and insightful approach to statistical inference, we show how the Summary Stats module in the open-source software program JASP (jasp-stats.org) can provide comprehensive Bayesian reanalyses from just a few commonly reported summary statistics, such as t and N. These Bayesian reanalyses allow researchers—and also editors, reviewers, readers, and reporters—to (a) quantify evidence on a continuous scale using Bayes factors, (b) assess the robustness of that evidence to changes in the prior distribution, and (c) gauge which posterior parameter ranges are more credible than others by examining the posterior distribution of the effect size. The procedure is illustrated using Festinger and Carlsmith’s (1959) seminal study on cognitive dissonance.
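The Summary Stats module is point-and-click in JASP; the sketch below performs the same style of reanalysis in R with the BayesFactor package, assuming made-up reported values of t and N:

```r
# Hypothetical Bayesian reanalysis from reported summary statistics only.
library(BayesFactor)

t_obs <- 2.61; n1 <- 48; n2 <- 52   # values "reported" in a made-up paper

# (a) Quantify evidence on a continuous scale with a Bayes factor
bf10 <- ttest.tstat(t = t_obs, n1 = n1, n2 = n2, simple = TRUE)

# (b) Assess robustness of that evidence to changes in the prior scale
robust <- sapply(c(0.5, 1 / sqrt(2), 1), function(r)
  ttest.tstat(t = t_obs, n1 = n1, n2 = n2, rscale = r, simple = TRUE))

# (c) For posterior ranges of the effect size, JASP plots the posterior
# from the same two inputs; that step is omitted here.
bf10; robust
```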
Publisher: Springer Science and Business Media LLC
Date: 14-01-2019
Publisher: Frontiers Media SA
Date: 15-09-2015
Publisher: SAGE Publications
Date: 27-10-2016
Abstract: According to the facial feedback hypothesis, people’s affective responses can be influenced by their own facial expression (e.g., smiling, pouting), even when their expression did not result from their emotional experiences. For example, Strack, Martin, and Stepper (1988) instructed participants to rate the funniness of cartoons using a pen that they held in their mouth. In line with the facial feedback hypothesis, when participants held the pen with their teeth (inducing a “smile”), they rated the cartoons as funnier than when they held the pen with their lips (inducing a “pout”). This seminal study of the facial feedback hypothesis has not been replicated directly. This Registered Replication Report describes the results of 17 independent direct replications of Study 1 from Strack et al. (1988), all of which followed the same vetted protocol. A meta-analysis of these studies examined the difference in funniness ratings between the “smile” and “pout” conditions. The original Strack et al. (1988) study reported a rating difference of 0.82 units on a 10-point Likert scale. Our meta-analysis revealed a rating difference of 0.03 units with a 95% confidence interval ranging from −0.11 to 0.16.
Publisher: SAGE Publications
Date: 06-2020
Abstract: Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion and then learning about the parameters of this selected model. Crucially, however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model-selection process is ignored. An alternative is to learn the parameters for all candidate models and then combine the estimates according to the posterior probabilities of the associated models. This approach is known as Bayesian model averaging (BMA). BMA has several important advantages over all-or-none selection methods, but has been used only sparingly in the social sciences. In this conceptual introduction, we explain the principles of BMA, describe its advantages over all-or-none model selection, and showcase its utility in three examples: analysis of covariance, meta-analysis, and network analysis.
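A minimal sketch of the core BMA computation described here, weighting per-model estimates by posterior model probabilities; the marginal likelihoods and estimates are made up for illustration:

```r
# Combine per-model parameter estimates by posterior model probabilities.
log_ml   <- c(M1 = -152.3, M2 = -151.1, M3 = -154.8)  # hypothetical
prior_pm <- rep(1 / 3, 3)                             # equal model priors

# Posterior model probabilities (subtract max for numerical stability)
w <- exp(log_ml - max(log_ml)) * prior_pm
post_pm <- w / sum(w)

# Model-averaged estimate of a parameter shared by all three models
theta_hat <- c(M1 = 0.42, M2 = 0.35, M3 = 0.51)  # per-model estimates
theta_bma <- sum(post_pm * theta_hat)
```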
Publisher: Springer Science and Business Media LLC
Date: 06-07-2018
Publisher: SAGE Publications
Date: 07-2021
DOI: 10.1177/25152459211031256
Abstract: Meta-analysis is the predominant approach for quantitatively synthesizing a set of studies. If the studies themselves are of high quality, meta-analysis can provide valuable insights into the current scientific state of knowledge about a particular phenomenon. In psychological science, the most common approach is to conduct frequentist meta-analysis. In this primer, we discuss an alternative method, Bayesian model-averaged meta-analysis. This procedure combines the results of four Bayesian meta-analysis models: (a) fixed-effect null hypothesis, (b) fixed-effect alternative hypothesis, (c) random-effects null hypothesis, and (d) random-effects alternative hypothesis. These models are combined according to their plausibilities given the observed data to address the two key questions “Is the overall effect nonzero?” and “Is there between-study variability in effect size?” Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.
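A minimal sketch of how the four models' posterior probabilities answer the two key questions, assuming made-up marginal likelihoods (in practice these come from fitting the four models, e.g., in JASP or the metaBMA R package):

```r
# Combine the four meta-analytic models; marginal likelihoods are made up.
log_ml <- c(fixed_H0 = -45.1, fixed_H1 = -43.2,
            random_H0 = -44.6, random_H1 = -42.9)
prior  <- rep(1 / 4, 4)

w <- exp(log_ml - max(log_ml)) * prior
post <- w / sum(w)

# "Is the overall effect nonzero?"        -> models (b) and (d)
p_effect <- post["fixed_H1"] + post["random_H1"]
# "Is there between-study variability?"   -> models (c) and (d)
p_heterogeneity <- post["random_H0"] + post["random_H1"]

# Inclusion Bayes factor: posterior odds / prior odds (prior odds = 1 here)
bf_effect <- p_effect / (1 - p_effect)
```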
Publisher: Wiley
Date: 12-12-2022
DOI: 10.1002/SIM.9278
Abstract: Testing the equality of two proportions is a common procedure in science, especially in medicine and public health. In these domains, it is crucial to be able to quantify evidence for the absence of a treatment effect. Bayesian hypothesis testing by means of the Bayes factor provides one avenue to do so, requiring the specification of prior distributions for parameters. The most popular analysis approach views the comparison of proportions from a contingency table perspective, assigning prior distributions directly to the two proportions. Another, less popular approach views the problem from a logistic regression perspective, assigning prior distributions to logit‐transformed parameters. Reanalyzing 39 null results from the New England Journal of Medicine with both approaches, we find that they can lead to markedly different conclusions, especially when the observed proportions are at the extremes (i.e., very low or very high). We explain these stark differences and provide recommendations for researchers interested in testing the equality of two proportions and users of Bayes factors more generally. The test that assigns prior distributions to logit‐transformed parameters creates prior dependence between the two proportions and yields weaker evidence when the observations are at the extremes. When comparing two proportions, we argue that this test should become the new default.
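A minimal sketch of the contingency-table approach contrasted in this abstract, assuming the BayesFactor R package and made-up counts (the logit-based test the authors recommend is implemented in the abtest R package, not shown here):

```r
# Hypothetical counts for the contingency-table approach to comparing
# two proportions.
library(BayesFactor)

counts <- matrix(c(3, 47,
                   9, 41),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(group = c("treatment", "control"),
                                 outcome = c("event", "no event")))

# Group sizes fixed by design, so condition on the row margin
bf <- contingencyTableBF(counts, sampleType = "indepMulti",
                         fixedMargin = "rows")

extractBF(bf)$bf        # BF10: evidence for unequal proportions
1 / extractBF(bf)$bf    # BF01: evidence for the null (equal proportions)
```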
Publisher: Springer Science and Business Media LLC
Date: 09-04-2021
DOI: 10.3758/S13428-021-01552-2
Abstract: Linear regression analyses commonly involve two consecutive stages of statistical inquiry. In the first stage, a single ‘best’ model is defined by a specific selection of relevant predictors; in the second stage, the regression coefficients of the winning model are used for prediction and for inference concerning the importance of the predictors. However, such second-stage inference ignores the model uncertainty from the first stage, resulting in overconfident parameter estimates that generalize poorly. These drawbacks can be overcome by model averaging, a technique that retains all models for inference, weighting each model’s contribution by its posterior probability. Although conceptually straightforward, model averaging is rarely used in applied research, possibly due to the lack of easily accessible software. To bridge the gap between theory and practice, we provide a tutorial on linear regression using Bayesian model averaging in JASP, based on the BAS package in R. Firstly, we provide theoretical background on linear regression, Bayesian inference, and Bayesian model averaging. Secondly, we demonstrate the method on an example data set from the World Happiness Report. Lastly, we discuss limitations of model averaging and directions for dealing with violations of model assumptions.
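A minimal sketch of the BAS workflow, assuming a built-in R data set in place of the World Happiness Report data:

```r
# Bayesian model-averaged linear regression with the BAS package,
# on a built-in data set as a stand-in for the tutorial's data.
library(BAS)

fit <- bas.lm(mpg ~ ., data = mtcars,
              prior = "JZS",            # Jeffreys-Zellner-Siow parameter prior
              modelprior = uniform())   # all models equally likely a priori

summary(fit)   # top models and their posterior probabilities
coef(fit)      # model-averaged coefficients and inclusion probabilities
```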
Publisher: Springer Science and Business Media LLC
Date: 11-10-2016
Abstract: We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory’s research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.
Publisher: American Psychological Association (APA)
Date: 11-05-2023
DOI: 10.1037/MET0000582
Publisher: Elsevier BV
Date: 2020
DOI: 10.2139/SSRN.3654406
Publisher: SAGE Publications
Date: 14-09-2017
Publisher: Springer Science and Business Media LLC
Date: 04-02-2019
Publisher: Public Library of Science (PLoS)
Date: 19-02-2021
DOI: 10.1371/JOURNAL.PONE.0245048
Abstract: Gautret and colleagues reported the results of a non-randomised case series which examined the effects of hydroxychloroquine and azithromycin on viral load in the upper respiratory tract of Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) patients. The authors reported that hydroxychloroquine (HCQ) had significant virus reducing effects, and that dual treatment of both HCQ and azithromycin further enhanced virus reduction. In light of criticisms regarding how patients were excluded from analyses, we reanalysed the original data to interrogate the main claims of the paper. We applied Bayesian statistics to assess the robustness of the original paper’s claims by testing four variants of the data: 1) the original data; 2) data including patients who deteriorated; 3) data including patients who deteriorated, with exclusion of untested patients in the comparison group; and 4) data that includes patients who deteriorated, with the assumption that untested patients were negative. To ask if HCQ monotherapy was effective, we performed an A/B test for a model which assumes a positive effect, compared to a model of no effect. We found that the statistical evidence was highly sensitive to these data variants. Statistical evidence for the positive effect model ranged from strong for the original data (BF₊₀ ≈ 11), to moderate when including patients who deteriorated (BF₊₀ ≈ 4.35), to anecdotal when excluding untested patients (BF₊₀ ≈ 2), and to anecdotal negative evidence if untested patients were assumed positive (BF₊₀ ≈ 0.6). The fact that the patient inclusions and exclusions are not well justified nor adequately reported raises substantial uncertainty about the interpretation of the evidence obtained from the original paper.
Publisher: Elsevier BV
Date: 09-2016
Publisher: Springer Science and Business Media LLC
Date: 15-01-2019
Publisher: Springer Science and Business Media LLC
Date: 22-04-2020
Publisher: Springer Science and Business Media LLC
Date: 26-04-2023
DOI: 10.3758/S13428-023-02093-6
Abstract: Researchers conduct meta-analyses in order to synthesize information across different studies. Compared to standard meta-analytic methods, Bayesian model-averaged meta-analysis offers several practical advantages including the ability to quantify evidence in favor of the absence of an effect, the ability to monitor evidence as individual studies accumulate indefinitely, and the ability to draw inferences based on multiple models simultaneously. This tutorial introduces the concepts and logic underlying Bayesian model-averaged meta-analysis and illustrates its application using the open-source software JASP. As a running example, we perform a Bayesian meta-analysis on language development in children. We show how to conduct a Bayesian model-averaged meta-analysis and how to interpret the results.
Publisher: Cold Spring Harbor Laboratory
Date: 03-04-2020
DOI: 10.1101/2020.03.31.20048777
Abstract: Gautret and colleagues reported results of a non-randomised open-label case series which examined the effects of hydroxychloroquine and azithromycin on viral load in the upper respiratory tract of Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) patients. The authors report that hydroxychloroquine (HCQ) had significant virus reducing effects, and that dual treatment of both HCQ and azithromycin further enhanced virus reduction. These data have triggered speculation whether these drugs should be considered as candidates for the treatment of severe COVID-19. However, questions have been raised regarding the study’s data integrity, statistical analyses, and experimental design. We therefore reanalysed the original data to interrogate the main claims of the paper. Here we apply Bayesian statistics to assess the robustness of the original paper’s claims by testing four variants of the data: 1) the original data; 2) data including patients who deteriorated; 3) data including patients who deteriorated, with exclusion of untested patients in the comparison group; and 4) data that includes patients who deteriorated, with the assumption that untested patients were negative. To ask if HCQ monotherapy is effective, we performed an A/B test for a model which assumes a positive effect, compared to a model of no effect. We find that the statistical evidence is highly sensitive to these data variants. Statistical evidence for the positive effect model ranged from strong for the original data (BF₊₀ ≈ 11), to moderate when including patients who deteriorated (BF₊₀ ≈ 4.35), to anecdotal when excluding untested patients (BF₊₀ ≈ 2), and to anecdotal negative evidence if untested patients were assumed positive (BF₊₀ ≈ 0.6). To assess whether HCQ is more effective when combined with AZ, we performed the same tests, and found only anecdotal evidence for the positive effect model for the original data (BF₊₀ ≈ 2.8), and moderate evidence for all other variants of the data (BF₊₀ ≈ 5.6). Our analyses only explore the effects of different assumptions about excluded and untested patients. These assumptions are not adequately reported, nor are they justified in the original paper, and we find that varying them causes substantive changes to the evidential support for the main claims of the original paper. This statistical uncertainty is exacerbated by the fact that the treatments were not randomised, and subject to several confounding variables including the patients’ consent to treatment, different care centres, and clinical decision-making. Furthermore, while the viral load measurements were noisy, showing multiple reversals between test outcomes, there is greater certainty around other clinical outcomes such as the 4 patients who seriously deteriorated. The fact that all of these belonged to the HCQ group should be assigned greater weight when evaluating the potential clinical efficacy of HCQ. Randomised controlled trials are currently underway, and will be critical in resolving this uncertainty as to whether HCQ and AZ are effective as a treatment for COVID-19. There have been reports of people self-administering chloroquine phosphate (intended for treatment of disease in aquarium fish), which has led to at least one death and one serious illness. We state that under no circumstances should people self-administer hydroxychloroquine, chloroquine phosphate, azithromycin, or anything similar-sounding, or indeed any other drug, unless approved by a medical doctor.
The FDA has issued a specific warning: https://www.fda.gov/animal-veterinary/product-safety-information/fda-letter-stakeholders-do-not-use-chloroquine-phosphate-intended-fish-treatment-covid-19-humans
Publisher: Springer Science and Business Media LLC
Date: 07-02-2022
DOI: 10.1007/S10670-019-00209-Z
Abstract: A frequentist confidence interval can be constructed by inverting a hypothesis test, such that the interval contains only parameter values that would not have been rejected by the test. We show how a similar definition can be employed to construct a Bayesian support interval. Consistent with Carnap’s theory of corroboration, the support interval contains only parameter values that receive at least some minimum amount of support from the data. The support interval is not subject to Lindley’s paradox and provides an evidence-based perspective on inference that differs from the belief-based perspective that forms the basis of the standard Bayesian credible interval.
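A minimal sketch of a support interval for a binomial rate under a uniform prior, with made-up data; values are kept only if the data raise their plausibility by at least a factor k:

```r
# Support interval for a binomial rate theta: keep the values whose
# support ratio p(theta | data) / p(theta) is at least k. Uniform
# Beta(1, 1) prior; successes/failures are made up.
s <- 14; f <- 6   # hypothetical successes and failures
k <- 3            # minimum support factor

theta <- seq(0, 1, by = 1e-4)
support_ratio <- dbeta(theta, s + 1, f + 1) / dbeta(theta, 1, 1)

range(theta[support_ratio >= k])   # endpoints of the BF = 3 support interval
```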
Publisher: American Psychological Association (APA)
Date: 05-2020
DOI: 10.1037/BUL0000220
Abstract: To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = −0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
Publisher: Springer Science and Business Media LLC
Date: 26-08-2019
Publisher: Springer Science and Business Media LLC
Date: 16-05-2023
DOI: 10.1007/S42113-022-00129-2
Abstract: Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when models are built so as to align parameters of the model with potential causal mechanisms and how they manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical—in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (this issue) in the context of using Bayes Factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications (along with other problems identified here) can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two: The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model comparison metrics like Bayes Factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
Publisher: Springer Science and Business Media LLC
Date: 27-11-2019
Publisher: Informa UK Limited
Date: 02-01-2017
Publisher: Foundation for Open Access Statistics
Date: 2020
Publisher: SAGE Publications
Date: 17-07-2018
Abstract: In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we examined, empirically, the treatment and evidential impact of nonsignificant results. Our specific goals were twofold: to explore how psychologists interpret and communicate nonsignificant results and to assess how much these results constitute evidence in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF₀₁ > 10) in favor of the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical tool kit in order to correctly interpret nonsignificant results and to be able to evaluate the evidence for and against the null hypothesis.
Publisher: Cold Spring Harbor Laboratory
Date: 12-07-2023
DOI: 10.1101/2023.07.11.548624
Abstract: The ability to stop simple ongoing actions has been extensively studied using the stop signal task, but less is known about inhibition in more complex scenarios. Here we used a task requiring bimanual responses to go stimuli, but selective inhibition of only one of those responses following a stop signal. We assessed how proactive cues affect the nature of both the responding and stopping processes, and the well-documented “stopping delay” in the continuing action following successful stopping. In this task, estimates of the speed of inhibition based on a simple-stopping model are inappropriate, and have produced inconsistent findings about the effects of proactive control on motor inhibition. We instead used a multi-modal approach, based on improved methods of detecting and interpreting partial electromyographical (EMG) responses and the recently proposed SIS (simultaneously inhibit and start) model of selective stopping behaviour. Our results provide clear and converging evidence that proactive cues reduce the stopping delay effect by slowing bimanual responses and speeding unimanual responses, with a negligible effect on the speed of the stopping process.
Publisher: Wiley
Date: 27-10-2021
DOI: 10.1002/SIM.9170
Abstract: We outline a Bayesian model‐averaged (BMA) meta‐analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness (δ) and across‐study heterogeneity (τ). We construct four competing models by orthogonally combining two present‐absent assumptions, one for the treatment effect and one for across‐study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival prior distributions for δ and τ. The relative predictive performance of the competing models and rival prior distributions was assessed using the remaining 50% of the Cochrane Database. On average, H₁ʳ—the model that assumes the presence of a treatment effect as well as across‐study heterogeneity—outpredicted the other models, but not by a large margin. Within H₁ʳ, predictive adequacy was relatively constant across the rival prior distributions. We propose specific empirical prior distributions, both for the field in general and for each of 46 specific medical subdisciplines. An example from oral health demonstrates how the proposed prior distributions can be used to conduct a BMA meta‐analysis in the open‐source software R and JASP. The preregistered analysis plan is available at osf.io/zs3df/.
Publisher: Frontiers Media SA
Date: 24-04-2015
Publisher: Springer Science and Business Media LLC
Date: 08-07-2020
DOI: 10.1007/S42113-020-00082-Y
Abstract: Multidimensional scaling (MDS) models represent stimuli as points in a space consisting of a number of psychological dimensions, such that the distance between pairs of points corresponds to the dissimilarity between the stimuli. Two fundamental challenges in inferring MDS representations from data involve inferring the appropriate number of dimensions and the metric structure of the space used to measure distance. We approach both challenges as Bayesian model-selection problems. Treating MDS as a generative model, we define priors needed for model identifiability under metrics corresponding to psychologically separable and psychologically integral stimulus domains. We then apply a differential evolution Markov-chain Monte Carlo (DE-MCMC) method for parameter inference, and a Warp-III method for model selection. We apply these methods to five previous data sets, which collectively test the ability of the methods to infer an appropriate dimensionality and to infer whether stimuli are psychologically separable or integral. We demonstrate that our methods produce sensible results, but note a number of remaining technical challenges that need to be solved before the method can easily and generally be applied. We also note the theoretical promise of the generative modeling perspective, discussing new and extended models of MDS representation that could be developed.
Publisher: SAGE Publications
Date: 07-2023
DOI: 10.1177/25152459231182318
Abstract: Team-science projects have become the “gold standard” for assessing the replicability and variability of key findings in psychological science. However, we believe the typical meta-analytic approach in these projects fails to match the wealth of collected data. Instead, we advocate the use of Bayesian hierarchical modeling for team-science projects, potentially extended in a multiverse analysis. We illustrate this full-scale analysis by applying it to the recently published Many Labs 4 project. This project aimed to replicate the mortality-salience effect—that being reminded of one’s own death strengthens the own cultural identity. In a multiverse analysis, we assess the robustness of the results with varying data-inclusion criteria and prior settings. Bayesian model comparison results largely converge to a common conclusion: The data provide evidence against a mortality-salience effect across the majority of our analyses. We issue general recommendations to facilitate full-scale analyses in team-science projects.
Publisher: CAIRN
Date: 28-02-2020
Publisher: Springer Science and Business Media LLC
Date: 27-09-2019
Publisher: SAGE Publications
Date: 14-09-2021
Abstract: We conducted a preregistered multilaboratory project (k = 36, N = 3,531) to assess the size and robustness of ego-depletion effects using a novel replication method, termed the paradigmatic replication approach. Each laboratory implemented one of two procedures that was intended to manipulate self-control and tested performance on a subsequent measure of self-control. Confirmatory tests found a nonsignificant result (d = 0.06). Confirmatory Bayesian meta-analyses using an informed-prior hypothesis (δ = 0.30, SD = 0.15) found that the data were 4 times more likely under the null than the alternative hypothesis. Hence, preregistered analyses did not find evidence for a depletion effect. Exploratory analyses on the full sample (i.e., ignoring exclusion criteria) found a statistically significant effect (d = 0.08); Bayesian analyses showed that the data were about equally likely under the null and informed-prior hypotheses. Exploratory moderator tests suggested that the depletion effect was larger for participants who reported more fatigue but was not moderated by trait self-control, willpower beliefs, or action orientation.
Publisher: Informa UK Limited
Date: 28-05-2020
Publisher: Springer Science and Business Media LLC
Date: 09-08-2019
Publisher: Springer Science and Business Media LLC
Date: 04-08-2018
Publisher: Elsevier BV
Date: 12-2017
No related grants have been discovered for Quentin F. Gronau.