ORCID Profile
0000-0002-7419-9855
Current Organisation
CEA Saclay
Publisher: SAGE Publications
Date: 07-2023
Publisher: American Psychological Association (APA)
Date: 05-2020
DOI: 10.1037/BUL0000220
Abstract: To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Publisher: Frontiers Media SA
Date: 15-05-2018
Publisher: Elsevier BV
Date: 2020
DOI: 10.2139/SSRN.3654406
Publisher: PeerJ
Date: 11-04-2018
DOI: 10.7287/PEERJ.PREPRINTS.3411V2
Abstract: We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of .05, .01, .005, or anything else, is not acceptable.
Publisher: PeerJ
Date: 26-07-2018
DOI: 10.7287/PEERJ.PREPRINTS.3411V3
Abstract: We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of .05, .01, .005, or anything else, is not acceptable.
Publisher: PeerJ
Date: 14-11-2017
DOI: 10.7287/PEERJ.PREPRINTS.3411V1
Abstract: We argue that depending on p-values to reject null hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable criterion levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and determining sample sizes much more directly than significance testing does, but none of the statistical tools should replace significance testing as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, or implications for applications. To boil all this down to a binary decision based on a p-value threshold of .05, .01, .005, or anything else, is not acceptable.
No related grants have been discovered for Ladislas Nalborczyk.