ORCID Profile
0000-0002-5855-3885
Current Organisation
KU Leuven
Publisher: Springer Science and Business Media LLC
Date: 08-2004
DOI: 10.3758/BF03195597
Abstract: A data set is described that includes eight variables gathered for 13 common superordinate natural language categories and a representative set of 338 exemplars in Dutch. The category set contains 6 animal categories (reptiles, amphibians, mammals, birds, fish, and insects), 3 artifact categories (musical instruments, tools, and vehicles), 2 borderline artifact-natural-kind categories (vegetables and fruit), and 2 activity categories (sports and professions). In an exemplar and a feature generation task for the category nouns, frequency data were collected. For each of the 13 categories, a representative sample of 5-30 exemplars was selected. For all exemplars, feature generation frequencies, typicality ratings, pairwise similarity ratings, age-of-acquisition ratings, word frequencies, and word associations were gathered. Reliability estimates and some additional measures are presented. The full set of these norms is available in Excel format at the Psychonomic Society Web archive, rchive/.
Publisher: University of California Press
Date: 2018
DOI: 10.1525/COLLABRA.158
Abstract: The credibility of scientific claims depends upon the transparency of the research products upon which they are based (e.g., study protocols, data, materials, and analysis scripts). As psychology navigates a period of unprecedented introspection, user-friendly tools and services that support open science have flourished. However, the plethora of decisions and choices involved can be bewildering. Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing their research products for sharing, structuring folders, etc.). Being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal: each incremental step towards complete transparency adds positive value. Transparent research practices not only improve the efficiency of individual researchers, they enhance the credibility of the knowledge generated by the scientific community.
Publisher: Center for Open Science
Date: 14-12-2018
Abstract: Despite its many advocates, Bayesian inference is currently employed by only a minority of social and behavioural scientists. One possible barrier is a lack of consensus on how best to conduct and report such analyses. Employing Bayesian methods involves making choices about prior distributions, likelihood functions and robustness checks, as well as about how to present, visualize and interpret the results (for a glossary of the main Bayesian statistical concepts, see Box 1). Some researchers may find this wide range of choices too daunting to use Bayesian inference in their own study. This paper highlights the areas of agreement and the arguments behind disagreements, established on the back of a self-questionnaire provided and explained in detail on OSF (osf.io/6eqx5/).
Publisher: Springer Science and Business Media LLC
Date: 04-01-2021
Publisher: Center for Open Science
Date: 18-05-2018
Abstract: Over the past 10 years, Oosterhof and Todorov’s valence–dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov’s methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov’s original analysis strategy, the valence–dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence–dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution.
Publisher: Elsevier BV
Date: 04-2014
Publisher: Springer Science and Business Media LLC
Date: 11-2008
Publisher: Center for Open Science
Date: 20-10-2020
Abstract: The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
Publisher: Springer Science and Business Media LLC
Date: 07-08-2012
DOI: 10.3758/S13423-012-0300-4
Abstract: Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important--but under-used and incorrectly maligned--method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0028551
Abstract: Wills and Pothos (2012) reviewed approaches to evaluating formal models of categorization, raising a series of worthwhile issues, challenges, and goals. Unfortunately, in discussing these issues and proposing solutions, Wills and Pothos (2012) did not consider Bayesian methods in any detail. This means not only that their review excludes a major body of current work in the field, but also that it does not consider the body of work that provides the best current answers to the issues raised. In this comment, we argue that Bayesian methods can be--and, in most cases, already have been--applied to all the major model evaluation issues raised by Wills and Pothos (2012). In particular, Bayesian methods can address the challenges of avoiding overfitting, considering qualitative properties of data, reducing dependence on free parameters, and testing empirical breadth.
Publisher: Springer Science and Business Media LLC
Date: 07-01-2014
Publisher: Center for Open Science
Date: 03-04-2018
Abstract: Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
Publisher: Center for Open Science
Date: 07-12-2021
Abstract: Semantic priming has been studied for nearly 50 years across various experimental manipulations and theoretical frameworks. These studies provide insight into the cognitive underpinnings of semantic representations in both healthy and clinical populations; however, they have suffered from several issues, including generally low sample sizes and a lack of diversity in linguistic implementations. Here, we will test the size and the variability of the semantic priming effect across ten languages by creating a large database of semantic priming values, based on an adaptive sampling procedure. Differences in response latencies between related word-pair conditions and unrelated word-pair conditions (i.e., difference score confidence interval is greater than zero) will allow quantifying evidence for semantic priming, whereas improvements in model fit with the addition of a random intercept for language will provide support for variability in semantic priming across languages.
Publisher: Wiley
Date: 12-2008
DOI: 10.1080/03640210802073697
Abstract: This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can provide a unifying explanation of the representational possibilities using 2 parameters. One parameter controls the emphasis on abstraction in category representations, and the other controls the emphasis on similarity. Using 30 previously published data sets, this work shows how inferences about these parameters, and about the category representations they generate, can be used to evaluate data in terms of the ongoing exemplar versus prototype and similarity versus rules debates in the literature. Using this concrete example, this article emphasizes the advantages of hierarchical Bayesian models in converting model selection problems to parameter estimation problems, and providing one way of specifying theoretically based priors for competing models.
Publisher: The Royal Society
Date: 2016
DOI: 10.1098/RSOS.150547
Abstract: Openness is one of the central values of science. Open scientific practices such as sharing data, materials and analysis scripts alongside published articles have many benefits, including easier replication and extension studies, increased availability of data for theory-building and meta-analysis, and increased possibility of review and collaboration even after a paper has been published. Although modern information technology makes sharing easier than ever before, uptake of open practices had been slow. We suggest this might be in part due to a social dilemma arising from misaligned incentives and propose a specific, concrete mechanism—reviewers withholding comprehensive review—to achieve the goal of creating the expectation of open practices as a matter of scientific principle.
Publisher: SAGE Publications
Date: 10-2018
Abstract: Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
Publisher: Cambridge University Press (CUP)
Date: 14-05-2013
DOI: 10.1017/S0140525X12003020
Abstract: Faced with probabilistic relationships between causes and effects, quantum theory assumes that deterministic causes do not exist, and that only incomplete probabilistic expressions of knowledge are possible. As in its application to physics, this fundamental epistemological stance severely limits the ability of quantum theory to provide insight and understanding in human cognition.
Publisher: American Psychological Association (APA)
Date: 06-2023
DOI: 10.1037/MET0000454
Abstract: The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
Publisher: Springer Science and Business Media LLC
Date: 27-01-2020