ORCID Profile
0000-0001-7648-6578
Current Organisation
INRA Centre Clermont-Ferrand-Theix-Lyon
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Learning, Memory, Cognition and Language | Cognitive Science | Knowledge Representation and Machine Learning | Applied Statistics | Statistical Theory | Psychological Methodology, Design and Analysis | Marketing and Market Research | Pattern Recognition and Data Mining | Decision Making
Behavioural and cognitive sciences | Expanding Knowledge in Psychology and Cognitive Sciences | Telecommunications | Expanding Knowledge in the Information and Computing Sciences
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: In this paper we consider the “size principle” for featural similarity, which states that rare features should be weighted more heavily than common features in people’s evaluations of the similarity between two entities. Specifically, it predicts that if a feature is possessed by n objects, the expected weight scales according to a 1/n law. One justification of the size principle emerges from a Bayesian analysis of simple induction problems (Tenenbaum and Griffiths, 2001a, Tenenbaum and Griffiths, 2001b), and is closely related to work by Shepard (1987) proposing universal laws for inductive generalization. In this article, we (1) show that the size principle can be more generally derived as an expression of a form of representational optimality, and (2) present analyses suggesting that across 11 different data sets in the domains of animals and artifacts, human judgments are in agreement with this law. A number of implications are discussed.
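The 1/n weighting described in this abstract is straightforward to sketch in code. The toy feature matrix and function names below are illustrative, not taken from the paper:

```python
# Sketch of the "size principle": a feature possessed by n objects gets
# weight proportional to 1/n, so rare features dominate similarity.
# The feature names and possession counts are invented for illustration.

def feature_weight(n_possessors):
    """Weight of a feature possessed by n objects under a 1/n law."""
    return 1.0 / n_possessors

def similarity(a_features, b_features, possession_counts):
    """Weighted count of the features shared by two objects."""
    shared = a_features & b_features
    return sum(feature_weight(possession_counts[f]) for f in shared)

# Toy domain: 'has_fur' is common (10 possessors), 'lays_eggs_in_water' rare (2)
counts = {"has_fur": 10, "lays_eggs_in_water": 2}
platypus = {"has_fur", "lays_eggs_in_water"}
frog = {"lays_eggs_in_water"}
dog = {"has_fur"}

sim_platypus_frog = similarity(platypus, frog, counts)  # rare feature: 0.5
sim_platypus_dog = similarity(platypus, dog, counts)    # common feature: 0.1
```

On this toy example, sharing one rare feature contributes five times the similarity of sharing one common feature, which is the qualitative signature of the 1/n law.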
Publisher: SAGE Publications
Date: 02-03-2022
DOI: 10.1177/17456916211036654
Abstract: Psychological science is at an inflection point: The COVID-19 pandemic has exacerbated inequalities that stem from our historically closed and exclusive culture. Meanwhile, reform efforts to change the future of our science are too narrow in focus to fully succeed. In this article, we call on psychological scientists—focusing specifically on those who use quantitative methods in the United States as one context for such conversations—to begin reimagining our discipline as fundamentally open and inclusive. First, we discuss whom our discipline was designed to serve and how this history produced the inequitable reward and support systems we see today. Second, we highlight how current institutional responses to address worsening inequalities are inadequate, as well as how our disciplinary perspective may both help and hinder our ability to craft effective solutions. Third, we take a hard look in the mirror at the disconnect between what we ostensibly value as a field and what we actually practice. Fourth and finally, we lead readers through a roadmap for reimagining psychological science in whatever roles and spaces they occupy, from an informal discussion group in a department to a formal strategic planning retreat at a scientific society.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: We consider the situation in which a learner must induce the rule that explains an observed set of data but the hypothesis space of possible rules is not explicitly enumerated or identified. The first part of the article demonstrates that as long as hypotheses are sparse (i.e., index less than half of the possible entities in the domain), then a positive test strategy is near optimal. The second part of this article then demonstrates that a preference for sparse hypotheses (a sparsity bias) emerges as a natural consequence of the family resemblance principle; that is, it arises from the requirement that good rules index entities that are more similar to one another than they are to entities that do not satisfy the rule.
Publisher: Springer Science and Business Media LLC
Date: 27-05-2023
DOI: 10.1007/S13178-022-00737-4
Abstract: Trans children and their parents face challenges in both their private and public lives. In terms of the latter, public attitudes toward trans children and their parents can significantly impact experiences of inclusion or exclusion, including in terms of rights. Yet, to date, while a substantive body of research has focused on attitudes toward trans people in general, lacking is a focus on trans children and their parents. The study reported in this paper involved data collected in 2021 with a convenience sample of people living in Australia, who were asked to respond to a series of vignettes featuring accounts of parents of children of different gender modalities and genders, and to rate the parents of the children in the vignettes. Participants also completed measures about traditional views of motherhood and fatherhood, a social dominance measure, a measure of values, and a measure of attitudes towards trans rights. The findings suggest mothers were rated more negatively than fathers, those with more traditional views about mothers and fathers rated all vignettes more negatively, and those with more positive attitudes toward trans rights rated all vignettes more positively. There were no differences in ratings of parents based on the gender modality of the child; however, parents of non-binary children were rated most negatively. Together, the findings suggest broad support for trans children and their parents among the sample. The findings suggest that any restrictions to the rights or inclusion of trans children and their parents would likely not align with the views of people living in Australia.
Publisher: Center for Open Science
Date: 04-09-2019
Abstract: Humans have a long childhood in comparison to all other species. Across disciplines, researchers agree that humans’ prolonged immaturity is integral to our unique intelligence. The studies presented here support the hypothesis that human beings’ extended childhood pays off in the form of an ability to learn more about changing environments. Across two studies (n = 213), children and adults played a game where they chose among four different cartoon monsters yielding different numbers of star rewards. Adults focused on maximizing reward, while children chose to explore longer, even at the cost of earning fewer stars. As a result, adults won significantly more stars than children did. However, in the ‘dynamic’ version of the task, the rewards given out by the monsters changed halfway through: the monster that had been giving out the fewest stars began giving out the most. Because children continued to explore whereas adults ignored the low-reward monster, children were much more likely than adults to detect the change. This illustrates that while exploration may be costly in the short term, it leads to a more flexible understanding of the world in the long term, particularly when that world is changing.
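The dynamic condition described above can be sketched as a small bandit simulation. The payoffs, trial counts, and agent parameters below are illustrative only, not those used in the study:

```python
import random

# Toy sketch of the dynamic explore/exploit task: a deterministic 4-armed
# bandit ("monsters") whose worst arm becomes the best halfway through.
PAYOFFS_EARLY = [5, 3, 2, 1]   # stars per choice for arms 0-3
PAYOFFS_LATE  = [5, 3, 2, 9]   # arm 3 switches from worst to best

def run_agent(epsilon, n_trials=400, seed=0):
    """Epsilon-greedy agent; epsilon=0 is a pure 'adult-style' exploiter."""
    rng = random.Random(seed)
    est, seen, stars = [0.0] * 4, [0] * 4, 0
    for t in range(n_trials):
        payoffs = PAYOFFS_EARLY if t < n_trials // 2 else PAYOFFS_LATE
        if 0 in seen:                          # try every arm once first
            arm = seen.index(0)
        elif rng.random() < epsilon:           # explore a random arm
            arm = rng.randrange(4)
        else:                                  # exploit the best estimate
            arm = max(range(4), key=lambda a: est[a])
        reward = payoffs[arm]
        seen[arm] += 1
        est[arm] += (reward - est[arm]) / seen[arm]   # running mean
        stars += reward
    return stars, est

adult_stars, adult_est = run_agent(epsilon=0.0)   # never explores
child_stars, child_est = run_agent(epsilon=0.3)   # keeps exploring
```

The pure exploiter's estimate for arm 3 stays frozen at its pre-change value, so it never detects the switch; the explorer keeps sampling arm 3 and its estimate rises after the change, mirroring the children's change-detection advantage.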
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items. When considering concepts that are very weakly related, little is known. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas, though simpler, models based on direct neighbors between word pairs derived using the same network cannot.
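A spreading-activation account of weak similarity can be sketched in a few lines. The words, edge weights, and parameters below are invented for illustration and are not the network or mechanism details from the paper:

```python
import numpy as np

# Toy sketch of spreading activation over a hypothetical word association
# network. Rows of W give outgoing association strengths.
words = ["lemon", "yellow", "sun", "beach"]
idx = {w: i for i, w in enumerate(words)}

W = np.array([
    [0.0, 0.8, 0.0, 0.0],   # lemon -> yellow
    [0.2, 0.0, 0.6, 0.0],   # yellow -> lemon, sun
    [0.0, 0.4, 0.0, 0.5],   # sun -> yellow, beach
    [0.0, 0.0, 0.7, 0.0],   # beach -> sun
])

def activation_profile(word, steps=5, decay=0.5):
    """Spread activation from `word` for a few steps with geometric decay."""
    a = np.zeros(len(words))
    a[idx[word]] = 1.0
    total = a.copy()
    for _ in range(steps):
        a = decay * (a @ W)
        total += a
    return total / total.sum()

def weak_similarity(w1, w2):
    """Cosine overlap of activation profiles: nonzero even for word pairs
    with no direct association link."""
    p, q = activation_profile(w1), activation_profile(w2)
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))
```

In this toy network 'lemon' and 'beach' share no direct edge, yet activation spreading through 'yellow' and 'sun' assigns them a small nonzero similarity, which is the qualitative behavior a direct-neighbor model cannot produce.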
Publisher: Center for Open Science
Date: 10-05-2021
Abstract: Both early social psychologists and the modern, interdisciplinary scientific community have advocated for diverse team science. We echo this call and describe three common pitfalls of solo science illustrated by the target article. We discuss how a collaborative and inclusive approach to science can both help researchers avoid these pitfalls and pave the way for more rigorous and relevant research.
Publisher: Center for Open Science
Date: 21-01-2020
Abstract: The extent to which we generalize a novel property from a sample of familiar instances to novel instances depends on the sample composition. Previous property induction experiments have only used samples consisting of novel types (unique entities). Because real-world evidence samples often contain redundant tokens (repetitions of the same entity), we studied the effects on property induction of adding types and tokens to an observed sample. In Experiments 1-3, we presented participants with a sample of birds or flowers known to have a novel property and probed whether this property generalized to novel items varying in similarity to the initial sample. Increasing the number of novel types (e.g., new birds with the target property) in a sample produced tightening, promoting property generalization to highly similar stimuli but decreasing generalization to less similar stimuli. On the other hand, increasing the number of tokens (e.g., repeated presentations of the same bird with the target property) had little effect on generalization. Experiment 4 showed that repeated tokens are encoded and can benefit recognition, but are subsequently given little weight when inferring property generalization. We modified an existing Bayesian model of induction (Navarro, Dry & Lee, 2012) to account for both the information added by new types and the discounting of information conveyed by tokens.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: In this article, we describe the most extensive set of word associations collected to date. The database contains over 12,000 cue words for which more than 70,000 participants generated three responses in a multiple-response free association task. The goal of this study was (1) to create a semantic network that covers a large part of the human lexicon, (2) to investigate the implications of a multiple-response procedure by deriving a weighted directed network, and (3) to show how measures of centrality and relatedness derived from this network predict both lexical access in a lexical decision task and semantic relatedness in similarity judgment tasks. First, our results show that the multiple-response procedure results in a more heterogeneous set of responses, which lead to better predictions of lexical access and semantic relatedness than do single-response procedures. Second, the directed nature of the network leads to a decomposition of centrality that primarily depends on the number of incoming links or in-degree of each node, rather than its set size or number of outgoing links. Both studies indicate that adequate representation formats and sufficiently rich data derived from word associations represent a valuable type of information in both lexical and semantic processing.
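The distinction between in-degree and set size in a directed association network can be sketched directly. The cue-response pairs below are invented and are not drawn from the actual database:

```python
from collections import defaultdict

# Illustrative sketch: build a directed network from cue -> response pairs
# and compare in-degree (incoming links) with set size (distinct outgoing
# links). All data here are made up for illustration.
responses = {                      # cue: list of free-association responses
    "dog": ["cat", "bone", "bark"],
    "cat": ["dog", "milk"],
    "milk": ["cat", "white"],
    "bone": ["dog"],
}

in_degree = defaultdict(int)
for cue, resps in responses.items():
    for r in resps:
        in_degree[r] += 1          # each cue -> response pair is one edge in

set_size = {cue: len(set(r)) for cue, r in responses.items()}
# 'cat' receives links from 'dog' and 'milk'; 'dog' from 'cat' and 'bone'
```

In a directed network these two quantities can diverge sharply: a word can be produced as a response far more often than it produces responses, which is why centrality decomposes primarily along in-degree.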
Publisher: American Association for the Advancement of Science (AAAS)
Date: 12-04-2019
Abstract: Among vertebrates, zebrafish and salamanders can regenerate their hearts, whereas adult mice and humans cannot. Hirose et al. analyzed diploid cardiomyocyte frequency as a proxy for cardiac regenerative potential across 41 vertebrate species (see the Perspective by Marchianò and Murry). They observed an inverse correlation of these cells with thyroid hormone concentrations during the ectotherm-to-endotherm transition. Mice with defects in thyroid hormone signaling retained significant heart regenerative capacity, whereas zebrafish exposed to excessive thyroid hormones exhibited impaired cardiac repair. Loss of heart regenerative ability in mammals may represent a trade-off for increases in metabolism necessary for the development of endothermy. Science, this issue p. 184; see also p. 123
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key ‘‘sampling’’ assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one‐dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
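The strong/weak mixture idea can be sketched with Bayesian generalization over interval hypotheses on a small discrete domain. The domain size, data, and exact mixture form below are illustrative assumptions, not the paper's model:

```python
import itertools

# Sketch: generalization over interval hypotheses [lo, hi] on a 20-point
# domain, with a mixture of strong sampling (likelihood 1/|h| per datum)
# and weak sampling (uniform over the whole domain). Uniform prior.
DOMAIN = range(20)

def generalization(data, probe, theta):
    """P(probe is in the category | data), with mixture weight `theta`
    on strong sampling and 1 - theta on weak sampling."""
    num = den = 0.0
    for lo, hi in itertools.combinations_with_replacement(DOMAIN, 2):
        if not all(lo <= x <= hi for x in data):
            continue                      # hypothesis inconsistent with data
        size = hi - lo + 1
        lik = (theta / size + (1 - theta) / len(DOMAIN)) ** len(data)
        den += lik
        if lo <= probe <= hi:
            num += lik
    return num / den
```

With observations at 8-10, a distant probe such as 15 receives less generalization under pure strong sampling (theta = 1) than under pure weak sampling (theta = 0), and intermediate theta values fall between the two, which is the tightening behavior the mixture account is designed to capture.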
Publisher: Springer Science and Business Media LLC
Date: 24-06-2016
DOI: 10.3758/S13423-015-0857-9
Abstract: The study of semi-supervised category learning has generally focused on how additional unlabeled information with given labeled information might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.
Publisher: Center for Open Science
Date: 11-06-2018
Abstract: The curse of dimensionality, which has been widely studied in statistics and machine learning, occurs when additional features cause the size of the feature space to grow so quickly that learning classification rules becomes increasingly difficult. How do people overcome the curse of dimensionality when acquiring real-world categories that have many different features? Here we investigate the possibility that the structure of categories can help. We show that when categories follow a family resemblance structure, people are unaffected by the presence of additional features in learning. However, when categories are based on a single feature, they fall prey to the curse, and having additional irrelevant features hurts performance. We compare and contrast these results to three different computational models to show that a model with limited computational capacity best captures human performance across almost all of the conditions in both experiments.
Publisher: Center for Open Science
Date: 17-10-2022
Abstract: van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.
Publisher: American Psychological Association (APA)
Date: 09-2016
DOI: 10.1037/XGE0000192
Abstract: Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items. When considering concepts that are very weakly related, little is known. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas, though simpler, models based on direct neighbors between word pairs derived using the same network cannot.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: A core assumption of many theories of development is that children can learn indirectly from other people. However, indirect experience (or testimony) is not constrained to provide veridical information. As a result, if children are to capitalize on this source of knowledge, they must be able to infer who is trustworthy and who is not. How might a learner make such inferences while at the same time learning about the world? What biases, if any, might children bring to this problem? We address these questions with a computational model of epistemic trust in which learners reason about the helpfulness and knowledgeability of an informant. We show that the model captures the competencies shown by young children in four areas: (1) using informants’ accuracy to infer how much to trust them; (2) using informants’ recent accuracy to overcome effects of familiarity; (3) inferring trust based on consensus among informants; and (4) using information about mal‐intent to decide not to trust. The model also explains developmental changes in performance between 3 and 4 years of age as a result of changing default assumptions about the helpfulness of other people.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: Understanding and measuring sentence acceptability is of fundamental importance for linguists, but although many measures for doing so have been developed, relatively little is known about some of their psychometric properties. In this paper we evaluate within- and between-participant test-retest reliability on a wide range of measures of sentence acceptability. Doing so allows us to estimate how much of the variability within each measure is due to factors including participant-level individual differences, sample size, response styles, and item effects. The measures examined include Likert scales, two versions of forced-choice judgments, magnitude estimation, and a novel measure based on Thurstonian approaches in psychophysics. We reproduce previous findings of high between-participant reliability within and across measures, and extend these results to a generally high reliability within individual items and individual people. Our results indicate that Likert scales and the Thurstonian approach produce the most stable and reliable acceptability measures and do so with smaller sample sizes than the other measures. Moreover, their agreement with each other suggests that the limitation of a discrete Likert scale does not impose a significant degree of structure on the resulting acceptability judgments.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: This paper develops a new representational model of similarity data that combines continuous dimensions with discrete features. An algorithm capable of learning these representations is described, and a Bayesian model selection approach for choosing the appropriate number of dimensions and features is developed. The approach is demonstrated on a classic data set that considers the similarities between the numbers 0 through 9.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: As Bayesian methods become more popular among behavioral scientists, they will inevitably be applied in situations that violate the assumptions underpinning typical models used to guide statistical inference. With this in mind, it is important to know something about how robust Bayesian methods are to the violation of those assumptions. In this paper, we focus on the problem of contaminated data (such as data with outliers or conflicts present), with specific application to the problem of estimating a credible interval for the population mean. We evaluate five Bayesian methods for constructing a credible interval, using toy examples to illustrate the qualitative behavior of different approaches in the presence of contaminants, and an extensive simulation study to quantify the robustness of each method. We find that the “default” normal model used in most Bayesian data analyses is not robust, and that approaches based on the Bayesian bootstrap are only robust in limited circumstances. A simple parametric model based on Tukey’s “contaminated normal model” and a model based on the t-distribution were markedly more robust. However, the contaminated normal model had the added benefit of estimating which data points were discounted as outliers and which were not.
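The normal-versus-t contrast can be sketched with a grid approximation. The sketch below assumes a fixed scale of 1, a flat prior on the mean, and an invented data set with one gross outlier; it is not the paper's exact set of five methods:

```python
import numpy as np

# Sketch: posterior over a population mean under a normal vs a heavy-tailed
# Student-t likelihood (df = 3), scale fixed at 1, flat prior, grid method.
data = np.array([-0.5, 0.1, 0.3, -0.2, 0.4, 20.0])  # 20.0 is a contaminant
grid = np.linspace(-5, 25, 3001)

def posterior_mean(loglik_fn):
    ll = np.array([loglik_fn(mu) for mu in grid])
    w = np.exp(ll - ll.max())       # subtract max for numerical stability
    w /= w.sum()
    return float((grid * w).sum())

def normal_loglik(mu):
    return -0.5 * np.sum((data - mu) ** 2)

def t_loglik(mu, df=3.0):
    # log of the unnormalized Student-t density, summed over the data
    return -0.5 * (df + 1) * np.sum(np.log1p((data - mu) ** 2 / df))

mean_normal = posterior_mean(normal_loglik)   # dragged toward the outlier
mean_t = posterior_mean(t_loglik)             # largely discounts the outlier
```

The normal posterior mean lands near the contaminated sample mean (about 3.35 here), while the t posterior mean stays close to the bulk of the data near zero, which is the robustness contrast the abstract describes.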
Publisher: Center for Open Science
Date: 31-10-2019
Abstract: Proponents of preregistration argue that, among other benefits, it improves the diagnosticity of statistical tests [1]. In the strong version of this argument, preregistration does this by solving statistical problems, such as family-wise error rates. In the weak version, it nudges people to think more deeply about their theories, methods, and analyses. We argue against both: the diagnosticity of statistical tests depends entirely on how well statistical models map onto underlying theories, and so improving statistical techniques does little to improve theories when the mapping is weak. There is also little reason to expect that preregistration will spontaneously help researchers to develop better theories (and, hence, better methods and analyses).
Publisher: American Psychological Association (APA)
Date: 07-2023
DOI: 10.1037/DEC0000179
Publisher: Center for Open Science
Date: 11-01-2021
Abstract: Psychological science is at an inflection point: The COVID-19 pandemic has already begun to exacerbate inequalities that stem from our historically closed and exclusive culture. Meanwhile, reform efforts to change the future of our science are too narrow in focus to fully succeed. In this paper, we call on psychological scientists—focusing specifically on those who use quantitative methods in the United States as one context for such conversations—to begin reimagining our discipline as fundamentally open and inclusive. First, we discuss whom our discipline was designed to serve and how this history produced the inequitable reward and support systems we see today. Second, we highlight how current institutional responses to address worsening inequalities are inadequate, as well as how our disciplinary perspective may both help and hinder our ability to craft effective solutions. Third, we take a hard look in the mirror at the disconnect between what we ostensibly value as a field and what we actually practice. Fourth and finally, we lead readers through a roadmap for reimagining psychological science in whatever roles and spaces they occupy, from an informal discussion group in a department to a formal strategic planning retreat at a scientific society.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: In contrast to noun categories, little is known about the graded structure of adjective categories. In this study, we investigated whether adjective categories show a similar graded structure and what determines this structure. The results show that adjective categories, like nouns, exhibit a reliable graded structure. As with nouns, we investigated whether similarity is the main determinant of the graded structure. We derived a low-dimensional similarity representation for adjective categories and found that valence differences in adjectives constitute an important organising principle in this similarity space. Valence was not implicated in the categories' graded structure, however. A formal similarity-based model using exemplars accounted for the graded structure by effectively discarding the valence differences between adjectives in the similarity representation through dimensional weighting. Our results generalise similarity-based accounts of graded structure and highlight a closely knit relationship between adjectives and nouns on a representational level.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: There is a long history of research into sequential effects, extending more than one hundred years. The pattern of sequential effects varies widely with experimental conditions as well as across individuals performing the same experiment. Yet this great diversity of results is poorly understood, particularly with respect to individual variation, which, save for some passing mentions, has largely gone unreported in the literature. Here we seek to understand the way in which sequential effects vary by identifying the causes underlying the differences observed in sequential effects. In order to achieve this goal we perform principal component analysis on a dataset of 158 individual results from participants performing different experiments with the aim of identifying hidden variables responsible for sequential effects. We find a latent structure consisting of 3 components related to sequential effects—2 main and 1 minor. A relationship between the 2 main components and the separate processing of stimuli and of responses is proposed on the basis of previous empirical evidence. It is further speculated that the minor component of sequential effects arises as the consequence of processing delays. Independently of the explanation for the latent variables encountered, this work provides a unified descriptive model for a wide range of different types of sequential effects previously identified in the literature. In addition to explaining individual differences themselves, it is demonstrated how the latent structure uncovered here is useful in understanding the classical problem of the dependence of sequential effects on the interval between successive stimuli.
Publisher: Center for Open Science
Date: 23-09-2020
Abstract: This is an archived version of a blog post on preregistration. The first half of the post argues that there is not a strong justification for preregistration as a tool to solve problems with statistical inference (p-hacking); the second half argues that preregistration has a stronger justification as one tool (among many) that can aid scientists in documenting our projects. [Note that this archival version exists only because the blog itself no longer does, and as the original has been cited multiple times there is value in ensuring that some version of the blog post remains accessible.]
Publisher: Center for Open Science
Date: 20-06-2021
Abstract: Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when models are built so as to align parameters of the model with potential causal mechanisms and how they manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical—in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (in press) in the context of using Bayes factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two: The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model comparison metrics like Bayes factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: Forensic handwriting examiners currently testify to the origin of questioned handwriting for legal purposes. However, forensic scientists are increasingly being encouraged to assign probabilities to their observations in the form of a likelihood ratio. This study is the first to examine whether handwriting experts are able to estimate the frequency of US handwriting features more accurately than novices. The results indicate that the absolute error for experts was lower than that of novices, but the size of the effect is modest, and the overall error rate even for experts is large enough as to raise questions about whether their estimates can be sufficiently trustworthy for presentation in courts. When errors are separated into effects caused by miscalibration and those caused by imprecision, we find systematic differences between individuals. Finally, we consider several ways of aggregating predictions from multiple experts, suggesting that quite substantial improvements in expert predictions are possible when a suitable aggregation method is used.
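Why aggregation can beat individual experts is easy to illustrate with a toy example. The "true" frequency and the expert estimates below are invented numbers, and the mean and median are just two simple aggregation rules, not necessarily the ones the study recommends:

```python
# Illustrative sketch of aggregating frequency estimates from several
# hypothetical experts. All numbers here are invented.
TRUE_FREQ = 0.30
estimates = [0.10, 0.25, 0.35, 0.50, 0.45]   # five hypothetical experts

individual_errors = [abs(e - TRUE_FREQ) for e in estimates]
avg_individual_error = sum(individual_errors) / len(individual_errors)

mean_estimate = sum(estimates) / len(estimates)          # simple average
median_estimate = sorted(estimates)[len(estimates) // 2] # middle estimate

mean_error = abs(mean_estimate - TRUE_FREQ)
median_error = abs(median_estimate - TRUE_FREQ)
```

Because individual over- and under-estimates partly cancel, both aggregates end up closer to the true frequency than the average individual expert on this example, which is the basic logic behind crowd-style aggregation of expert judgments.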
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Models of categorization make different representational assumptions, with categories being represented by prototypes, sets of exemplars, and everything in between. Rational models of categorization justify these representational assumptions in terms of different schemes for estimating probability distributions. However, they do not answer the question of which scheme should be used in representing a given category. We show that existing rational models of categorization are special cases of a statistical model called the hierarchical Dirichlet process, which can be used to automatically infer a representation of the appropriate complexity for a given category.
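A key building block of Dirichlet-process models like the one described above is the Chinese restaurant process prior over partitions, which lets the number of clusters (from one prototype to one cluster per exemplar) adapt to the data. A minimal sketch, with illustrative parameters and no claim to reproduce the paper's full hierarchical model:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a partition of n items from a Chinese restaurant process prior.
    Item i joins an existing cluster with probability proportional to its
    size, or starts a new cluster with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    counts = []            # counts[k] = number of items in cluster k
    assignments = []
    for i in range(n):
        weights = counts + [alpha]        # existing clusters, then a new one
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)              # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts
```

Small values of alpha favor few large clusters (prototype-like representations), while large values favor many small clusters (exemplar-like representations), which is how a single model can span the representational spectrum the abstract describes.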
Publisher: Center for Open Science
Date: 12-02-2019
Abstract: Categorization and generalization are fundamentally related inference problems. Yet leading computational models of categorization (as exemplified by, e.g., Nosofsky, 1986) and generalization (as exemplified by, e.g., Tenenbaum & Griffiths, 2001) make qualitatively different predictions about how inference should change as a function of the number of items. Assuming all else is equal, categorization models predict that increasing the number of items in a category increases the chance of assigning a new item to that category; generalization models predict a decrease, or category tightening, with additional exemplars. This paper investigates this discrepancy, showing that people do indeed perform qualitatively differently in categorization and generalization tasks even when all superficial elements of the task are kept constant. Furthermore, the effect of category frequency on generalization is moderated by assumptions about how the items are sampled. We show that neither model naturally accounts for the pattern of behavior across both categorization and generalization tasks, and discuss theoretical extensions of these frameworks to account for the importance of category frequency and sampling assumptions.
Publisher: American Psychological Association (APA)
Date: 07-2017
DOI: 10.1037/REV0000052
Abstract: Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.
Publisher: Springer Science and Business Media LLC
Date: 27-11-2018
Publisher: Center for Open Science
Date: 28-04-2020
Abstract: The exploration/exploitation trade-off (EE trade-off) describes how, when faced with several competing alternatives, decision-makers must often choose between a known good alternative (exploitation) and one or more unknown but potentially more rewarding alternatives (exploration). Prevailing theory on how humans perform the EE trade-off states that uncertainty is a major motivator for exploration: the more uncertain the environment, the more exploration that will occur. The current paper examines whether exploratory behaviour in both choice and attention may be impacted differently depending on whether uncertainty is onset suddenly (unexpected uncertainty), or more slowly (expected uncertainty). It is shown that when uncertainty was expected, participants tended to explore less with their choices, but not their attention, than when it was unexpected. Crucially, the impact of this "protection from uncertainty" on exploration only occurred when participants had an opportunity to learn the structure of the task prior to experiencing uncertainty. This suggests that the interaction between uncertainty and exploration is more nuanced than simply more uncertainty leading to more exploration, and that attention and choice behaviour may index separate aspects of the EE trade-off.
Publisher: American Psychological Association (APA)
Date: 04-2022
DOI: 10.1037/XLM0000883
Abstract: The exploration/exploitation trade-off (EE trade-off) describes how, when faced with several competing alternatives, decision-makers must often choose between a known good alternative (exploitation) and one or more unknown but potentially more rewarding alternatives (exploration). Prevailing theory on how humans perform the EE trade-off states that uncertainty is a major motivator for exploration: the more uncertain the environment, the more exploration that will occur. The current article examines whether exploratory behavior in both choice and attention may be impacted differently depending on whether uncertainty is onset suddenly (unexpected uncertainty), or more slowly (expected uncertainty). It is shown that when uncertainty was expected, participants tended to explore less with their choices, but not their attention, than when it was unexpected. Crucially, the impact of this "protection from uncertainty" on exploration only occurred when participants had an opportunity to learn the structure of the task before experiencing uncertainty. This suggests that the interaction between uncertainty and exploration is more nuanced than simply more uncertainty leading to more exploration, and that attention and choice behavior may index separate aspects of the EE trade-off. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond this data by reasoning about how the data was sampled. This idea is investigated through an examination of premise non‐monotonicity, in which adding premises to a category‐based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category‐based induction taking premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: first, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption, and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Human languages vary in many ways but also show striking cross‐linguistic universals. Why do these universals exist? Recent theoretical results demonstrate that Bayesian learners transmitting language to each other through iterated learning will converge on a distribution of languages that depends only on their prior biases about language and the quantity of data transmitted at each point; the structure of the world being communicated about plays no role (Griffiths & Kalish, 2005, 2007). We revisit these findings and show that when certain assumptions about the relationship between language and the world are abandoned, learners will converge to languages that depend on the structure of the world as well as their prior biases. These theoretical results are supported with a series of experiments showing that when human learners acquire language through iterated learning, the ultimate structure of those languages is shaped by the structure of the meanings to be communicated.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: Word associations have been used widely in psychology, but the validity of their application strongly depends on the number of cues included in the study and the extent to which they probe all associations known by an individual. In this work, we address both issues by introducing a new English word association dataset. We describe the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world. Our procedure allowed subjects to provide multiple responses for each cue, which permits us to measure weak associations. We evaluate the utility of the dataset in several different contexts, including lexical decision and semantic categorization. We also show that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity. Finally, a comparison with existing English word association sets further highlights systematic improvements provided through these new norms.
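As a hypothetical illustration of the spreading-activation idea mentioned above (the toy graph and all association strengths are invented, not taken from the norms), activation can be propagated from a cue word along weighted association edges:

```python
def spread_activation(graph, source, steps=2, decay=0.5):
    """Propagate activation from `source` over a weighted association
    graph (graph[node] = {neighbour: strength, ...}) for `steps`
    rounds, attenuating by `decay` at each hop."""
    activation = {source: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            edges = graph.get(node, {})
            total = sum(edges.values())
            for neighbour, weight in edges.items():
                # each node passes on a decayed share of its activation,
                # split in proportion to association strength
                new[neighbour] = new.get(neighbour, 0.0) + decay * act * weight / total
        activation = new
    return activation

# Toy association graph (invented strengths, not the actual norms).
g = {
    "cat": {"dog": 3.0, "mouse": 1.0},
    "dog": {"cat": 3.0, "bone": 2.0},
    "mouse": {"cheese": 2.0},
}
act = spread_activation(g, "cat")
assert act["dog"] > act["mouse"]  # stronger associates accumulate more activation
```

Words two hops away (here "bone" and "cheese") receive small but nonzero activation, which is the sense in which such measures can capture weak, indirect associations.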
Publisher: Wiley
Date: 09-2020
DOI: 10.1111/COGS.12895
Abstract: The extent to which we generalize a novel property from a sample of familiar instances to novel instances depends on the sample composition. Previous property induction experiments have only used samples consisting of novel types (unique entities). Because real‐world evidence samples often contain redundant tokens (repetitions of the same entity), we studied the effects on property induction of adding types and tokens to an observed sample. In Experiments 1–3, we presented participants with a sample of birds or flowers known to have a novel property and probed whether this property generalized to novel items varying in similarity to the initial sample. Increasing the number of novel types (e.g., new birds with the target property) in a sample produced tightening, promoting property generalization to highly similar stimuli but decreasing generalization to less similar stimuli. On the other hand, increasing the number of tokens (e.g., repeated presentations of the same bird with the target property) had little effect on generalization. Experiment 4 showed that repeated tokens are encoded and can benefit recognition, but appear to be given little weight when inferring property generalization. We modified an existing Bayesian model of induction (Navarro, Dry, & Lee, 2012) to account for both the information added by new types and the discounting of information conveyed by tokens.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: A robust finding in category-based induction tasks is for positive observations to raise the willingness to generalize to other categories while negative observations lower the willingness to generalize. This pattern is referred to as monotonic generalization. Across three experiments we find systematic non-monotonicity effects, in which negative observations raise the willingness to generalize. Experiments 1 and 2 show that this effect emerges in hierarchically structured domains when a negative observation from a different category is added to a positive observation. They also demonstrate that this is related to a specific kind of shift in the reasoner’s hypothesis space. Experiment 3 shows that the effect depends on the assumptions that the reasoner makes about how inductive arguments are constructed. Non-monotonic reasoning occurs when people believe the facts were put together by a helpful communicator, but monotonicity is restored when they believe the observations were sampled randomly from the environment.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: When generalizing properties from known to novel instances, both positive evidence (instances known to possess a property) and negative evidence (instances known not to possess a property) must be integrated. The current study compared generalization based on positive evidence alone against a mixture of positive evidence and perceptually dissimilar negative evidence in an interdimensional discrimination procedure. In 2 experiments, we compared generalization following training with a single positive stimulus (that predicted shock) against groups where an additional negative stimulus (that did not predict shock) was presented in a causal judgment (Experiment 1) and a fear conditioning (Experiment 2) procedure. In contrast to animal conditioning studies, we found that adding a “distant” negative stimulus resulted in an overall increase in generalization to stimuli varying on the dimension of the positive stimulus, consistent with the inductive reasoning literature. We show that this key qualitative result can be simulated by a Bayesian model that incorporates helpful sampling assumptions. Our results suggest that similar processes underlie generalization in inductive reasoning and associative learning tasks.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: A key phenomenon in inductive reasoning is the diversity effect, whereby a novel property is more likely to be generalized when it is shared by an evidence sample composed of diverse instances than a sample composed of similar instances. We outline a Bayesian model and an experimental study that show that the diversity effect depends on the assumption that samples of evidence were selected by a helpful agent (strong sampling). Inductive arguments with premises containing either diverse or nondiverse evidence samples were presented under different sampling conditions, where instructions and filler items indicated that the samples were selected intentionally (strong sampling) or randomly (weak sampling). A robust diversity effect was found under strong sampling, but was attenuated under weak sampling. As predicted by our Bayesian model, the largest effect of sampling was on arguments with nondiverse evidence, where strong sampling led to more restricted generalization than weak sampling. These results show that the characteristics of evidence that are deemed relevant to an inductive reasoning problem depend on beliefs about how the evidence was generated.
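The strong-versus-weak sampling distinction can be made concrete with a minimal Bayesian sketch. The hypotheses and items below are hypothetical, and the model follows the standard size-principle formulation (likelihood 1/|h|^n under strong sampling, constant under weak sampling), not the paper's exact implementation:

```python
def posterior(hypotheses, prior, data, strong=True):
    """Posterior over hypotheses given observed items.

    Each hypothesis is the set of items it covers. Under strong
    sampling the likelihood is (1/|h|)**n for n observed items (the
    "size principle"); under weak sampling every consistent
    hypothesis receives a constant likelihood.
    """
    weights = []
    for h, p in zip(hypotheses, prior):
        if all(x in h for x in data):
            like = (1.0 / len(h)) ** len(data) if strong else 1.0
        else:
            like = 0.0  # hypothesis ruled out by the data
        weights.append(p * like)
    z = sum(weights)
    return [w / z for w in weights]

# Two nested (hypothetical) hypotheses: narrow vs broad.
hyps = [{"sparrow", "robin"}, {"sparrow", "robin", "eagle", "penguin"}]
prior = [0.5, 0.5]
data = ["sparrow", "robin"]

strong = posterior(hyps, prior, data, strong=True)   # favours narrow: [0.8, 0.2]
weak = posterior(hyps, prior, data, strong=False)    # indifferent: [0.5, 0.5]
assert strong[0] > weak[0]
```

Under strong sampling the narrow hypothesis is favoured because a helpful agent drawing from the broad category would have been less likely to pick only its two smallest members; under weak sampling the same data leave the prior unchanged.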
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: It is well known that people attempting to perform hypothesis testing show a positive test bias, preferring to request evidence that is consistent (rather than inconsistent) with their current hypothesis. Rather than viewing this as an irrational bias, information theoretic accounts of hypothesis testing have argued that selecting tests likely to produce positive evidence is adaptive when most hypotheses are small (i.e., true of few entities in the world) and respond positively to very few queries. These accounts make the prediction that as hypotheses get larger, the relative utility of positive evidence will decrease; when hypotheses are large enough, negative evidence will become more useful than positive evidence. We test if people are sensitive to this change in utility with an experiment inspired by the game “Battleship,” in which people attempt to discover the correct arrangement of ships by asking for positive or negative evidence. As predicted, as hypotheses become larger people request less positive evidence, and when hypotheses are large requests for negative evidence are more likely than requests for positive evidence. Implications for the nature of the positive test bias are discussed.
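The predicted crossover in the value of positive versus negative evidence can be illustrated with a simple information-theoretic sketch (an assumption-laden toy calculation, not the paper's model): a yes/no query is most informative when its outcome probability is near 0.5, so as hypotheses grow and positive answers become common, queries likely to return negative answers gain value.

```python
from math import log2

def binary_entropy(p):
    """Shannon entropy (in bits) of a binary outcome with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def query_value(p_positive):
    """Expected information (bits) gained from a yes/no query whose
    probability of a positive answer is p_positive; peaks at p = 0.5."""
    return binary_entropy(p_positive)

# Small hypothesis: positive answers are rare (base rate 0.1), so a
# query engineered to be more likely positive (0.3) is more informative.
assert query_value(0.3) > query_value(0.1)

# Large hypothesis: positive answers are common (base rate 0.9), so a
# query more likely to return a negative answer (0.7) is the better choice.
assert query_value(0.7) > query_value(0.9)
```

The base rates here (0.1, 0.3, 0.7, 0.9) are arbitrary illustrations of "small" and "large" hypotheses; the qualitative point is only that outcome entropy, and hence query value, peaks at 0.5.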
Publisher: Center for Open Science
Date: 02-10-2019
Abstract: Efficient communication leaves gaps between message and meaning. Interlocutors, by reasoning about how each other reasons, can help to fill these gaps. To the extent that such meta-inference is not calibrated, communication is impaired, raising the possibility of manipulation for deceptive ends. We examined how people reason when acting as the perpetrator or target of deception across two related experiments. Importantly, the nature of the task precluded outright lying. Thus, deception required withholding information or providing data that was factually correct but nonetheless misleading. We find evidence for two distinct patterns of behaviour. One group of people appear to make assumptions about communicative intent based on context and message content. Senders in this group were more likely to mislead, and receivers were more effectively misled. A second group of people appeared to adopt a more defensive stance, displaying the same cautious approach in all situations. We explain this behaviour using a computational account of the kinds of inferences required by both receiver and sender. These distinct patterns arise from different assumptions about the generative process behind communication.
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: The study of semi-supervised category learning has generally focused on how additional unlabeled information, combined with given labeled information, might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people’s responses are driven by the specific set of labels they see. We present an extension of Anderson’s Rational Model of Categorization that captures this effect.
Publisher: Center for Open Science
Date: 28-11-2018
Abstract: One of the main limitations in natural language-based approaches to meaning is that they are not grounded. In this study, we evaluate how well different kinds of models account for people’s representations of both concrete and abstract concepts. The models are both unimodal (language-based only) models and multimodal distributional semantic models (which additionally incorporate perceptual and/or affective information). The language-based models include both external (based on text corpora) and internal (derived from word associations) language. We present two new studies and a re-analysis of a series of previous studies demonstrating that the unimodal performance is substantially higher for internal models, especially when comparisons at the basic level are considered. For multimodal models, our findings suggest that additional visual and affective features lead to only slightly more accurate mental representations of word meaning than what is already encoded in internal language models; however, for abstract concepts, visual and affective features improve the predictions of external text-based models. Our work presents new evidence that the grounding problem includes abstract words as well and is therefore more widespread than previously suggested. Implications for both embodied and distributional views are discussed.
Publisher: American Psychological Association (APA)
Date: 2016
DOI: 10.1037/XGE0000106
Abstract: There is a long history of research into sequential effects, extending back more than one hundred years. The pattern of sequential effects varies widely with experimental conditions as well as across different individuals performing the same experiment. Yet this great diversity of results is poorly understood, particularly with respect to individual variation, which save for some passing mentions has largely gone unreported in the literature. Here we seek to understand the way in which sequential effects vary by identifying the causes underlying the differences observed in sequential effects. In order to achieve this goal we perform principal component analysis on a dataset of 158 individual results from participants performing different experiments with the aim of identifying hidden variables responsible for sequential effects. We find a latent structure consisting of 3 components related to sequential effects: 2 main and 1 minor. A relationship between the 2 main components and the separate processing of stimuli and of responses is proposed on the basis of previous empirical evidence. It is further speculated that the minor component of sequential effects arises as the consequence of processing delays. Independently of the explanation for the latent variables encountered, this work provides a unified descriptive model for a wide range of different types of sequential effects previously identified in the literature. In addition to explaining individual differences themselves, it is demonstrated how the latent structure uncovered here is useful in understanding the classical problem of the dependence of sequential effects on the interval between successive stimuli.
Publisher: Center for Open Science
Date: 25-06-2018
Abstract: How does the process of information transmission affect the cultural or linguistic products that emerge? This question is often studied experimentally and computationally via iterated learning: a procedure in which participants learn from previous participants in a chain. Iterated learning is a powerful tool because, when all participants share the same priors, the stationary distributions of the iterated learning chains reveal those priors. In many situations, however, it is unreasonable to assume that all participants share the same prior beliefs. We present four simulation studies and one experiment demonstrating that when the population of learners is heterogeneous, the behavior of an iterated learning chain can be unpredictable, and is often systematically distorted by the learners with the most extreme biases. This results in group-level outcomes that reflect neither the behavior of any individuals within the population nor the overall population average. We discuss implications for the use of iterated learning as a methodological tool as well as for the processes that might have shaped cultural and linguistic evolution in the real world.
Publisher: Wiley
Date: 2021
DOI: 10.1111/COGS.12922
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.SCIJUS.2016.10.004
Abstract: The assignment of personal probabilities to form a forensic practitioner's likelihood ratio is a mental operation subject to all the frailties of human memory, perception and judgment. While we agree that beliefs expressed as coherent probabilities are neither 'right' nor 'wrong' we argue that debate over this fact obscures both the requirement for and consideration of the 'helpfulness' of practitioners' opinions. We also question the extent to which a likelihood ratio based on personal probabilities can realistically be expected to 'encapsulate all uncertainty'. Courts cannot rigorously assess a forensic practitioner's bare assertions of belief regarding evidential strength. At a minimum, information regarding the uncertainty both within and between the opinions of practitioners is required.
Publisher: Center for Open Science
Date: 03-02-2021
Abstract: Proposed psychological mechanisms generating non-instrumental information seeking in humans can be broadly categorised into two competing accounts: the maximisation of anticipated rewards versus an aversion to uncertainty. We compare three separate formalisations of these theories on their ability to track the dependency of information seeking behaviour on increasing levels of cue-outcome delay as well as their sensitivity to outcome valence. Across three experiments using a variety of different stimuli, we observe a flat to monotonically increasing pattern of delay dependency and minimal evidence of sensitivity to outcome valence, patterns which are better predicted, qualitatively and quantitatively, by an uncertainty-aversion model of information seeking.
Publisher: Center for Open Science
Date: 20-08-2019
Abstract: Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.
Publisher: SAGE Publications
Date: 16-02-2021
Abstract: It is commonplace, when discussing the subject of psychological theory, to write articles from the assumption that psychology differs from the physical sciences in that we have no theories that would support cumulative, incremental science. In this brief article I discuss one counterexample: Shepard’s law of generalization and the various Bayesian extensions that it inspired over the past 3 decades. Using Shepard’s law as a running example, I argue that psychological theory building is not a statistical problem, mathematical formalism is beneficial to theory, measurement and theory have a complex relationship, rewriting old theory can yield new insights, and theory growth can drive empirical work. Although I generally suggest that the tools of mathematical psychology are valuable to psychological theorists, I also comment on some limitations to this approach.
Start Date: 09-2004
End Date: 09-2007
Amount: $229,568.00
Funder: Australian Research Council
Start Date: 09-2005
End Date: 08-2007
Amount: $150,000.00
Funder: Australian Research Council
Start Date: 2012
End Date: 07-2017
Amount: $583,403.00
Funder: Australian Research Council
Start Date: 2015
End Date: 04-2019
Amount: $330,500.00
Funder: Australian Research Council
Start Date: 2007
End Date: 12-2011
Amount: $510,000.00
Funder: Australian Research Council