ORCID Profile
0000-0001-7538-0720
Current Organisation
University of California
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Learning, Memory, Cognition And Language | Sensory Processes, Perception And Performance | Decision Making | Cognitive Science | Resources Engineering and Extractive Metallurgy | Knowledge Representation And Machine Learning | Neurocognitive Patterns And Neural Networks | Petroleum And Reservoir Engineering | Psychological Methodology, Design and Analysis | Atomic And Molecular Physics | Psychology and Cognitive Sciences not elsewhere classified | Petroleum Geology | Developmental Psychology and Ageing
Behavioural and cognitive sciences | Expanding Knowledge in Psychology and Cognitive Sciences | Oil and gas | Biological sciences | Global climate change adaptation measures |
Publisher: Wiley
Date: 02-11-2018
DOI: 10.1111/COGS.12561
Abstract: We apply the "wisdom of the crowd" idea to human category learning, using a simple approach that combines people's categorization decisions by taking the majority decision. We first show that the aggregated crowd category learning behavior found by this method performs well, learning categories more quickly than most or all individuals for 28 previously collected datasets. We then extend the approach so that it does not require people to categorize every stimulus. We do this using a model-based method that predicts the categorization behavior people would produce for new stimuli, based on their behavior with observed stimuli, and uses the majority of these predicted decisions. We demonstrate and evaluate the model-based approach in two case studies. In the first, we use the general recognition theory decision-bound model of categorization (Ashby & Townsend, ) to infer each person's decision boundary for two categories of perceptual stimuli, and we use these inferences to make aggregated predictions about new stimuli. In the second, we use the generalized context model exemplar model of categorization (Nosofsky, ) to infer each person's selective attention for face stimuli, and we use these inferences to make aggregated predictions about withheld stimuli. In both case studies, we show that our method successfully predicts the category of unobserved stimuli, and we emphasize that the aggregated crowd decisions arise from psychologically interpretable processes and parameters. We conclude by discussing extensions and potential real-world applications of the approach.
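The behavior-based aggregation described above reduces, at its simplest, to a per-stimulus majority vote over people's category decisions. The sketch below illustrates that idea with made-up labels and data; the function and variable names are illustrative, not from the paper:

```python
from collections import Counter

def crowd_decision(decisions):
    """Aggregate individual category decisions by majority vote.

    decisions: category labels, one per person, for a single stimulus.
    Returns the most common label (ties broken arbitrarily by Counter).
    """
    return Counter(decisions).most_common(1)[0][0]

# Each inner list: one person's category choices for stimuli s1..s4.
people = [
    ["A", "A", "B", "B"],
    ["A", "B", "B", "B"],
    ["B", "A", "B", "A"],
]
# Take the majority decision per stimulus (i.e., per column).
crowd = [crowd_decision(column) for column in zip(*people)]
print(crowd)  # ['A', 'A', 'B', 'B']
```

The model-based extension in the paper replaces the observed decisions with model-predicted decisions for unobserved stimuli, but the aggregation step is the same majority rule.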
Publisher: Springer Science and Business Media LLC
Date: 17-08-2022
DOI: 10.3758/S13428-021-01634-1
Abstract: The Balloon Analogue Risk Task (BART) is widely-used to measure risk propensity in theoretical, clinical, and applied research. In the task, people choose either to pump a balloon to increase its value at the risk of the balloon bursting and losing all value, or to bank the current value of the balloon. Risk propensity is most commonly measured as the average number of pumps on trials for which the balloon does not burst. Burst trials are excluded because they necessarily underestimate the number of pumps people intended to make. However, their exclusion discards relevant information about people's risk propensity. A better measure of risk propensity uses the statistical method of censoring to incorporate all of the trials. We develop a new Bayesian method, based on censoring, for measuring both risk propensity and behavioral consistency in the BART. Through applications to previous data we demonstrate how the method can be extended to consider the correlation of risk propensity with external measures, and to compare differences in risk propensity between groups. We provide implementations of all of these methods in R, MATLAB, and the GUI-based statistical software JASP.
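The key statistical move, treating burst trials as censored rather than discarding them, can be illustrated outside the paper's Bayesian setting. The sketch below uses a maximum-likelihood stand-in under a Normal assumption for intended pumps: banked trials are exact observations and burst trials are right-censored lower bounds. The data, the fixed spread parameter, and the grid search are all made up for illustration:

```python
from statistics import NormalDist
import math

def censored_nll(mu, sigma, banked, burst):
    """Negative log-likelihood of intended pumps under Normal(mu, sigma),
    treating each burst trial as right-censored at the burst pump."""
    d = NormalDist(mu, sigma)
    ll = sum(math.log(d.pdf(k)) for k in banked)      # exact observations
    ll += sum(math.log(1 - d.cdf(b)) for b in burst)  # censored: intended >= b
    return -ll

banked = [4, 5, 6, 5, 7]  # pumps on trials where the balloon was banked
burst = [6, 8]            # pump at which the balloon burst

# The conventional measure ignores burst trials, underestimating propensity.
naive = sum(banked) / len(banked)

# Censored MLE via a coarse grid search over mu (sigma fixed for simplicity).
mus = [m / 10 for m in range(30, 120)]
mle = min(mus, key=lambda m: censored_nll(m, 2.0, banked, burst))
print(naive, mle)  # the censored estimate is pulled above the naive mean
```

Including the censored burst trials necessarily shifts the estimate upward, which is the direction of bias the abstract identifies in the conventional measure.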
Publisher: Wiley
Date: 07-2021
DOI: 10.1111/COGS.13011
Abstract: We study the wisdom of the crowd in three sequential decision‐making tasks: the Balloon Analogue Risk Task (BART), optimal stopping problems, and bandit problems. We consider a behavior‐based approach, using majority decisions to determine crowd behavior and show that this approach performs poorly in the BART and bandit tasks. The key problem is that the crowd becomes progressively more extreme as the decision sequence progresses, because the diversity of opinion that underlies the wisdom of the crowd is lost. We also consider model‐based approaches to each task. This involves inferring cognitive models for each individual based on their observed behavior, and using these models to predict what each individual would do in any possible task situation. We show that this approach performs robustly well for all three tasks and has the additional advantage of being able to generalize to new problems for which there are no behavioral data. We discuss potential applications of the model‐based approach to real‐world sequential decision problems and discuss how our approach contributes to the understanding of collective intelligence.
Publisher: Wiley
Date: 12-2008
DOI: 10.1080/03640210802414826
Abstract: This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues that, although often useful in specific settings, most of these approaches are limited in their ability to give a general assessment of models. This article argues that hierarchical methods, generally, and hierarchical Bayesian methods, specifically, can provide a more thorough evaluation of models in the cognitive sciences. This article presents two worked examples of hierarchical Bayesian analyses to demonstrate how the approach addresses key questions of descriptive adequacy, parameter inference, prediction, and generalization in principled and coherent ways.
Publisher: Elsevier BV
Date: 02-2016
Publisher: Springer Berlin Heidelberg
Date: 2001
Publisher: Wiley
Date: 05-11-2013
DOI: 10.1016/J.JALZ.2012.01.016
Abstract: Identifying disease-modifying treatment effects in earlier stages of Alzheimer's disease (AD)-when changes are subtle-will require improved trial design and more sensitive analytical methods. We applied hierarchical Bayesian analysis with cognitive processing (HBCP) models to the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) and MCI (mild cognitive impairment) Screen word list memory task data from 14 AD patients of the Myriad Pharmaceuticals' phase III clinical trial of Flurizan (a γ-secretase modulator) versus placebo. The original analysis of 1649 patients found no treatment group differences. HBCP analysis and the original ADAS-Cog analysis were performed on the small sample. HBCP analysis detected impaired memory storage during delayed recall, whereas the original ADAS-Cog analytical method did not. The HBCP model identified a harmful treatment effect in a small sample, which has been independently confirmed from the results of other γ-secretase inhibitors. The original analytical method applied to the ADAS-Cog data did not detect this harmful treatment effect on either the full or the small sample. These findings suggest that HBCP models can detect treatment effects more sensitively than currently used analytical methods required by the Food and Drug Administration, and they do so using small patient samples.
Publisher: Elsevier BV
Date: 04-2014
Publisher: Springer Science and Business Media LLC
Date: 08-10-2016
Publisher: Springer Science and Business Media LLC
Date: 26-06-2021
Publisher: Springer Science and Business Media LLC
Date: 28-10-2016
DOI: 10.3758/S13428-015-0662-4
Abstract: We study the effect of memory impairment on triadic comparisons of animal names in a large clinical data set. We define eight groups of subjects in terms of their delayed free recall performance, and present standard analyses of the triadic comparison and free recall data that provide little insight into the effect of memory impairment on semantic structure. We then develop and apply two new methods for analyzing the data, based on cognitive models and using Bayesian statistical inference. The first new method focuses on modeling changes in semantic representation, by inferring multidimensional scaling (MDS) representations for each group based on their triadic comparisons. These representations reveal a successive decrease in semantic cluster structure and increase in uncertainty with increasing impairment. We propose a measure of spatial organization as a means of quantifying the visually evident changes in semantic organization, and demonstrate its usefulness. The second new method focuses on modeling changes in memory access with impairment, inferring the extent to which each individual makes triadic comparisons consistent with a common semantic representation. Although these inferences are based on just 12 comparisons per subject, we show that they vary systematically with memory impairment group. We conclude by discussing the potential for clinical application of our new models, measures, and methods.
Publisher: IEEE
Date: 2000
Publisher: Wiley
Date: 12-2008
DOI: 10.1080/03640210802073697
Abstract: This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can provide a unifying explanation of the representational possibilities using 2 parameters. One parameter controls the emphasis on abstraction in category representations, and the other controls the emphasis on similarity. Using 30 previously published data sets, this work shows how inferences about these parameters, and about the category representations they generate, can be used to evaluate data in terms of the ongoing exemplar versus prototype and similarity versus rules debates in the literature. Using this concrete example, this article emphasizes the advantages of hierarchical Bayesian models in converting model selection problems to parameter estimation problems, and providing one way of specifying theoretically based priors for competing models.
Publisher: Springer Science and Business Media LLC
Date: 04-2010
DOI: 10.3758/PBR.17.2.270
Publisher: Elsevier BV
Date: 08-2010
Publisher: American Psychological Association (APA)
Date: 04-2022
DOI: 10.1037/DEC0000166
Publisher: University of California Press
Date: 2017
DOI: 10.1525/COLLABRA.78
Abstract: Whenever parameter estimates are uncertain or observations are contaminated by measurement error, the Pearson correlation coefficient can severely underestimate the true strength of an association. Various approaches exist for inferring the correlation in the presence of estimation uncertainty and measurement error, but none are routinely applied in psychological research. Here we focus on a Bayesian hierarchical model proposed by Behseta, Berdyyeva, Olson, and Kass (2009) that allows researchers to infer the underlying correlation between error-contaminated observations. We show that this approach may also be applied to obtain the underlying correlation between uncertain parameter estimates, as well as the correlation between uncertain parameter estimates and noisy observations. We illustrate the Bayesian modeling of correlations with two empirical data sets. In each data set, we first infer the posterior distribution of the underlying correlation and then compute Bayes factors to quantify the evidence that the data provide for the presence of an association.
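The attenuation effect the abstract starts from is easy to reproduce by simulation: contaminating both variables with independent measurement error shrinks the sample Pearson correlation well below the true association. A minimal sketch with simulated data only (the noise levels are arbitrary choices, and this illustrates the problem rather than the paper's hierarchical solution):

```python
import random

random.seed(1)

# Simulate a strong latent association, then contaminate both
# variables with independent Gaussian measurement error.
n = 10_000
x_true = [random.gauss(0, 1) for _ in range(n)]
y_true = [x + random.gauss(0, 0.5) for x in x_true]
x_obs = [x + random.gauss(0, 1.0) for x in x_true]
y_obs = [y + random.gauss(0, 1.0) for y in y_true]

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

r_true = pearson(x_true, y_true)
r_obs = pearson(x_obs, y_obs)
print(round(r_true, 2), round(r_obs, 2))  # observed r is attenuated
```

With these noise levels the observed correlation drops to roughly half the true one; the hierarchical model in the paper is designed to recover the underlying value from the noisy observations.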
Publisher: Emerald Publishing Limited
Date: 30-08-2019
Publisher: Springer Science and Business Media LLC
Date: 05-2008
DOI: 10.3758/BRM.40.2.450
Abstract: This article describes and demonstrates the BayesSDT MATLAB-based software package for performing Bayesian analysis with equal-variance Gaussian signal detection theory (SDT). The software uses WinBUGS to draw samples from the posterior distribution of six SDT parameters: discriminability, hit rate, false alarm rate, criterion, and two alternative measures of bias. The software either provides a simple MATLAB graphical user interface or allows a more general MATLAB function call to produce graphs of the posterior distribution for each parameter of interest for each data set, as well as to return the full set of posterior samples.
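For the equal-variance Gaussian SDT model the abstract describes, point estimates of discriminability and criterion follow directly from probit-transformed hit and false-alarm rates; BayesSDT replaces these point estimates with full posterior distributions. A minimal sketch of the standard point estimates (the example rates are made up):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def sdt_parameters(hit_rate, fa_rate):
    """Point estimates for equal-variance Gaussian SDT:
    discriminability d' and criterion c from hit and false-alarm rates."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d_prime, criterion = sdt_parameters(hit_rate=0.8, fa_rate=0.2)
print(d_prime, criterion)  # d' about 1.68, c about 0 (unbiased)
```

Symmetric hit and false-alarm rates give a criterion of zero, i.e., no response bias, while d' measures how well the signal and noise distributions are separated.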
Publisher: Oxford University Press
Date: 15-04-2011
Publisher: Springer Science and Business Media LLC
Date: 13-02-2018
DOI: 10.3758/S13423-017-1238-3
Abstract: The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
Publisher: IEEE
Date: 2005
Publisher: Springer Science and Business Media LLC
Date: 2002
Publisher: Elsevier BV
Date: 10-2004
Publisher: Springer Science and Business Media LLC
Date: 03-2002
DOI: 10.3758/BF03196256
Abstract: The ALCOVE model of category learning, despite its considerable success in accounting for human performance across a wide range of empirical tasks, is limited by its reliance on spatial stimulus representations. Some stimulus domains are better suited to featural representation, characterizing stimuli in terms of the presence or absence of discrete features, rather than as points in a multidimensional space. We report on empirical data measuring human categorization performance across a featural stimulus domain and show that ALCOVE is unable to capture fundamental qualitative aspects of this performance. In response, a featural version of the ALCOVE model is developed, replacing the spatial stimulus representations that are usually generated by multidimensional scaling with featural representations generated by additive clustering. We demonstrate that this featural version of ALCOVE is able to capture human performance where the spatial model failed, explaining the difference in terms of the contrasting representational assumptions made by the two approaches. Finally, we discuss ways in which the ALCOVE categorization model might be extended further to use "hybrid" representational structures combining spatial and featural components.
Publisher: Purdue University (bepress)
Date: 16-12-2009
Publisher: Center for Open Science
Date: 23-10-2017
Abstract: Human behavioral data often shows patterns of sudden change over time. Sometimes the causes of these step changes are internal, such as learning curves that change abruptly when a learner implements a new rule. Sometimes the cause is external, such as when people's opinions about a topic change in response to a new relevant event. Detecting change points in sequences of binary data is a basic statistical problem, with many existing solutions, but they seem rarely to be used in psychological modeling. We develop a simple and flexible Bayesian approach to modeling step changes in cognition, implemented as a graphical model in JAGS. The model is able to infer how many change points are justified by the data, as well as the location of the change points. The basic model is also easily extended to include latent-mixture and hierarchical structures, allowing it to be tailored to specific cognitive modeling problems. We demonstrate the adequacy of the basic model by applying it to the classic Lindisfarne Scribes problem, and the flexibility of the modeling approach through two new applications. The first involves a latent-mixture model to determine if individuals learn categories incrementally or in discrete stages. The second involves a hierarchical model of crowd-sourced predictions about the winner of the U.S. National Football League's Most Valuable Player for the 2016–2017 season.
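Stripped of the latent-mixture and hierarchical extensions, the core inference is locating where a binary sequence's success rate changes. The sketch below is a maximum-likelihood single-change-point scan, a simpler stand-in for the paper's Bayesian JAGS model, run on a made-up learning sequence:

```python
import math

def bernoulli_ll(seq):
    """Log-likelihood of a binary sequence under its MLE success rate."""
    n, k = len(seq), sum(seq)
    if k == 0 or k == n:
        return 0.0
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1 - p)

def best_change_point(seq):
    """Return the split index maximizing the two-segment likelihood."""
    return max(range(1, len(seq)),
               key=lambda t: bernoulli_ll(seq[:t]) + bernoulli_ll(seq[t:]))

# A learner who switches abruptly from ~20% to ~90% correct at trial 10.
data = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
        1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(best_change_point(data))  # 10
```

The Bayesian version in the paper additionally yields uncertainty about the location and can infer how many change points the data justify, which this point-estimate scan cannot.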
Publisher: Springer Science and Business Media LLC
Date: 10-2003
DOI: 10.3758/BF03196130
Abstract: The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
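The local-to-global account discussed above starts from rapid identification of nearest neighbors. A classic algorithmic counterpart is the nearest-neighbor tour heuristic, sketched below with made-up points; this illustrates the idea, not the hierarchical clustering process model the authors propose:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Build a tour by repeatedly moving to the closest unvisited point,
    a simple local-to-global construction heuristic for the planar
    Euclidean traveling salesperson problem."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

pts = [(0, 0), (5, 5), (1, 0), (4, 5), (2, 1)]
print(nearest_neighbor_tour(pts))  # [0, 2, 4, 3, 1]
```

Human solutions are generally better than this greedy heuristic; the paper's point is that nearest-neighbor structure may seed a hierarchical linking of local clusters rather than determine the tour outright.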
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: This paper develops a new representational model of similarity data that combines continuous dimensions with discrete features. An algorithm capable of learning these representations is described, and a Bayesian model selection approach for choosing the appropriate number of dimensions and features is developed. The approach is demonstrated on a classic data set that considers the similarities between the numbers 0 through 9.
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0028551
Abstract: Wills and Pothos (2012) reviewed approaches to evaluating formal models of categorization, raising a series of worthwhile issues, challenges, and goals. Unfortunately, in discussing these issues and proposing solutions, Wills and Pothos (2012) did not consider Bayesian methods in any detail. This means not only that their review excludes a major body of current work in the field, but also that it does not consider the body of work that provides the best current answers to the issues raised. In this comment, we argue that Bayesian methods can be--and, in most cases, already have been--applied to all the major model evaluation issues raised by Wills and Pothos (2012). In particular, Bayesian methods can address the challenges of avoiding overfitting, considering qualitative properties of data, reducing dependence on free parameters, and testing empirical breadth.
Publisher: Wiley
Date: 03-11-2011
DOI: 10.1002/BDM.703
Publisher: Elsevier BV
Date: 02-2011
Publisher: Elsevier BV
Date: 04-2020
Publisher: Elsevier BV
Date: 10-2011
Publisher: Elsevier BV
Date: 11-2003
Publisher: American Psychological Association (APA)
Date: 2017
DOI: 10.1037/DEC0000056
Publisher: Elsevier BV
Date: 04-2006
Publisher: Springer Science and Business Media LLC
Date: 20-02-2016
DOI: 10.3758/S13428-014-0557-9
Abstract: Hilbig and Moshagen (Psychonomic Bulletin & Review, 21, 1431-1443, 2014) recently developed a method for making inferences about the decision processes people use in multi-attribute forced choice tasks. Their paper makes a number of worthwhile theoretical and methodological contributions. Theoretically, they provide an insightful psychological motivation for a probabilistic extension of the widely-used "weighted additive" (WADD) model, and show how this model, as well as other important models like "take-the-best" (TTB), can and should be expressed in terms of meaningful priors. Methodologically, they develop an inference approach based on the Minimum Description Length (MDL) principles that balances both the goodness-of-fit and complexity of the decision models they consider. This paper aims to preserve these useful contributions, but provide a complementary Bayesian approach with some theoretical and methodological advantages. We develop a simple graphical model, implemented in JAGS, that allows for fully Bayesian inferences about which models people use to make decisions. To demonstrate the Bayesian approach, we apply it to the models and data considered by Hilbig and Moshagen (Psychonomic Bulletin & Review, 21, 1431-1443, 2014), showing how a prior predictive analysis of the models, and posterior inferences about which models people use and the parameter settings at which they use them, can contribute to our understanding of human decision making.
Publisher: Springer Science and Business Media LLC
Date: 22-01-2019
Publisher: Center for Open Science
Date: 31-08-2019
Abstract: The target article on robust modeling (Lee et al.) generated a lot of commentary. In this reply, we discuss some of the common themes in the commentaries: some are simple points of agreement, while others are extensions of a practical or abstract nature. We also address a small number of disagreements or confusions.
Publisher: Springer Science and Business Media LLC
Date: 07-09-2017
DOI: 10.3758/S13428-016-0798-X
Abstract: Take-the-best is a decision-making strategy that chooses between alternatives, by searching the cues representing the alternatives in order of cue validity, and choosing the alternative with the first discriminating cue. Theoretical support for take-the-best comes from the "fast and frugal" approach to modeling cognition, which assumes decision-making strategies need to be fast to cope with a competitive world, and be simple to be robust to uncertainty and environmental change. We contribute to the empirical evaluation of take-the-best in two ways. First, we generate four new environments-involving bridge lengths, hamburger prices, theme park attendances, and US university rankings-supplementing the relatively limited number of naturally cue-based environments previously considered. We find that take-the-best is as accurate as rival decision strategies that use all of the available cues. Secondly, we develop 19 new data sets characterizing the change in cities and their populations in four countries. We find that take-the-best maintains its accuracy and limited search as the environments change, even if cue validities learned in one environment are used to make decisions in another. Once again, we find that take-the-best is as accurate as rival strategies that use all of the cues. We conclude that these new evaluations support the theoretical claims of the accuracy, frugality, and robustness for take-the-best, and that the new data sets provide a valuable resource for the more general study of the relationship between effective decision-making strategies and the environments in which they operate.
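Take-the-best is simple to state in code: check cues in order of validity and decide on the first one that discriminates between the alternatives. A minimal sketch with made-up cues and a made-up validity ordering (the city-size framing is the standard illustrative domain, not one of the paper's new environments):

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Choose between alternatives A and B by checking cues in order of
    validity and deciding on the first cue that discriminates.

    cues_a, cues_b: dicts mapping cue name -> 0/1 value.
    validity_order: cue names, most valid first.
    Returns "A", "B", or "guess" if no cue discriminates.
    """
    for cue in validity_order:
        if cues_a[cue] != cues_b[cue]:
            return "A" if cues_a[cue] > cues_b[cue] else "B"
    return "guess"

# Which of two cities is larger? Illustrative cues and validity order.
city_a = {"capital": 0, "has_team": 1, "on_river": 1}
city_b = {"capital": 0, "has_team": 0, "on_river": 1}
order = ["capital", "has_team", "on_river"]
print(take_the_best(city_a, city_b, order))  # A
```

The frugality claim evaluated in the paper is visible here: the search stops at the first discriminating cue, so most cues are never inspected.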
Publisher: Elsevier BV
Date: 03-2010
DOI: 10.1016/J.ACTPSY.2009.11.004
Abstract: We introduce the special issue on formal models of semantic concepts. After outlining the research questions that motivated the issue, we summarize the rich set of data provided by the Leuven Natural Concepts Database, and provide an overview of the seven research articles in the special issue. Each of these articles applies a formal modeling approach to one or more parts of the database, attempting to further our understanding of how people represent and use semantic concepts.
Publisher: Springer Science and Business Media LLC
Date: 02-2008
DOI: 10.3758/PBR.15.1.1
Abstract: Bayesian statistical inference offers a principled and comprehensive approach for relating psychological models to data. This article presents Bayesian analyses of three influential psychological models: multidimensional scaling models of stimulus representation, the generalized context model of category learning, and a signal detection theory model of decision making. In each case, the model is recast as a probabilistic graphical model and is evaluated in relation to a previously considered data set. In each case, it is shown that Bayesian inference is able to provide answers to important theoretical and empirical questions easily and coherently. The generality of the Bayesian approach and its potential for the understanding of models and data in psychology are discussed.
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0021765
Abstract: Two-choice response times are a common type of data, and much research has been devoted to the development of process models for such data. However, the practical application of these models is notoriously complicated, and flexible methods are largely nonexistent. We combine a popular model for choice response times-the Wiener diffusion process-with techniques from psychometrics in order to construct a hierarchical diffusion model. Chief among these techniques is the application of random effects, with which we allow for unexplained variability among participants, items, or other experimental units. These techniques lead to a modeling framework that is highly flexible and easy to work with. Among the many novel models this statistical framework provides are a multilevel diffusion model, regression diffusion models, and a large family of explanatory diffusion models. We provide examples and the necessary computer code.
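The Wiener diffusion process at the core of the hierarchical model can be simulated directly: noisy evidence accumulates between two boundaries, the boundary reached determines the choice, and the first-passage time plus a non-decision time gives the response time. A minimal Euler-style simulation with made-up parameter values (a sketch of the generative process, not the paper's estimation framework):

```python
import random

def simulate_diffusion(drift, boundary, ndt, dt=0.001, sigma=1.0):
    """Simulate one trial of a Wiener diffusion process.

    Evidence starts midway between boundaries 0 and `boundary` and
    accumulates at rate `drift` plus Gaussian noise; `ndt` is
    non-decision time added to the first-passage time.
    Returns (choice, response_time), choice 1 for the upper boundary.
    """
    x, t = boundary / 2, 0.0
    while 0 < x < boundary:
        x += drift * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return (1 if x >= boundary else 0), ndt + t

random.seed(0)
trials = [simulate_diffusion(drift=1.5, boundary=2.0, ndt=0.3)
          for _ in range(200)]
upper = sum(choice for choice, _ in trials) / len(trials)
print(round(upper, 2))  # positive drift -> mostly upper-boundary choices
```

The hierarchical extension in the paper places random effects on parameters like `drift` across participants or items; the trial-level process being simulated is unchanged.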
Publisher: Springer Science and Business Media LLC
Date: 26-07-2011
DOI: 10.3758/S13428-011-0134-4
Abstract: Number-knower levels are a series of stages of number concept development in early childhood. A child's number-knower level is typically assessed using the give-N task. Although the task procedure has been highly refined, the standard ways of analyzing give-N data remain somewhat crude. Lee and Sarnecka (Cogn Sci 34:51-67, 2010, in press) have developed a Bayesian model of children's performance on the give-N task that allows knower level to be inferred in a more principled way. However, this model requires considerable expertise and computational effort to implement and apply to data. Here, we present an approximation to the model's inference that can be computed with Microsoft Excel. We demonstrate the accuracy of the approximation and provide instructions for its use. This makes the powerful inferential capabilities of the Bayesian model accessible to developmental researchers interested in estimating knower levels from give-N data.
Publisher: Center for Open Science
Date: 19-09-2020
Abstract: There are many ways to measure how people manage risk when they make decisions. A standard approach is to measure risk propensity using self-report questionnaires. An alternative approach is to use decision-making tasks that involve risk and uncertainty, and apply cognitive models of task behavior to infer parameters that measure people’s risk propensity. We report the results of a within-participants experiment that used three questionnaires and four decision-making tasks. The questionnaires are the Risk Propensity Scale, the Risk Taking Index, and the Domain-Specific Risk Taking Scale. The decision-making tasks are the Balloon Analogue Risk Task, the preferential choice gambling task, the optimal stopping problem, and the bandit problem. We analyze the relationships between the risk measures and cognitive parameters using Bayesian inferences about the patterns of correlation, and using a novel cognitive latent variable modeling approach. The results show that people’s risk propensity is generally consistent within different conditions for each of the decision-making tasks. There is, however, little evidence that the way people manage risk generalizes across the tasks, or that it corresponds to the questionnaire measures.
Publisher: SAGE Publications
Date: 2021
DOI: 10.1177/23328584211040083
Abstract: Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how the direction of the scale, labels on scales, and the number of options affect the ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using different scales. We find that people tend to give positive ratings, especially when using letter scales compared with number scales. Furthermore, people tend to use the end-points less often when a scale is presented in reverse. Our model-based analysis allows us to infer how the features of scales shift responses to higher or lower ratings and how they compress scale use to make end-point responses more or less likely. The model also makes predictions about equivalent ratings across scales, which we demonstrate using real-world evaluation data. Our study has implications for the design of scales and for their use in assessment.
Publisher: MIT Press - Journals
Date: 10-1998
DOI: 10.1162/089976698300017151
Abstract: The common neural network modeling practice of representing the elements of a task domain in terms of a set of features lacks justification if the features are derived through some form of ad hoc preabstraction. By examining a featural similarity model related to established multidimensional scaling techniques, a neural network is developed that generates features from similarity data and attaches weights to these features. The network performs a constrained search of a continuous solution space to determine the features and uses a previously developed regularization technique to minimize the number of features it derives. The network is demonstrated on artificial data, from which it recovers known features and weights, and on two real data sets involving the similarity of a set of geometric shapes and the abstract conceptual similarities of the 10 Arabic numerals. On the basis of these results, the relationship between the multidimensional scaling approach adopted by the network and an alternative additive clustering approach to feature extraction is discussed.
Publisher: Elsevier BV
Date: 10-2008
Publisher: Center for Open Science
Date: 14-12-2018
Abstract: Despite its many advocates, Bayesian inference is currently employed by only a minority of social and behavioural scientists. One possible barrier is a lack of consensus on how best to conduct and report such analyses. Employing Bayesian methods involves making choices about prior distributions, likelihood functions and robustness checks, as well as about how to present, visualize and interpret the results (for a glossary of the main Bayesian statistical concepts, see Box 1). Some researchers may find this wide range of choices too daunting to use Bayesian inference in their own study. This paper highlights the areas of agreement and the arguments behind disagreements, established on the back of a self-questionnaire provided and explained in detail on OSF (osf.io/6eqx5/).
Publisher: Springer Science and Business Media LLC
Date: 24-04-2019
Publisher: Springer Science and Business Media LLC
Date: 07-1999
Publisher: Informa UK Limited
Date: 10-2024
Publisher: Elsevier BV
Date: 02-2001
Publisher: Oxford University Press (OUP)
Date: 06-2012
Abstract: Despite their theoretical appeal, Bayesian methods for the assessment of poor effort and malingering are still rarely used in neuropsychological research and clinical diagnosis. In this article, we outline a novel and easy-to-use Bayesian latent group analysis of malingering whose goal is to identify participants displaying poor effort when tested. Our Bayesian approach also quantifies the confidence with which each participant is classified and estimates the base rates of malingering from the observed data. We implement our Bayesian approach and compare its utility in effort assessment to that of the classic below-chance criterion of symptom validity testing (SVT). In two experiments, we evaluate the accuracy of both a Bayesian latent group analysis and the below-chance criterion of SVT in recovering the membership of participants assigned to the malingering group. Experiment 1 uses a simulation research design, whereas Experiment 2 involves the differentiation of patients with a history of stroke from coached malingerers. In both experiments, sensitivity levels are high for the Bayesian method, but low for the below-chance criterion of SVT. Additionally, the Bayesian approach proves to be resistant to possible effects of coaching. We conclude that Bayesian latent group methods complement existing methods in making more informed choices about malingering.
Publisher: Center for Open Science
Date: 16-11-2021
Abstract: Autobiographical memory specificity (AMS) refers to the tendency to recall events that occurred at a particular time and place. We examined the hypothesis that AMS is associated with pattern separation, an essential component of episodic memory that may allow us to encode and retain the unique aspects of events. In Study 1 (N = 94) and Study 2 (preregistered, N = 99), participants completed the Autobiographical Memory Test, which measures AMS, and the Mnemonic Similarity Task, which measures pattern separation. We coded Autobiographical Memory Test responses conventionally and then further classified the categoric memory responses into (i) those that contained words indicating repetitions or regularity (e.g., always, often) and (ii) those that did not. Pattern separation ability correlated positively with specific memories and negatively with categoric memories lacking those words. We propose distinguishing these two types of categoric memory and discuss an integrative model of autobiographical memory structure.
Publisher: Elsevier BV
Date: 12-2010
Publisher: Elsevier BV
Date: 02-2003
Publisher: Elsevier BV
Date: 02-2011
Publisher: The Quantitative Methods for Psychology
Date: 06-2021
Publisher: American Psychological Association (APA)
Date: 07-2005
Publisher: Springer Science and Business Media LLC
Date: 2006
DOI: 10.3758/BF03193653
Abstract: Ormerod and Chronicle (1999) reported that optimal solutions to traveling salesperson problems were judged to be aesthetically more pleasing than poorer solutions and that solutions with more convex hull nodes were rated as better figures. To test these conclusions, solution regularity and the number of potential intersections were held constant, whereas solution optimality, the number of internal nodes, and the number of nearest neighbors in each solution were varied factorially. The results did not support the view that the convex hull is an important determinant of figural attractiveness. Also, in contrast to the findings of Ormerod and Chronicle, there were consistent individual differences. Participants appeared to be divided as to whether the most attractive figure enclosed a given area within a perimeter of minimum or maximum length. It is concluded that future research in this area cannot afford to focus exclusively on group performance measures.
Publisher: American Psychological Association (APA)
Date: 10-2019
DOI: 10.1037/DEC0000105
Publisher: Springer Science and Business Media LLC
Date: 09-10-2019
Publisher: Springer Science and Business Media LLC
Date: 26-02-2001
Abstract: Little research has been carried out on human performance in optimization problems, such as the Traveling Salesman problem (TSP). Studies by Polivanova (1974, Voprosy Psikhologii, 4, 41-51) and by MacGregor and Ormerod (1996, Perception & Psychophysics, 58, 527-539) suggest that: (1) the complexity of solutions to visually presented TSPs depends on the number of points on the convex hull and (2) the perception of optimal structure is an innate tendency of the visual system, not subject to individual differences. Results are reported from two experiments. In the first, measures of the total length and completion speed of pathways, and a measure of path uncertainty were compared with optimal solutions produced by an elastic net algorithm and by several heuristic methods. Performance was also compared under instructions to draw the shortest or the most attractive pathway. In the second, various measures of performance were compared with scores on Raven's advanced progressive matrices (APM). The number of points on the convex hull did not determine the relative optimality of solutions, although both this factor and the total number of points influenced solution speed and path uncertainty. Subjects' solutions showed appreciable individual differences, which had a strong correlation with APM scores. The relation between perceptual organization and the process of solving visually presented TSPs is briefly discussed, as is the potential of optimization for providing a conceptual framework for the study of intelligence.
Publisher: Elsevier BV
Date: 04-2006
Publisher: Springer Science and Business Media LLC
Date: 06-01-2017
Publisher: Elsevier BV
Date: 05-2007
Publisher: IEEE Comput. Soc
Date: 1998
Publisher: Springer Science and Business Media LLC
Date: 08-2005
DOI: 10.3758/BF03196751
Abstract: Many evaluations of cognitive models rely on data that have been averaged or aggregated across all experimental subjects, and so fail to consider the possibility of important individual differences between subjects. Other evaluations are done at the single-subject level, and so fail to benefit from the reduction of noise that data averaging or aggregation potentially provides. To overcome these weaknesses, we have developed a general approach to modeling individual differences using families of cognitive models in which different groups of subjects are identified as having different psychological behavior. Separate models with separate parameterizations are applied to each group of subjects, and Bayesian model selection is used to determine the appropriate number of groups. We evaluate this individual differences approach in a simulation study and show that it is superior in terms of the key modeling goals of prediction and understanding. We also provide two practical demonstrations of the approach, one using the ALCOVE model of category learning with data from four previously analyzed category learning experiments, the other using multidimensional scaling representational models with previously analyzed similarity data for colors. In both demonstrations, meaningful individual differences are found and the psychological models are able to account for this variation through interpretable differences in parameterization. The results highlight the potential of extending cognitive models to consider individual differences.
Publisher: Informa UK Limited
Date: 12-1997
Publisher: Springer Science and Business Media LLC
Date: 08-07-2020
DOI: 10.1007/S42113-020-00082-Y
Abstract: Multidimensional scaling (MDS) models represent stimuli as points in a space consisting of a number of psychological dimensions, such that the distance between pairs of points corresponds to the dissimilarity between the stimuli. Two fundamental challenges in inferring MDS representations from data involve inferring the appropriate number of dimensions and the metric structure of the space used to measure distance. We approach both challenges as Bayesian model-selection problems. Treating MDS as a generative model, we define priors needed for model identifiability under metrics corresponding to psychologically separable and psychologically integral stimulus domains. We then apply a differential evolution Markov-chain Monte Carlo (DE-MCMC) method for parameter inference, and a Warp-III method for model selection. We apply these methods to five previous data sets, which collectively test the ability of the methods to infer an appropriate dimensionality and to infer whether stimuli are psychologically separable or integral. We demonstrate that our methods produce sensible results, but note a number of remaining technical challenges that need to be solved before the method can easily and generally be applied. We also note the theoretical promise of the generative modeling perspective, discussing new and extended models of MDS representation that could be developed.
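The metric-structure question in the abstract above comes down to the Minkowski r parameter of the MDS distance model: r = 1 (city-block) is conventionally associated with psychologically separable dimensions, and r = 2 (Euclidean) with integral ones. A minimal sketch of that distance calculation, using hypothetical stimulus coordinates rather than the paper's data sets:

```python
# Minkowski distance between two points in an MDS representation.
# r = 1 gives the city-block metric (separable stimulus dimensions);
# r = 2 gives the Euclidean metric (integral stimulus dimensions).
def minkowski(p, q, r):
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1.0 / r)

a, b = (0.0, 0.0), (1.0, 1.0)   # hypothetical stimulus coordinates
print(minkowski(a, b, 1))       # city-block distance: 2.0
print(minkowski(a, b, 2))       # Euclidean distance: ~1.414
```

The same pair of points can thus be nearer or farther apart depending on the assumed metric, which is why the choice of r is treated as a model-selection problem in the abstract.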
Publisher: SERDI
Date: 2021
Abstract: Background: Recent Alzheimer’s disease (AD) trials have faced significant challenges to enroll pre-symptomatic or early stage AD subjects with biomarker positivity, minimal or no cognitive impairment, and likelihood to decline cognitively during a short trial period. Our previous study showed that digital cognitive biomarkers (DCB), generated by a hierarchical Bayesian cognitive process (HBCP) model, were able to distinguish groups of cognitively normal individuals with impending cognitive decline from those without. We generated DCBs using only baseline Auditory Verbal Learning Test’s wordlist memory (WLM) item response data from the Mayo Clinic Alzheimer’s Disease Patient Registry. Objectives: To replicate our previous findings, using baseline ADAS-Cog WLM item response data from the Alzheimer’s Disease Neuroimaging Initiative, and compare DCBs to traditional approaches for scoring word-list memory tests. Design: Classified decliner subjects (n = 61) as those who developed amnestic MCI or AD dementia within 3 years of normal baseline assessment and non-decliners (n = 442) as those who did not. Measures: Evaluated the relative value of DCBs compared to traditional measures, using three analytic approaches to group differences: 1) logistic regression of summary scores per ADAS-Cog WLM task, 2) Bayesian modeling of summary scores, and 3) HBCP modeling to generate DCBs from item-level responses. Results: The HBCP model produced posterior distributions of group differences, of which Bayes factor assessment identified three DCBs with notable group differences: Immediate Retrieval from Durable Storage (BFds = 11.8, strong evidence), One-Shot Learning (BFds = 4.5, moderate evidence), and Partial Learning (BFds = 2.9, weak evidence).
In contrast, logistic regression of summary scores did not significantly discriminate between groups, and the Bayes factor assessment of modeled summary scores provided moderate evidence that the groups were equivalent (BFsd = 3.4, 3.1, 2.9, and 1.4, respectively). Conclusions: This study demonstrated DCBs' ability to distinguish, at baseline, between impending cognitive decline and non-decline groups in which individuals in both groups were classified as cognitively normal. This validated findings from our previous study, demonstrating DCBs' advantages over traditional approaches. This study warrants further refinement of the HBCP DCBs to predict impending cognitive decline in individuals and other factors associated with AD, such as physical biomarker load.
Publisher: SPE
Date: 26-09-2004
DOI: 10.2118/90338-MS
Abstract: The Oil and Gas industry has a poor record when it comes to accurately assessing uncertainties. An assessment of the risks that arise from these uncertainties is a major factor influencing investment decisions in the O&G industry. Thus, throughout the industry, experts’ understanding of the probabilities of uncertain events is elicited via a range of methods. However, industry sources continue to report ‘surprise’ values (outside the P(80) range) far more often than the 20% indicated by this interval. This suggests problems in the elicitation process: either experts’ beliefs do not capture the distributions they are attempting to define, or the processes used in the industry are failing to elicit the experts’ subjective probability distributions (SPDs) accurately. The authors discuss the ways in which experts’ subjective beliefs about the probability of events are commonly elicited in the industry, and the biases expected to be observed therein, in light of psychological findings and theories of decision-making. Special note is made of the limitations imposed on decision-making by the nature of human short-term memory and predictable biases resulting from reliance on heuristic (rule-of-thumb) reasoning techniques. The results of an experiment comparing two commonly used elicitation techniques with one, designed by the authors, which utilizes heuristic reasoning as a strength rather than a weakness, are presented and discussed. Accuracy, both in terms of subjective and objective measures, was improved on our experimental tasks by use of the new More-or-Less Elicitation (MOLE) technique over both of the currently used methods. We conclude that an approach to elicitation that recognizes the tendency of people to use heuristic reasoning, rather than forces them to reason in probabilistic, non-intuitive ways, may yield superior results.
Finally, we discuss possible refinements to the MOLE process and further research required to elucidate the impacts of various biases on elicited distributions.
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/A0030971
Abstract: The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context of free recall data, however, there is a previously unreported conceptual error in the specification of the SIMPLE model. We show that the error matters not only in theory but also in practice by reapplying the corrected SIMPLE model to the benchmark data reported by Murdock (1962). The corrected model makes different predictions about serial position curves, shows better fit to the Murdock (1962) data, and infers different parameters that require substantively different psychological interpretation.
Publisher: Springer Science and Business Media LLC
Date: 31-07-2019
DOI: 10.3758/S13428-018-1087-7
Abstract: Human behavioral data often show patterns of sudden change over time. Sometimes the causes of these step changes are internal, such as learning curves changing abruptly when a learner implements a new rule. Sometimes the cause is external, such as people's opinions about a topic changing in response to a new relevant event. Detecting change points in sequences of binary data is a basic statistical problem with many existing solutions, but these solutions rarely seem to be used in psychological modeling. We develop a simple and flexible Bayesian approach to modeling step changes in cognition, implemented as a graphical model in JAGS. The model is able to infer how many change points are justified by the data, as well as the locations of the change points. The basic model is also easily extended to include latent-mixture and hierarchical structures, allowing it to be tailored to specific cognitive modeling problems. We demonstrate the adequacy of this basic model by applying it to the classic Lindisfarne scribes problem, and the flexibility of the modeling approach is demonstrated through two new applications. The first involves a latent-mixture model to determine whether individuals learn categories incrementally or in discrete stages. The second involves a hierarchical model of crowd-sourced predictions about the winner of the US National Football League's Most Valuable Player for the 2016-2017 season.
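The core inference described in this abstract, locating a change point in a binary sequence, can be sketched without JAGS. This is a minimal illustration rather than the paper's graphical model: it computes the exact posterior over a single change-point location, using Beta-Bernoulli marginal likelihoods and a uniform prior over locations.

```python
from math import lgamma, exp

def log_beta_bernoulli(k, n, a=1.0, b=1.0):
    # Log marginal likelihood of k successes in n Bernoulli trials
    # under a Beta(a, b) prior on the success rate.
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + k) + lgamma(b + n - k) - lgamma(a + b + n))

def change_point_posterior(data):
    # Posterior over the location of a single change point, assuming a
    # uniform prior over locations t = 1 .. n-1 (split before index t).
    n = len(data)
    log_post = []
    for t in range(1, n):
        left, right = data[:t], data[t:]
        log_post.append(log_beta_bernoulli(sum(left), len(left))
                        + log_beta_bernoulli(sum(right), len(right)))
    m = max(log_post)
    w = [exp(lp - m) for lp in log_post]   # exponentiate stably
    z = sum(w)
    return [wi / z for wi in w]

data = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]      # rate jumps after the 5th trial
post = change_point_posterior(data)
print(max(range(len(post)), key=post.__getitem__) + 1)  # → 5
```

Extending this to an unknown number of change points, or to hierarchical and latent-mixture structure, is what motivates the JAGS implementation the abstract describes.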
Publisher: Springer Science and Business Media LLC
Date: 28-01-2019
Publisher: Elsevier
Date: 2015
Publisher: Springer Science and Business Media LLC
Date: 20-01-2023
Publisher: Elsevier BV
Date: 12-2022
Publisher: Walter de Gruyter GmbH
Date: 1997
Publisher: Springer Science and Business Media LLC
Date: 08-2005
DOI: 10.1007/S10729-005-2013-Y
Abstract: There is growing concern that current health care services are not sustainable. The compartmental flow model provides the opportunity for improved decision-making about bed occupancy, particularly decisions of a strategic nature. This modelling can be applied to complement infrastructure and workforce-planning methods. Discussion about the appropriate level of model complexity, the degree of fit and the ability to use compartmental flow models for generalization and forecasting has been lacking. The authors investigated model selection and assessment in relation to hospital bed compartment flow models. A compartment model for a range of scenarios was created. The training and test data related to the 1998 and 1999 calendar years, respectively. Most of the scenarios tested were based upon commonly used periods of time. The goodness-of-fit achieved by optimisation was measured against the training and test data. Model fit improved with increasing complexity, as expected. The analysis of model fit against the test data showed that increasing model complexity did result in over-fitting, and better prediction was achieved with a relatively simple model. In terms of generalisation, the seasonal models performed best. Single day census type models, which have been used by Millard and his colleagues, were also generated. The performance of these models was similar, but inferior, to that of the models generated from a full year of training data. The additional data make the models better able to capture the variation in activity across the year.
Publisher: Springer Science and Business Media LLC
Date: 31-03-2017
DOI: 10.3758/S13428-017-0879-5
Abstract: People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
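The censored geometric idea can be sketched directly. In this hypothetical illustration (not the authors' hierarchical Bayesian implementation), theta is a fixed per-item quitting probability, collections that reach the maximum number of available items are treated as censored, and theta is estimated by maximum likelihood over a grid:

```python
from math import log

def log_likelihood(theta, counts, n_max):
    # Censored geometric model of how many items people collect:
    # quitting after k < n_max items contributes a geometric term,
    # reaching n_max items is censored (the person may have continued).
    ll = 0.0
    for k in counts:
        if k < n_max:                       # voluntary quit after k items
            ll += (k - 1) * log(1 - theta) + log(theta)
        else:                               # forced quit: items exhausted
            ll += (n_max - 1) * log(1 - theta)
    return ll

counts = [2, 3, 1, 10, 4, 10, 2, 5]         # hypothetical data, n_max = 10
grid = [i / 1000 for i in range(1, 1000)]    # grid search for the MLE
theta_hat = max(grid, key=lambda th: log_likelihood(th, counts, 10))
print(theta_hat)                             # → 0.171 (close to 6/35)
```

The two censored observations (the 10s) pull the estimated quitting probability down relative to treating every count as a voluntary quit, which is exactly the untangling the abstract refers to.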
Publisher: Wiley
Date: 05-12-2012
DOI: 10.1111/J.1551-6709.2011.01212.X
Abstract: Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
Publisher: American Psychological Association (APA)
Date: 10-2018
DOI: 10.1037/DEC0000081
Publisher: Springer Science and Business Media LLC
Date: 04-2004
DOI: 10.3758/BF03196581
Abstract: An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates as soon as any evidence in favor of a decision is found and the rational approach as a sequential-sampling process that terminates only when all available information has been assessed. The unified TTB and RAT models were tested in an experiment in which participants learned to make correct judgments for a set of real-world stimuli on the basis of feedback, and were then asked to make additional judgments without feedback for cases in which the TTB and the rational models made different predictions. The results show that, in both experiments, there was strong intraparticipant consistency in the use of either the TTB or the rational model but large interparticipant differences in which model was used. The unified model is shown to be able to capture the differences in decision making across participants in an interpretable way and is preferred by the minimum description length model selection criterion.
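As a concrete illustration of why TTB and a rational weighted-additive rule can make different predictions, here is a minimal sketch of both decision rules; the cue values and validities are hypothetical, not the experiment's stimuli:

```python
def take_the_best(cues_a, cues_b, validities):
    # TTB: examine cues in order of validity and decide on the first
    # cue that discriminates between the two alternatives.
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] else "B"
    return "tie"

def weighted_additive(cues_a, cues_b, validities):
    # Rational rule: use all cues, weighted by validity, then compare.
    score = sum(v * (a - b) for v, a, b in zip(validities, cues_a, cues_b))
    return "A" if score > 0 else ("B" if score < 0 else "tie")

validities = [0.9, 0.7, 0.6, 0.55]   # hypothetical cue validities
a = [1, 0, 0, 0]                     # A wins only the most valid cue
b = [0, 1, 1, 1]                     # B wins all the lesser cues
print(take_the_best(a, b, validities))      # "A": top cue decides alone
print(weighted_additive(a, b, validities))  # "B": lesser cues outweigh it
```

Cases like this, where the two rules disagree, are exactly the test items the experiment in the abstract uses to diagnose which strategy a participant follows.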
Publisher: Springer Science and Business Media LLC
Date: 18-12-2011
Publisher: SAGE Publications
Date: 05-2011
Abstract: Statistical inference in psychology has traditionally relied heavily on p-value significance testing. This approach to drawing conclusions from data, however, has been widely criticized, and two types of remedies have been advocated. The first proposal is to supplement p values with complementary measures of evidence, such as effect sizes. The second is to replace inference with Bayesian measures of evidence, such as the Bayes factor. The authors provide a practical comparison of p values, effect sizes, and default Bayes factors as measures of statistical evidence, using 855 recently published t tests in psychology. The comparison yields two main results. First, although p values and default Bayes factors almost always agree about what hypothesis is better supported by the data, the measures often disagree about the strength of this support: for 70% of the data sets for which the p value falls between .01 and .05, the default Bayes factor indicates that the evidence is only anecdotal. Second, effect sizes can provide additional evidence to p values and default Bayes factors. The authors conclude that the Bayesian approach is comparatively prudent, preventing researchers from overestimating the evidence in favor of an effect.
Publisher: Cambridge University Press
Date: 03-04-2014
Abstract: Bayesian inference has become a standard method of analysis in many fields of science. Students and researchers in experimental psychology and cognitive science, however, have failed to take full advantage of the new and exciting possibilities that the Bayesian approach affords. Ideal for teaching and self-study, this book demonstrates how to do Bayesian modeling. Short, to-the-point chapters offer examples, exercises, and computer code (using WinBUGS or JAGS, and supported by Matlab and R), with additional support available online. No advance knowledge of statistics is required and, from the very start, readers are encouraged to apply and adjust Bayesian analyses by themselves. The book contains a series of chapters on parameter estimation and model selection, followed by detailed case studies from cognitive science. After working through this book, readers should be able to build their own Bayesian models, apply the models to their own data, and draw their own conclusions.
Publisher: Springer Science and Business Media LLC
Date: 07-08-2012
DOI: 10.3758/S13423-012-0300-4
Abstract: Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important--but under-used and incorrectly maligned--method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.
Publisher: Springer Science and Business Media LLC
Date: 08-2010
DOI: 10.3758/BRM.42.3.884
Publisher: Frontiers Media SA
Date: 09-12-2014
Publisher: Springer Science and Business Media LLC
Date: 2002
Publisher: IEEE Comput. Soc
Date: 1998
Publisher: Wiley
Date: 06-05-2006
DOI: 10.1207/S15516709COG0000_69
Abstract: We consider human performance on an optimal stopping problem where people are presented with a list of numbers independently chosen from a uniform distribution. People are told how many numbers are in the list, and how they were chosen. People are then shown the numbers one at a time, and are instructed to choose the maximum, subject to the constraint that they must choose a number at the time it is presented, and any choice below the maximum is incorrect. We present empirical evidence that suggests people use threshold-based models to make decisions, choosing the first currently maximal number that exceeds a fixed threshold for that position in the list. We then develop a hierarchical generative account of this model family, and use Bayesian methods to learn about the parameters of the generative process, making inferences about the threshold decision models people use. We discuss the interesting aspects of human performance on the task, including the lack of learning, and the presence of large individual differences, and consider the possibility of extending the modeling framework to account for individual differences. We also use the modeling results to discuss the merits of hierarchical, generative and Bayesian models of cognitive processes more generally.
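The threshold-based model family the abstract describes can be sketched in a few lines. This simulation uses hypothetical threshold values, not parameters inferred in the paper: a decision maker accepts the first currently maximal number that exceeds the threshold for its position, and we estimate how often that rule secures the true maximum.

```python
import random

def threshold_choice(values, thresholds):
    # Accept the first value that is both the maximum so far and
    # above that position's threshold; otherwise take the last value.
    best_so_far = float("-inf")
    for i, v in enumerate(values):
        if v >= best_so_far:
            best_so_far = v
            if v >= thresholds[i]:
                return i
    return len(values) - 1

random.seed(1)
thresholds = [0.85, 0.8, 0.7, 0.55, 0.0]   # declining, last position = 0
trials, wins = 10000, 0
for _ in range(trials):
    values = [random.random() for _ in range(5)]
    pick = threshold_choice(values, thresholds)
    wins += values[pick] == max(values)
print(wins / trials)   # proportion of lists where the maximum was chosen
```

Inferring each person's thresholds from their choices, rather than fixing them as done here, is the hierarchical Bayesian step the abstract describes.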
Publisher: Cambridge University Press (CUP)
Date: 14-05-2013
DOI: 10.1017/S0140525X12003020
Abstract: Faced with probabilistic relationships between causes and effects, quantum theory assumes that deterministic causes do not exist, and that only incomplete probabilistic expressions of knowledge are possible. As in its application to physics, this fundamental epistemological stance severely limits the ability of quantum theory to provide insight and understanding in human cognition.
Publisher: Elsevier BV
Date: 06-2011
Publisher: Springer Science and Business Media LLC
Date: 08-04-2021
Publisher: Springer Science and Business Media LLC
Date: 25-09-2023
Publisher: American Psychological Association (APA)
Date: 06-2023
DOI: 10.1037/MET0000454
Abstract: The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
Publisher: Wiley
Date: 19-03-2014
DOI: 10.1111/COGS.12119
Abstract: In most decision-making situations, there is a plethora of information potentially available to people. Deciding what information to gather and what to ignore is no small feat. How do decision makers determine in what sequence to collect information and when to stop? In two experiments, we administered a version of the German cities task developed by Gigerenzer and Goldstein (1996), in which participants had to decide which of two cities had the larger population. Decision makers were not provided with the names of the cities, but they were able to collect different kinds of cues for both response alternatives (e.g., "Does this city have a university?") before making a decision. Our experiments differed in whether participants were free to determine the number of cues they examined. We demonstrate that a novel model, using hierarchical latent mixtures and Bayesian inference (Lee & Newell, ) provides a more complete description of the data from both experiments than simple conventional strategies, such as the take-the-best or the Weighted Additive heuristics.
Publisher: Purdue University (bepress)
Date: 29-06-2007
Publisher: Center for Open Science
Date: 19-08-2019
Abstract: Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
Publisher: American Psychological Association (APA)
Date: 06-2010
DOI: 10.1037/A0017182
Abstract: The purpose of the recently proposed prep statistic is to estimate the probability of concurrence, that is, the probability that a replicate experiment yields an effect of the same sign (Killeen, 2005a). The influential journal Psychological Science endorses prep and recommends its use over that of traditional methods. Here we show that prep overestimates the probability of concurrence. This is because prep was derived under the assumption that all effect sizes in the population are equally likely a priori. In many situations, however, it is advisable also to entertain a null hypothesis of no or approximately no effect. We show how the posterior probability of the null hypothesis is sensitive both to a priori considerations and to the evidence provided by the data; the higher the posterior probability of the null hypothesis, the smaller the probability of concurrence. When the null hypothesis and the alternative hypothesis are equally likely a priori, prep may overestimate the probability of concurrence by 30% or more. We conclude that prep provides an upper bound on the probability of concurrence, a bound that brings with it the danger of having researchers believe that their experimental effects are much more reliable than they actually are.
Publisher: Springer Science and Business Media LLC
Date: 04-2009
DOI: 10.3758/PBR.16.2.424
Publisher: Elsevier BV
Date: 08-2010
Publisher: MDPI AG
Date: 25-11-2020
Abstract: Depression is a debilitating disorder with high prevalence and socioeconomic cost, but the brain-physiological processes that are altered during depressive states are not well understood. Here, we build on recent findings in macaques that indicate a direct causal relationship between pupil dilation and anterior cingulate cortex mediated arousal during anticipation of reward. We translated these findings to human subjects with concomitant pupillometry/fMRI in a sample of unmedicated participants diagnosed with major depression and healthy controls. We show that the upregulation and maintenance of arousal in anticipation of reward was disrupted in patients in a symptom-load dependent manner. We further show that the failure to maintain reward anticipatory arousal had state-marker properties, as it tracked the load and impact of depressive symptoms independent of prior diagnosis status. Moreover, group differences in anticipatory arousal and continuous correlations with symptom load were not only traceable at the level of pupillometric responses but were also mirrored at the neural level within salience network hubs. The upregulation and maintenance of arousal during reward anticipation is a novel, translational, and well-traceable process that could prove a promising gateway to a physiologically informed patient stratification and targeted interventions.
Publisher: SPE
Date: 11-09-2006
DOI: 10.2118/100699-MS
Abstract: Business under-performance in the upstream oil and gas industry, and the failure of many decisions to return expected results, has led to a growing interest over the past few years in understanding the impacts of current decision-making tools and processes and their relationship with decision outcomes. Improving oil and gas decision-making is thus increasingly seen as reliant on an understanding of what types of decisions are involved, how they should be made in order to be optimal, and how they actually are made in the "real world". There has been significant work carried out within the discipline of cognitive psychology, observing how people actually make decisions. However, little is known as to whether these general observations apply to decision-making in the upstream oil and gas industry. Nor has there been work on how the results might be used to improve decision-making in the industry. This paper documents the development of a theoretical Oil and Gas Decision Making Taxonomy (OGDMT) that seeks to lay a "level playing field" decision space within which to judge the processes and tools of optimal decision-making as the first step in this research. The OGDMT builds on established ideas in the human decision-making literature, but is itself novel, and involves four different dimensions: level of investigation, task constraint, value function, and the information structure of the environment. It is concluded that decision scenarios at different places in the taxonomy will likely involve different decision-making tools, data and processes for the achievement of optimal decision-making. The results of this work can be applied, for example, to the question of whether decisions about reserves should be made using deterministic or probabilistic tools, data and processes.
Publisher: Center for Open Science
Date: 20-10-2020
Abstract: The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
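As a minimal illustration of a point-null Bayes factor of the kind the review surveys, the following sketch tests theta = 0.5 against a uniform alternative for binomial data; the uniform prior is an illustrative choice, not a recommendation from the paper:

```python
# Hedged sketch: Bayes factor for H0: theta = 0.5 versus H1: theta ~ Uniform(0, 1)
# given k successes in n binomial trials. The uniform prior is illustrative.
from math import comb

def bf01_binomial(k, n):
    """Bayes factor in favour of the point null over the uniform alternative."""
    m0 = comb(n, k) * 0.5 ** n  # likelihood of the data under H0
    m1 = 1.0 / (n + 1)          # marginal likelihood under Uniform(0,1)
    return m0 / m1

# Data near a 50/50 split mildly favour the null; extreme data favour H1.
print(bf01_binomial(60, 100))
print(bf01_binomial(90, 100))
```

The closed form for the alternative's marginal likelihood (1/(n+1)) follows from integrating the binomial likelihood against the uniform prior.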
Publisher: Cambridge University Press (CUP)
Date: 03-1997
DOI: 10.1017/S0140525X97460016
Abstract: Glenberg's account falls short in several respects. Besides requiring clearer explication of basic concepts, his account fails to recognize the autonomous nature of perception. His account of what is remembered, and its description, is too static. His strictures against connectionist modeling might be overcome by combining the notions of psychological space and principled learning in an embodied and situated network.
Publisher: Cambridge University Press (CUP)
Date: 08-2011
DOI: 10.1017/S0140525X11000343
Abstract: Jones & Love (J& L) should have given more attention to Agnostic uses of Bayesian methods for the statistical analysis of models and data. Reliance on the frequentist analysis of Bayesian models has retarded their development and prevented their full evaluation. The Ecumenical integration of Bayesian statistics to analyze Bayesian models offers a better way to test their inferential and predictive capabilities.
Publisher: Elsevier BV
Date: 07-2009
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/DEC0000019
Publisher: Springer Science and Business Media LLC
Date: 10-2015
DOI: 10.3758/S13428-014-0517-4
Abstract: The power fallacy refers to the misconception that what holds on average, across an ensemble of hypothetical experiments, also holds for each case individually. According to the fallacy, high-power experiments always yield more informative data than do low-power experiments. Here we expose the fallacy with concrete examples, demonstrating that a particular outcome from a high-power experiment can be completely uninformative, whereas a particular outcome from a low-power experiment can be highly informative. Although power is useful in planning an experiment, it is less useful, and sometimes even misleading, for making inferences from observed data. To make inferences from data, we recommend the use of likelihood ratios or Bayes factors, which are the extension of likelihood ratios beyond point hypotheses. These methods of inference do not average over hypothetical replications of an experiment, but instead condition on the data that have actually been observed. In this way, likelihood ratios and Bayes factors rationally quantify the evidence that a particular data set provides for or against the null or any other hypothesis.
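The contrast between power and evidence can be made concrete with a likelihood ratio for two point hypotheses; the hypotheses and counts below are fabricated for illustration:

```python
# Sketch of the likelihood-ratio reasoning in the abstract: the evidence in a
# particular data set depends only on that data set, not on the design's power.
# The point hypotheses (theta = 0.7 vs theta = 0.5) are illustrative.
from math import comb

def likelihood_ratio(k, n, p1=0.7, p0=0.5):
    """LR for H1 (theta = p1) over H0 (theta = p0), k successes in n trials."""
    l1 = comb(n, k) * p1 ** k * (1 - p1) ** (n - k)
    l0 = comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
    return l1 / l0

# A small (low-power) experiment can still yield a diagnostic outcome...
print(likelihood_ratio(9, 10))    # 9/10 successes: strong evidence for H1
# ...while a large (high-power) experiment can yield an uninformative one.
print(likelihood_ratio(60, 100))  # near-ambiguous split: LR close to 1
```

The likelihood ratio conditions on the observed counts, so a 10-trial experiment can carry more evidence than a 100-trial one, which is exactly the point of the power fallacy.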
Publisher: Center for Open Science
Date: 08-08-2023
Abstract: The circular drift-diffusion model (CDDM) is a sequential sampling model designed to account for decisions and response times in decision-making tasks with a circular set of choice alternatives. We present and demonstrate a fully Bayesian implementation and extension of the CDDM. This development allows researchers to apply the CDDM to data from complex experiments and draw conclusions about targeted hypotheses. The Bayesian implementation relies on a custom JAGS module. We describe the module and demonstrate its adequacy through a simulation study. We then illustrate the advantages of the implementation by revisiting data from a continuous orientation judgment task. We develop a graphical model for the analysis that is based on the CDDM, but extends it with hierarchical and latent-mixture structures. We then demonstrate how these extensions are used to accommodate the design of the experiment and to implement psychological assumptions about individual differences, the difficulty of different stimulus conditions, and the impact of cues on decision making. Finally, we demonstrate how the computational Bayesian inference enabled by JAGS allows these assumptions to be tested and addresses psychological research questions about people's decision making.
Publisher: Springer Science and Business Media LLC
Date: 2000
DOI: 10.3758/BF03212074
Abstract: MacGregor and Ormerod (1996) have presented results purporting to show that human performance on visually presented traveling salesman problems, as indexed by a measure of response uncertainty, is strongly determined by the number of points in the stimulus array falling inside the convex hull, as distinct from the total number of points. It is argued that this conclusion is artifactually determined by their constrained procedure for stimulus construction, and, even if true, would be limited to arrays with fewer than around 50 points.
Publisher: American Psychological Association (APA)
Date: 07-2023
DOI: 10.1037/REV0000421
Publisher: American Psychological Association (APA)
Date: 11-2022
DOI: 10.1037/XLM0001105
Abstract: Much recent research and theorizing in the field of reasoning has been concerned with intuitive sensitivity to logical validity, such as the logic-brightness effect, in which logically valid arguments are judged to have a "brighter" typeface than invalid arguments. We propose and test a novel signal competition account of this phenomenon. Our account makes two assumptions: (a) as per the demands of the logic-brightness task, people attempt to find a perceptual signal to guide brightness judgments, but (b) when the perceptual signal is hard to discern, they instead attend to cues such as argument validity. Experiment 1 tested this account by manipulating the difficulty of the perceptual contrast. When contrast discrimination was relatively difficult, we replicated the logic-brightness effect. When the discrimination was easy, the effect was eliminated. Experiment 2 manipulated the ambiguity of the perceptual task, comparing discrimination performance when the perceptual contrast was labeled in terms of rating "brightness" or "darkness". When the less ambiguous darkness labeling was used, there was no evidence of a logic-brightness effect. In both experiments, individual sensitivity to the perceptual discrimination was negatively correlated with sensitivity to argument validity. Hierarchical latent mixture modeling revealed distinct individual strategies: responses based on perceptual cues, responses based on validity, or guessing. Consistent with the signal competition account, the proportion of those responding to validity increased with perceptual discrimination difficulty or task ambiguity. The results challenge explanations of the logic-brightness effect based on parallel dual-process models of reasoning.
Publisher: Cambridge University Press (CUP)
Date: 08-2001
DOI: 10.1017/S0140525X0149008X
Abstract: While Tenenbaum and Griffiths impressively consolidate and extend Shepard's research in the areas of stimulus representation and generalization, there is a need for complexity measures to be developed to control the flexibility of their “hypothesis space” approach to representation. It may also be possible to extend their concept learning model to consider the fundamental issue of representational adaptation. [Tenenbaum & Griffiths]
Publisher: SAGE Publications
Date: 07-2003
DOI: 10.1068/P3416
Abstract: The planar Euclidean version of the travelling salesperson problem (TSP) requires finding a tour of minimal length through a two-dimensional set of nodes. Despite the computational intractability of the TSP, people can produce rapid, near-optimal solutions to visually presented versions of such problems. To explain this, MacGregor et al. (1999, Perception, 28, 1417–1428) have suggested that people use a global-to-local process, based on a perceptual tendency to organise stimuli into convex figures. We review the evidence for this idea and propose an alternative, local-to-global hypothesis, based on the detection of least distances between the nodes in an array. We present the results of an experiment in which we examined the relationships between three objective measures and performance measures of optimality and response uncertainty in tasks requiring participants to construct a closed tour or an open path. The data are not well accounted for by a process based on the convex hull. In contrast, results are generally consistent with a locally focused process based initially on the detection of nearest-neighbour clusters. Individual differences are interpreted in terms of a hierarchical process of constructing solutions, and the findings are related to a more general analysis of the role of nearest neighbours in the perception of structure and motion.
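The local-to-global, nearest-neighbour idea can be sketched with the generic greedy heuristic; the coordinates below are made up, and this is the textbook heuristic rather than the authors' full hierarchical account:

```python
# Sketch of the nearest-neighbour idea behind the local-to-global hypothesis:
# build a closed tour by repeatedly moving to the nearest unvisited node.
# The coordinates are fabricated for illustration.
from math import dist

def nearest_neighbour_tour(points, start=0):
    """Greedy tour over 2-D points, closing back to the start node."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]  # closed tour

points = [(0, 0), (5, 1), (1, 1), (4, 4), (0, 3)]
tour = nearest_neighbour_tour(points)
print(tour)
```

The heuristic is not optimal in general, but it illustrates how a purely local rule (least distances between nodes) can assemble a globally reasonable tour.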
Publisher: Springer Science and Business Media LLC
Date: 27-01-2020
Publisher: Elsevier BV
Date: 11-2007
Publisher: Wiley
Date: 12-11-2006
DOI: 10.1207/S15516709COG0000_71
Abstract: We study human decision making in a simple forced-choice task that manipulates the frequency and accuracy of available information. Empirically, we find that people make decisions consistent with the advice provided, but that their subjective confidence in their decisions shows two interesting properties. First, people's confidence does not depend solely on the accuracy of the advice. Rather, confidence seems to be influenced by both the frequency and accuracy of the advice. Second, people are less confident in their guessed decisions when they have to make relatively more of them. Theoretically, we develop and evaluate a type of sequential sampling process model, known as a self-regulating accumulator, that accounts for both decision making and confidence. The model captures the regularities in people's behavior with interpretable parameter values, and we show its ability to fit the data is not due to excessive model complexity. Using the model, we draw conclusions about some properties of human reasoning under uncertainty.
Publisher: Springer Science and Business Media LLC
Date: 19-10-2021
Publisher: Elsevier BV
Date: 08-2010
Publisher: Elsevier BV
Date: 06-2009
Publisher: Purdue University (bepress)
Date: 24-07-2008
Publisher: Springer Science and Business Media LLC
Date: 12-2004
DOI: 10.3758/BF03196728
Abstract: Featural representations of similarity data assume that people represent stimuli in terms of a set of discrete properties. In this article, we consider the differences in featural representations that arise from making four different assumptions about how similarity is measured. Three of these similarity models--the common features model, the distinctive features model, and Tversky's seminal contrast model--have been considered previously. The other model is new and modifies the contrast model by assuming that each individual feature only ever acts as a common or distinctive feature. Each of the four models is tested on previously examined similarity data, relating to kinship terms, and on a new data set, relating to faces. In fitting the models, we have used the geometric complexity criterion to balance the competing demands of data-fit and model complexity. The results show that both common and distinctive features are important for stimulus representation, and we argue that the modified contrast model combines these two components in a more effective and interpretable way than Tversky's original formulation.
Publisher: Wiley
Date: 2010
Publisher: Wiley
Date: 08-2008
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2013
Publisher: SPE
Date: 18-10-2004
DOI: 10.2118/88511-MS
Abstract: The oil industry loses an estimated US$30 billion, or about 25% of its annual upstream expenditure, due to poor decision outcomes. This provides a strong incentive to study decision improvement initiatives. While there has been a significant improvement in decision-making, it has been slow and in small steps. Perhaps a paradigm shift may be necessary to cope with an environment that is likely to become even harsher in future. We believe industry decision experts have overworked the technical and economic factors while largely ignoring individual expertise, which is the third key factor in improving decision outcomes under conditions of uncertainty in the real world environment. Most failure stories can be largely ascribed to the lack of application of appropriate expertise in judgements. This paper questions the common industry practice of placing decision-making responsibility according to organizational hierarchy rather than expertise. In this first part of a continuing study, relying on two experiments, the paper illustrates two points. First, decision-makers invariably superimpose their experience and expertise onto the results of normative tools and adjust them. This makes it imperative that they possess the requisite expertise. Second, we argue it is possible to develop a methodology to distinguish experts from intermediates and novices in the oil and gas industry. We assert that while years of experience help build expertise, on their own they are an unreliable indicator of the level of expertise.
Publisher: Elsevier BV
Date: 03-2010
DOI: 10.1016/J.ACTPSY.2009.07.014
Abstract: We develop a model for finding the features that represent a set of stimuli, and apply it to the Leuven Concept Database. The model combines the feature generation and similarity judgment task data, inferring whether each of the generated features is important for explaining the patterns of similarity between stimuli. Across four datasets, we show that features range from being very important to very unimportant, and that a small subset of important features is adequate to describe the similarities. We also show that the features inferred to be more important are intuitively reasonable, and present analyses showing that important features tend to focus on narrow sets of stimuli, providing information about the category structures that organize the stimuli into groups.
Publisher: Frontiers Media SA
Date: 10-01-2022
Abstract: The proposed method is a modified and improved version of the existing "Allele-specific q-PCR" (ASQ) method for genotyping of single nucleotide polymorphisms (SNPs) based on fluorescence resonance energy transfer (FRET). This method is similar to frequently used techniques like Amplifluor and Kompetitive allele specific PCR (KASP), as well as others employing common universal probes (UPs) for SNP analyses. In the proposed ASQ method, the fluorophores and quencher are located in separate complementary oligonucleotides. The ASQ method is based on the simultaneous presence in PCR of the following two components: an allele-specific mixture (allele-specific and common primers) and a template-independent detector mixture that contains two or more (up to four) universal probes (UP-1 to 4) and a single universal quencher oligonucleotide (Uni-Q). The SNP site is positioned preferably at a penultimate base in each allele-specific primer, which increases the reaction specificity and allele discrimination. The proposed ASQ method is advanced in providing a very clear and effective measurement of the fluorescence emitted, with very low signal background-noise, and simple procedures convenient for customized modifications and adjustments. Importantly, this ASQ method is estimated as two- to ten-fold cheaper than Amplifluor and KASP, and much cheaper than all those methods that rely on dual-labeled probes without universal components, like TaqMan and Molecular Beacons. Results for SNP genotyping in the barley genes HvSAP16 and HvSAP8, which encode stress-associated proteins, are presented as proven and validated examples. This method is suitable for bi-allelic uniplex reactions but it can potentially be used for 3- or 4-allelic variants or different SNPs in a multiplex format in a range of applications including medical, forensic, or others involving SNP genotyping.
Publisher: California Digital Library (CDL)
Date: 31-03-2018
Abstract: We consider the recently-developed "surprisingly popular" method for aggregating decisions across a group of people (Prelec et al. 2017). The method has shown impressive performance in a range of decision-making situations, but typically for situations in which the correct answer is already established. We consider the ability of the surprisingly popular method to make predictions, in a situation where the correct answer does not exist at the time people are asked to make decisions. Specifically, we tested its ability to predict the winners of the 256 US National Football League (NFL) games in the 2017--2018 season. Each of these predictions used participants who self-rated as "extremely knowledgeable" about the NFL, drawn from a set of 100 participants recruited through Amazon Mechanical Turk (AMT). We compare the accuracy and calibration of the surprisingly popular method to a variety of alternatives: the mode and confidence-weighted predictions of the expert AMT participants, the individual and aggregated predictions of media experts, and a statistical Elo method based on the performance histories of the NFL teams. We find that the surprisingly popular method outperforms all of these alternatives, and has reasonable calibration properties relating the confidence of predictions to accuracy.
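The surprisingly popular rule itself is simple to state: select the answer whose actual vote share most exceeds the share respondents predicted for it. A sketch with fabricated votes and predictions:

```python
# Sketch of the surprisingly popular rule (Prelec et al., 2017): pick the
# answer whose actual vote share exceeds the average share respondents
# predicted it would get. Votes and predictions below are fabricated.

def surprisingly_popular(votes, predicted_shares):
    """votes: list of chosen options; predicted_shares: per-respondent dicts
    giving the share of the crowd each respondent expects to pick each option."""
    options = set(votes)
    n = len(votes)
    best, best_margin = None, float("-inf")
    for opt in options:
        actual = votes.count(opt) / n
        predicted = sum(p[opt] for p in predicted_shares) / len(predicted_shares)
        margin = actual - predicted  # "surprising popularity"
        if margin > best_margin:
            best, best_margin = opt, margin
    return best

# A minority answer can win if it is more popular than respondents expected.
votes = ["Eagles", "Eagles", "Patriots", "Patriots", "Patriots"]
preds = [{"Eagles": 0.2, "Patriots": 0.8}] * 5
print(surprisingly_popular(votes, preds))  # "Eagles": 0.4 actual vs 0.2 predicted
```

The size of the winning margin can also serve as a confidence signal, which is how the calibration analysis described in the abstract becomes possible.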
Publisher: Wiley
Date: 23-01-2012
DOI: 10.1111/J.1551-6709.2011.01223.X
Abstract: The "wisdom of the crowd" phenomenon refers to the finding that the aggregate of a set of proposed solutions from a group of individuals performs better than the majority of individual solutions. Most often, wisdom of the crowd effects have been investigated for problems that require single numerical estimates. We investigate whether the effect can also be observed for problems where the answer requires the coordination of multiple pieces of information. We focus on combinatorial problems such as the planar Euclidean traveling salesperson problem, minimum spanning tree problem, and a spanning tree memory task. We develop aggregation methods that combine common solution fragments into a global solution and demonstrate that these aggregate solutions outperform the majority of individual solutions. These case studies suggest that the wisdom of the crowd phenomenon might be broadly applicable to problem-solving and decision-making situations that go beyond the estimation of single numbers.
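One way to combine common solution fragments, in the spirit of this abstract, is to score edges by agreement across individual solutions and assemble them greedily; the solutions below are fabricated, and the paper's exact aggregation methods differ:

```python
# Sketch of fragment-based aggregation: score each edge by how many individual
# spanning-tree solutions contain it, then greedily assemble an aggregate tree
# from the most-agreed edges (Kruskal-style, avoiding cycles). The individual
# "solutions" below are fabricated edge lists over five nodes.
from collections import Counter

def aggregate_spanning_tree(solutions, n_nodes):
    counts = Counter(frozenset(e) for sol in solutions for e in sol)
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for edge, _ in counts.most_common():
        a, b = tuple(edge)
        ra, rb = find(a), find(b)
        if ra != rb:          # keep the edge only if it joins two components
            parent[ra] = rb
            tree.append(edge)
    return tree

solutions = [
    [(0, 1), (1, 2), (2, 3), (3, 4)],
    [(0, 1), (1, 2), (2, 4), (2, 3)],
    [(0, 1), (0, 2), (2, 3), (3, 4)],
]
tree = aggregate_spanning_tree(solutions, 5)
print(sorted(tuple(sorted(e)) for e in tree))
```

Because edges are added in order of agreement, fragments shared by most individuals dominate the aggregate, which is the mechanism by which the crowd solution can beat most individual ones.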
Publisher: Elsevier BV
Date: 04-2004
Publisher: Elsevier BV
Date: 02-2001
Publisher: American Psychological Association (APA)
Date: 07-2015
DOI: 10.1037/DEC0000033
Publisher: Wiley
Date: 2012
DOI: 10.1111/J.1756-8765.2011.01175.X
Abstract: We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering problems. In these problems, people must order a set of items in terms of a given criterion (e.g., ordering American holidays through the calendar year). Using a cognitive model of behavior on this problem that allows for individual differences in knowledge, we are able to infer people's expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after completing the ordering of items, in terms of correlation with the actual accuracy of the answers. These results apply to six general knowledge tasks, like ordering American holidays, and two prediction tasks, involving sporting and television competitions. Based on these results, we discuss the potential and limitations of using cognitive models in assessing expertise.
Publisher: Elsevier BV
Date: 06-2022
Publisher: Elsevier BV
Date: 09-2011
Publisher: Springer Science and Business Media LLC
Date: 04-2010
DOI: 10.3758/PBR.17.2.263
Abstract: Iverson, Lee, and Wagenmakers (2009) claimed that Killeen's (2005) statistic prep overestimates the "true probability of replication." We show that Iverson et al. confused the probability of replication of an observed direction of effect with a probability of coincidence--the probability that two future experiments will return the same sign. The theoretical analysis is punctuated with a simulation of the predictions of prep for a realistic random effects world of representative parameters, when those are unknown a priori. We emphasize throughout that prep is intended to evaluate the probability of a replication outcome after observations, not to estimate a parameter. Hence, the usual conventional criteria (unbiasedness, minimum variance estimator) for judging estimators are not appropriate for probabilities such as p and prep.
Publisher: Springer Science and Business Media LLC
Date: 09-1998
DOI: 10.3758/BF03200675
Publisher: No publisher found
Date: 2022
Publisher: Public Library of Science (PLoS)
Date: 20-08-2014
Start Date: 2004
End Date: 2006
Funder: Australian Research Council
Start Date: 2002
End Date: 2003
Funder: Australian Research Council
Start Date: 2004
End Date: 2006
Funder: Australian Research Council
Start Date: 2005
End Date: 2007
Funder: Australian Research Council
Start Date: 2011
End Date: 2013
Funder: Australian Research Council
Start Date: 2011
End Date: 2013
Funder: Air Force Office of Scientific Research
Start Date: 2015
End Date: 2018
Funder: Australian Research Council
Start Date: 2019
End Date: 2021
Funder: Australian Research Council
Start Date: 2007
End Date: 2009
Funder: Air Force Office of Scientific Research
Start Date: 2013
End Date: 2014
Funder: Directorate for Social, Behavioral & Economic Sciences
Start Date: 2004
End Date: 2004
Funder: Australian Research Council
Start Date: 2008
End Date: 2014
Funder: Alzheimer's Association
Start Date: 11-2003
End Date: 12-2007
Amount: $38,700.00
Funder: Australian Research Council
Start Date: 2011
End Date: 06-2014
Amount: $219,821.00
Funder: Australian Research Council
Start Date: 2002
End Date: 12-2004
Amount: $77,000.00
Funder: Australian Research Council
Start Date: 2002
End Date: 12-2005
Amount: $171,000.00
Funder: Australian Research Council
Start Date: 2004
End Date: 12-2004
Amount: $696,005.00
Funder: Australian Research Council
Start Date: 2002
End Date: 12-2003
Amount: $45,000.00
Funder: Australian Research Council
Start Date: 07-2019
End Date: 06-2023
Amount: $440,000.00
Funder: Australian Research Council
Start Date: 2015
End Date: 04-2019
Amount: $330,500.00
Funder: Australian Research Council