ORCID Profile
0000-0002-2113-8020
Current Organisations
National Institute of Clean and Low-Carbon Energy, Ludwig-Maximilians-Universität München
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Cognitive Science | Decision Making | Computer Perception, Memory and Attention | Psychological Methodology, Design and Analysis | Psychology and Cognitive Sciences not elsewhere classified
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.COGPSYCH.2016.01.002
Abstract: Whether the capacity of visual working memory is better characterized by an item-based or a resource-based account continues to be keenly debated. Here, we propose that visual working memory is a flexible resource that is sometimes deployed in a slot-like manner. We develop a computational model that can either encode all items in a memory set, or encode only a subset of those items. A fixed-capacity mnemonic resource is divided among the items in memory. When fewer items are encoded, they are each remembered with higher fidelity, but at the cost of having to rely on an explicit guessing process when probed about an item that is not in memory. We use the new model to test the prediction that participants will more often encode the entire set of items when the demands on memory are predictable.
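The trade-off the abstract describes — splitting a fixed mnemonic resource over fewer items buys fidelity at the cost of more guessing — can be sketched numerically. This is a minimal illustration, not the paper's actual model; the set size, number of encoded items, and total-precision parameter are all hypothetical values chosen for the example.

```python
import math

def encoding_stats(set_size, encoded_k, total_precision=20.0):
    """Sketch of the slots-plus-resources trade-off: the probability the
    probed item is in memory, and the noise SD of a memory-based response,
    when a fixed resource is split evenly over encoded_k of set_size items.
    All parameter values are illustrative, not taken from the paper."""
    p_in_memory = encoded_k / set_size           # otherwise the observer guesses
    sd = math.sqrt(encoded_k / total_precision)  # fewer items -> higher fidelity
    return p_in_memory, sd

# Encoding only 2 of 6 items lowers the chance the probed item is in memory,
# but each remembered item is held with less noise than when all 6 are encoded.
```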
Publisher: Springer Science and Business Media LLC
Date: 12-2010
DOI: 10.3758/PBR.17.6.763
Publisher: Informa UK Limited
Date: 04-02-2015
Publisher: Springer Science and Business Media LLC
Date: 15-02-2018
Publisher: Springer Science and Business Media LLC
Date: 29-07-2019
Publisher: American Psychological Association (APA)
Date: 04-2008
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0036801
Abstract: Decision-makers effortlessly balance the need for urgency against the need for caution. Theoretical and neurophysiological accounts have explained this tradeoff solely in terms of the quantity of evidence required to trigger a decision (the "threshold"). This explanation has also been used as a benchmark test for evaluating new models of decision making, but the explanation itself has not been carefully tested against data. We rigorously test the assumption that emphasizing decision speed versus decision accuracy selectively influences only decision thresholds. In data from a new brightness discrimination experiment we found that emphasizing decision speed over decision accuracy not only decreases the amount of evidence required for a decision but also decreases the quality of information being accumulated during the decision process. This result was consistent for 2 leading decision-making models and in a model-free test. We also found the same model-based results in archival data from a lexical decision task (reported by Wagenmakers, Ratcliff, Gomez, & McKoon, 2008) and new data from a recognition memory task. We discuss implications for theoretical development and applications.
Publisher: Cambridge University Press (CUP)
Date: 14-05-2013
DOI: 10.1017/S0140525X12003068
Abstract: We focus on two issues: (1) an unusual, counterintuitive prediction that quantum probability (QP) theory appears to make regarding multiple sequential judgments, and (2) the extent to which QP is an appropriate and comprehensive benchmark for assessing judgment. These issues highlight how QP theory can fall prey to the same problems of arbitrariness that Pothos & Busemeyer (P&B) discuss as plaguing other models.
Publisher: Springer Science and Business Media LLC
Date: 27-05-2020
DOI: 10.1038/S42004-020-0311-4
Abstract: Supported Mn2O3 is useful in achieving high dinitrogen selectivity at low temperature during ammonia-selective catalytic reduction (SCR). However, its controlled synthesis is challenging when the supporting material is the conventional pure-silicon SBA-15 mesoporous molecular sieve. Here we show that silicon and aluminium in fly ash, the solid waste produced by coal-fired power plants, can be used to synthesize an Al-SBA-15 mesoporous molecular sieve support, which can guide the growth of Mn2O3 in the as-synthesized Fe-Mn/Al-SBA-15 NH3-SCR catalyst. Its superior catalytic performance is demonstrated by the high NOx conversion (≥90%) and selectivity (≥86%) at low temperatures (150–300 °C). The combined theoretical and experimental results reveal that the introduction of Al induces the growth of Mn2O3 catalysts. Our findings, therefore, provide a strategy for the rational design of low-temperature NH3-SCR catalysts through dopant-induced component engineering of composite materials.
Publisher: Springer Science and Business Media LLC
Date: 13-08-2008
DOI: 10.1007/S00426-008-0158-2
Abstract: Identification accuracy for sets of perceptually discriminable stimuli ordered on a single dimension (e.g., line length) is remarkably low, indicating a fundamental limit on information processing capacity. This surprising limit has naturally led to a focus on measuring and modeling choice probability in absolute identification research. We show that choice response time (RT) results can enrich our understanding of absolute identification by investigating a dissociation between RT and accuracy as a function of stimulus spacing. The dissociation is predicted by the SAMBA model of absolute identification (Brown, Marley, Donkin, & Heathcote, 2008), but cannot easily be accommodated by other theories. We show that SAMBA provides an accurate, parameter-free account of the dissociation that emerges from the architecture of the model and the physical attributes of the stimuli, rather than through numerical adjustment. This violation of the pervasive monotonic relationship between RT and accuracy has implications for model development, which are discussed.
Publisher: American Psychological Association (APA)
Date: 10-2016
DOI: 10.1037/XLM0000268
Abstract: We report an experiment designed to provide a qualitative contrast between knowledge-limited versions of mixed-state and variable-resources (VR) models of visual change detection. The key data pattern is that observers often respond “same” on big-change trials, while simultaneously being able to discriminate between same and small-change trials. The mixed-state model provides a natural account of this data pattern: With some probability, the observer is in a zero-memory state and is forced to guess. Thus, even on big-change trials, there is a significant probability that the observer will respond “same.” On other trials, the observer retains memory for the probed study item, and these memory-based responses allow the observer to show above-chance discrimination between same and small-change trials. By contrast, we show that important versions of the VR models that we refer to as
Publisher: American Psychological Association (APA)
Date: 03-2023
DOI: 10.1037/REV0000355
Abstract: Referring to probabilistic concepts (such as randomness, sampling, and probability distributions, among others) is commonplace in contemporary explanations of how people learn and make decisions in the face of environmental unknowns. Here, we critically evaluate this practice and argue that such concepts should only play a relatively minor part in psychological explanations. To make this point, we provide a theoretical analysis of what people need to do in order to deal with unknown aspects of a typical decision-making task (a repeated-choice gamble). This analysis reveals that the use of probabilistic concepts in psychological explanations may and often does conceal essential, nonprobabilistic steps that people need to take to attempt to solve the problems that environmental unknowns present. To give these steps a central role, we recast how people solve these problems as a type of hypothesis generation and evaluation, of which using probabilistic concepts to deal with unknowns is one of many possibilities. We also demonstrate some immediate practical consequences of our proposed approach in two experiments. This perspective implies a shift in focus toward nonprobabilistic aspects of psychological explanations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Elsevier BV
Date: 02-2016
DOI: 10.1016/J.COGPSYCH.2015.11.001
Abstract: Response-time (RT) and choice-probability data were obtained in a rapid visual sequential-presentation change-detection task in which memory set size, study-test lag, and objective change probabilities were manipulated. False "change" judgments increased dramatically with increasing lag, consistent with the idea that study items with long lags were ejected from a discrete-slots buffer. Error RTs were nearly invariant with set size and lag, consistent with the idea that the errors were produced by a stimulus-independent guessing process. The patterns of error and RT data could not be explained in terms of encoding limitations, but were consistent with the hypothesis that long retention lags produced a zero-stimulus-information state that required guessing. Formal modeling of the change-detection RT and error data pointed toward a hybrid model of visual working memory. The hybrid model assumed mixed states involving a combination of memory and guessing, but with higher memory resolution for items with shorter retention lags. The work raises new questions concerning the nature of the memory representations that are produced across the closely related tasks of change detection and visual memory search.
Publisher: Springer Science and Business Media LLC
Date: 12-10-2013
DOI: 10.3758/S13414-013-0561-7
Abstract: A fundamental issue concerning visual working memory is whether its capacity limits are better characterized in terms of a limited number of discrete slots (DSs) or a limited amount of a shared continuous resource. Rouder et al. (2008) found that a mixed-attention, fixed-capacity, DS model provided the best explanation of behavior in a change detection task, outperforming alternative continuous signal detection theory (SDT) models. Here, we extend their analysis in two ways: first, with experiments aimed at better distinguishing between the predictions of the DS and SDT models, and second, using a model-based analysis technique called landscaping, in which the functional-form complexity of the models is taken into account. We find that the balance of evidence supports a DS account of behavior in change detection tasks but that the SDT model is best when the visual displays always consist of the same number of items. In our General Discussion section, we outline, but ultimately reject, a number of potential explanations for the observed pattern of results. We finish by describing future research that is needed to pinpoint the basis for this observed pattern of results.
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/BUL0000165
Abstract: We respond to the comments of Logie and Vandierendonck to our article proposing benchmark findings for evaluating theories and models of short-term and working memory. The response focuses on the two main points of criticism: (a) Logie and Vandierendonck argue that the scope of the set of benchmarks is too narrow. We explain why findings on how working memory is used in complex cognition, findings on executive functions, and findings from neuropsychological case studies are currently not included in the benchmarks, and why findings with visual and spatial materials are less prevalent among them. (b) The critics question the usefulness of the benchmarks and their ratings for advancing theory development. We explain why selecting and rating benchmarks is important and justifiable, and acknowledge that the present selection and rating decisions are in need of continuous updating. The usefulness of the benchmarks of all ratings is also enhanced by our concomitant online posting of data for many of these benchmarks.
Publisher: Informa UK Limited
Date: 08-05-2019
DOI: 10.1080/01443615.2019.1575341
Abstract: Medical informed consent is the process by which a 'competent', non-coerced individual receives sufficient information, including risks of a medical procedure, and gives permission for it to occur. The capacity to give an informed consent might be impaired during labour. This study aimed to examine women's abilities to understand and remember during labour. Women were prospectively recruited at 36 weeks of gestation and randomised to undertake questionnaires which assessed their ability to understand and remember information. They were randomised to: (1) information given in labour only, written format (2) information in labour, verbal (3) information at 36 weeks plus labour, written (4) information at 36 weeks plus labour, verbal. Immediate comprehension and retention were assessed at 36 weeks, in labour, and 24-72 hours after birth. Forty-nine women completed the questionnaires regarding understanding and retention of information at 36 weeks, six intrapartum, and five postpartum (90% attrition). Women receiving information at 36 weeks and in labour
Publisher: Elsevier BV
Date: 03-2018
DOI: 10.1016/J.COGNITION.2017.11.002
Abstract: When people consider a series of random binary events, such as tossing an unbiased coin and recording the sequence of heads (H) and tails (T), they tend to erroneously rate sequences with less internal structure or order (such as HTTHT) as more probable than sequences containing more structure or order (such as HHHHH). This is traditionally explained as a local representativeness effect: Participants assume that the properties of long sequences of random outcomes-such as an equal proportion of heads and tails, and little internal structure-should also apply to short sequences. However, recent theoretical work has noted that the probability of a particular sequence of, say, heads and tails of length n occurring within a larger (>n) sequence of coin flips actually differs by sequence, so P(HHHHH) < P(HTTHT). In this alternative account, people apply rational norms based on limited experience. We test these accounts. Participants in Experiment 1 rated the likelihood of occurrence for all possible strings of 4, 5, and 6 observations in a sequence of coin flips. Judgments were better explained by representativeness in alternation rate, relative proportion of heads and tails, and sequence complexity, than by objective probabilities. Experiments 2 and 3 gave similar results using incentivized binary choice procedures. Overall the evidence suggests that participants are not sensitive to variation in objective probabilities of a sub-sequence occurring; they appear to use heuristics based on several distinct forms of representativeness.
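The claim that P(HHHHH) < P(HTTHT) — for occurrence anywhere within a longer flip sequence — can be verified by brute-force enumeration. The 10-flip window below is an arbitrary choice for illustration, not a value from the paper.

```python
from itertools import product

def strings_containing(pattern, n=10):
    """Count the H/T strings of length n that contain `pattern` at least once."""
    return sum(pattern in "".join(s) for s in product("HT", repeat=n))

# Every length-5 pattern has the same *expected* number of occurrences, but the
# self-overlapping run HHHHH clusters its occurrences into fewer strings, so
# the probability of seeing it at least once in a 10-flip window is lower.
runs = strings_containing("HHHHH")   # 112 of the 1024 strings
alternating = strings_containing("HTTHT")
```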
Publisher: Springer Science and Business Media LLC
Date: 26-04-2016
DOI: 10.3758/S13421-016-0618-7
Abstract: Previous research with the ratio-bias task found larger response latencies for conflict trials where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success) when compared to no-conflict trials where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or to a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal ratio trials) affected H but not C. These findings create new challenges to the debate between dual-process and single-process accounts, which are discussed.
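The conflict/no-conflict distinction in the ratio-bias task reduces to simple arithmetic: the heuristic favours the option with the larger numerator, while analysis favours the larger probability. The classifier below is our own illustrative sketch, not anything from the paper.

```python
from fractions import Fraction

def trial_type(a, b):
    """Classify a ratio-bias trial: 'conflict' when the option with the
    larger numerator is NOT the option with the larger probability."""
    heuristic = max(a, b, key=lambda f: f.numerator)  # intuitive preference
    analytic = max(a, b)                              # larger true probability
    return "no-conflict" if heuristic == analytic else "conflict"

trial_type(Fraction(1, 10), Fraction(9, 100))   # 'conflict': 9 > 1 but 9/100 < 1/10
trial_type(Fraction(1, 10), Fraction(11, 100))  # 'no-conflict': both favour 11/100
```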
Publisher: Springer Science and Business Media LLC
Date: 09-06-2017
DOI: 10.3758/S13423-017-1330-8
Abstract: There is growing interest in modelling how people make choices that involve both risks and delays, i.e., risky inter-temporal choices. We investigated an untested assumption underlying several proposed risky inter-temporal choice models: that pure risky choices and pure inter-temporal choices are special cases of risky inter-temporal choice. We tested this assumption by presenting a single group of participants with risky choices and inter-temporal choices. We then compared the performance of a model that is fit to both choice types simultaneously, with the performance of separate models fit to the risky choice and inter-temporal choice data. We find, using Bayesian model comparison, that the majority of participants are best fit by a single model that incorporates both risky and inter-temporal choices. This result supports the assumption that risky choices and inter-temporal choices may be special cases of risky inter-temporal choice. Our results also suggest that, under the conditions of our experiment, interpretation of monetary value is very similar in risky choices and inter-temporal choices.
Publisher: Elsevier BV
Date: 03-2018
DOI: 10.1016/J.COGPSYCH.2017.11.004
Abstract: With immediate repetition priming of forced choice perceptual identification, short prime durations produce positive priming (i.e., priming the target leads to higher accuracy, while priming the foil leads to lower accuracy). Many theories explain positive priming following short duration primes as reflecting increased perceptual fluency for the primed target (i.e., decreased identification latency). However, most studies only examine either accuracy or response times, rather than considering the joint constraints of response times and accuracy to properly address the role of decision biases and response caution. This is a critical oversight because several theories propose that the transition to negative priming following a long duration prime reflects a decision strategy to compensate for the effect of increased perceptual fluency. In contrast, the nROUSE model of Huber and O'Reilly (2003) explains this transition as reflecting perceptual habituation, and thus a change to perceptual disfluency. We confirmed this prediction by applying a sequential sampling model (the diffusion race model) to accuracy and response time distributions from a new single item same-different version of the priming task. In this way, we measured strategic biases and perceptual fluency in each condition for each subject. The nROUSE model was only applied to accuracy from the original forced-choice version of the priming task. This application of nROUSE produced separate predictions for each subject regarding the degree of fluency and disfluency in each condition, and these predictions were confirmed by the drift rate parameters (i.e., fluency) from the response time model in contrast to the threshold parameters (i.e., bias).
Publisher: PeerJ
Date: 24-01-2017
DOI: 10.7717/PEERJ.2921
Abstract: We investigated the relationship between psychometrically-defined schizotypy and the ability to detect a visual target pattern. Target detection is typically impaired by a surrounding pattern (context) with an orientation that is parallel to the target, relative to a surrounding pattern with an orientation that is orthogonal to the target (orientation-dependent contextual modulation). Based on reports that this effect is reduced in those with schizophrenia, we hypothesised that there would be a negative relationship between the relative score on psychometrically-defined schizotypy and the relative effect of orientation-dependent contextual modulation. We measured visual contrast detection thresholds and scores on the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) from a non-clinical sample (N = 100). Contrary to our hypothesis, we find an absence of a monotonic relationship between the relative magnitude of orientation-dependent contextual modulation of visual contrast detection and the relative score on any of the subscales of the O-LIFE. The apparent difference of this result with previous reports on those with schizophrenia suggests that orientation-dependent contextual modulation may be an informative condition in which schizophrenia and psychometrically-defined schizotypy are dissociated. However, further research is also required to clarify the strength of orientation-dependent contextual modulation in those with schizophrenia.
Publisher: American Psychological Association (APA)
Date: 2016
DOI: 10.1037/XLM0000188
Abstract: Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we asked people for prior and posterior inferences about the probability that 1 of 2 coins would generate certain outcomes. Most participants' inferences were inconsistent with Bayes' rule. Only in the simplest version of the task did the majority of participants adhere to Bayes' rule, but even in that case, there was a significant proportion that failed to do so. The current results highlight the importance of close quantitative comparisons between Bayesian inference and human data at the individual-subject level when evaluating models of cognition.
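Bayes' rule for a two-coin task of this kind can be made concrete with a worked sketch. The coin biases (0.5 vs. 0.9) and the uniform prior below are illustrative assumptions, not the experiment's actual values.

```python
def posterior_fair(flips, p_fair=0.5, p_biased=0.9, prior_fair=0.5):
    """Posterior probability that the fair coin produced the observed flips
    (an H/T string), given a two-coin hypothesis space. Bayes' rule:
    P(fair | data) = P(data | fair) P(fair) / P(data).
    All parameter values are illustrative only."""
    def likelihood(p):
        heads = flips.count("H")
        return p ** heads * (1 - p) ** (len(flips) - heads)
    numerator = likelihood(p_fair) * prior_fair
    evidence = numerator + likelihood(p_biased) * (1 - prior_fair)
    return numerator / evidence

# Four heads in a row favour the biased coin; four tails all but rule it out.
posterior_fair("HHHH")   # ≈ 0.087
posterior_fair("TTTT")   # ≈ 0.998
```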
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0022494
Publisher: Public Library of Science (PLoS)
Date: 12-12-2014
Publisher: Proceedings of the National Academy of Sciences
Date: 12-03-2018
Abstract: We describe and demonstrate an empirical strategy useful for discovering and replicating empirical effects in psychological science. The method involves the design of a metastudy, in which many independent experimental variables—that may be moderators of an empirical effect—are indiscriminately randomized. Radical randomization yields rich datasets that can be used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols and enhances the generalizability of these claims. The strategy is made feasible by advances in hierarchical Bayesian modeling that allow for the pooling of information across unlike experiments and designs and is proposed here as a gold standard for replication research and exploratory research. The practical feasibility of the strategy is demonstrated with a replication of a study on subliminal priming.
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0035166
Publisher: SAGE Publications
Date: 23-04-2012
Abstract: A classic law of cognition is that forgetting curves are closely approximated by power functions. This law describes relations between different empirical dependent variables and the retention interval, and the precise form of the functional relation depends on the scale used to measure each variable. In the research reported here, we conducted a recognition task involving both short- and long-term probes. We discovered that formal memory-strength parameters from an exemplar-recognition model closely followed a power function of the lag between studied items and a test probe. The model accounted for rich sets of response time (RT) data at both individual-subject and individual-lag levels. Because memory strengths were derived from model fits to choices and RTs from individual trials, the psychological power law was independent of the scale used to summarize the forgetting functions. Alternative models that assumed different functional relations or posited a separate fixed-strength working memory store fared considerably worse than the power-law model did in predicting the data.
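A power-law forgetting curve of the kind described can be sketched in a few lines. The parameter values below are hypothetical, chosen only to show the signature property: equal ratios of lag produce equal ratios of strength, so the curve is a straight line on log-log axes.

```python
def strength(lag, a=2.0, b=0.5):
    """Power-law forgetting sketch: memory strength falls as lag**(-b).
    The scale a and decay rate b are illustrative, not fitted values."""
    return a * lag ** -b

# Scale invariance, the hallmark of a power law: quadrupling the lag
# halves the strength (for b = 0.5) no matter where you start.
ratio_short = strength(1) / strength(4)    # 2.0
ratio_long = strength(10) / strength(40)   # also 2.0
```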
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/BUL0000153
Abstract: Any mature field of research in psychology-such as short-term/working memory-is characterized by a wealth of empirical findings. It is currently unrealistic to expect a theory to explain them all; theorists must satisfice with explaining a subset of findings. The aim of the present article is to make the choice of that subset less arbitrary and idiosyncratic than is current practice. We propose criteria for identifying benchmark findings that every theory in a field should be able to explain: Benchmarks should be reproducible, generalize across materials and methodological variations, and be theoretically informative. We propose a set of benchmarks for theories and computational models of short-term and working memory. The benchmarks are described in as theory-neutral a way as possible, so that they can serve as empirical common ground for competing theoretical approaches. Benchmarks are rated on three levels according to their priority for explanation. Selection and ratings of the benchmarks are based on consensus among the authors, who jointly represent a broad range of theoretical perspectives on working memory, and they are supported by a survey among other experts on working memory. The article is accompanied by a web page providing an open forum for discussion and for submitting proposals for new benchmarks and a repository for reference data sets for each benchmark.
Publisher: American Psychological Association (APA)
Date: 11-2020
DOI: 10.1037/REV0000223
Publisher: Springer Science and Business Media LLC
Date: 28-06-2016
Publisher: Springer Science and Business Media LLC
Date: 26-06-2014
DOI: 10.3758/S13423-014-0675-5
Abstract: Zhang and Luck (Psychological Science, 20, 423-428, 2009) found that perceptual memories are lost over time via sudden death rather than gradual decay. However, they acknowledged that participants may have instead lost memory for the locations of objects. We required observers to recall only a single object. Although the paradigm eliminated the need to maintain object-location bindings, the possibility that observers would use verbal labels increased. To measure the precision of verbal labeling, we included explicit verbal-labeling and label-matching trials. We applied a model that measured the contributions of sudden death, gradual decay, and verbal labeling to recall. Our model-based evidence pointed to sudden death as the primary vehicle by which perceptual memories were lost. Crucially, however, the sudden-death hypothesis was favored only when the verbal-labeling component was included as part of the modeling. The results underscore the importance of taking into account the potential role of verbal-labeling processes in investigations of perceptual memory.
Publisher: Wiley
Date: 31-07-2018
DOI: 10.1111/COGS.12667
Abstract: How does the process of information transmission affect the cultural or linguistic products that emerge? This question is often studied experimentally and computationally via iterated learning, a procedure in which participants learn from previous participants in a chain. Iterated learning is a powerful tool because, when all participants share the same priors, the stationary distributions of the iterated learning chains reveal those priors. In many situations, however, it is unreasonable to assume that all participants share the same prior beliefs. We present four simulation studies and one experiment demonstrating that when the population of learners is heterogeneous, the behavior of an iterated learning chain can be unpredictable and is often systematically distorted by the learners with the most extreme biases. This results in group-level outcomes that reflect neither the behavior of any individuals within the population nor the overall population average. We discuss implications for the use of iterated learning as a methodological tool as well as for the processes that might have shaped cultural and linguistic evolution in the real world.
Publisher: Springer Science and Business Media LLC
Date: 02-2009
DOI: 10.3758/BRM.41.1.154
Publisher: eLife Sciences Publications, Ltd
Date: 09-11-2021
DOI: 10.7554/ELIFE.72185
Abstract: Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
Publisher: Elsevier BV
Date: 10-2014
Publisher: Springer Science and Business Media LLC
Date: 16-11-2010
Publisher: Springer Science and Business Media LLC
Date: 24-04-2019
Publisher: Elsevier BV
Date: 2014
Publisher: American Psychological Association (APA)
Date: 2015
DOI: 10.1037/XLM0000083
Abstract: The strength of conclusions about the adoption of different categorization strategies-and their implications for theories about the cognitive and neural bases of category learning-depends heavily on the techniques for identifying strategy use. We examine performance in an often-used "information-integration" category structure and demonstrate that strategy identification is affected markedly by the range of models under consideration, the type of data collected, and model-selection techniques. We use a set of 27 potential models that represent alternative rule-based and information-integration categorization strategies. Our experimental paradigm includes the presentation of nonreinforced transfer stimuli that improve one's ability to discriminate among the predictions of alternative models. Our model-selection techniques incorporate uncertainty in the identification of individuals as either rule-based or information-integration strategy users. Based on this analysis we identify 48% of participants as unequivocally using an information-integration strategy. However, adopting the standard practice of using a restricted set of models, restricted data, and ignoring the degree of support for a particular strategy, we would typically conclude that 89% of participants used an information-integration strategy. We discuss the implications of potentially erroneous strategy identification for the security of conclusions about the categorization capabilities of various participant and patient groups.
Publisher: Springer Science and Business Media LLC
Date: 12-2009
Publisher: Springer Science and Business Media LLC
Date: 23-03-2012
DOI: 10.3758/S13423-012-0236-8
Abstract: A classic question in cognitive psychology concerns the nature of memory search in short-term recognition. Despite its long history of investigation, however, there is still no consensus on whether memory search takes place serially or in parallel or is based on global access. In the present investigation, we formalize a variety of models designed to account for detailed response time distribution data in the classic Sternberg (Science 153: 652-654, 1966) memory-scanning task. The models vary in their mental architectures (serial exhaustive, parallel self-terminating, and global access). Furthermore, the component processes within the architectures that make match/mismatch decisions are formalized as linear ballistic accumulators (LBAs). In fast presentation rate conditions, the parallel and global access models provide far better accounts of the data than does the serial model. LBA drift rates are found to depend almost solely on the lag between study items and test probes, whereas response thresholds change with memory set size. Under slow presentation rate conditions, even simple versions of the serial-exhaustive model provide accounts of the data that are as good as those of the parallel and global access models. We provide alternative interpretations of the results in our General Discussion.
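The linear ballistic accumulator (LBA) race used as the decision component above can be sketched in a few lines. This is a generic illustration of an LBA trial, not the paper's fitted architecture; all parameter values are hypothetical.

```python
import random

def lba_trial(drifts, b=1.0, A=0.5, s=0.3, t0=0.2):
    """One Linear Ballistic Accumulator race (illustrative parameters):
    each accumulator draws a start point uniformly from [0, A] and a drift
    rate from Normal(v, s), then rises linearly until one reaches the
    threshold b. Returns (winning_index, response_time)."""
    finish = []
    for v in drifts:
        d = random.gauss(v, s)
        while d <= 0:                        # resample non-positive drifts
            d = random.gauss(v, s)
        start = random.uniform(0.0, A)
        finish.append((b - start) / d)       # linear rise: time to threshold
    choice = min(range(len(finish)), key=finish.__getitem__)
    return choice, t0 + finish[choice]       # add non-decision time t0
```

With a "match" accumulator given a higher drift than a "mismatch" accumulator, the match response wins most races and does so faster, which is the sense in which drift rates carry stimulus information while thresholds carry caution.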
Publisher: Cambridge University Press (CUP)
Date: 2022
DOI: 10.1017/S0140525X21000479
Abstract: Generalization does not come from repeatedly observing phenomena in numerous settings, but from theories explaining what is general in those phenomena. Expecting future behavior to look like past observations is especially problematic in psychology, where behaviors change when people's knowledge changes. Psychology should thus focus on theories of people's capacity to create and apply new representations of their environments.
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/A0034247
Publisher: Elsevier BV
Date: 04-2017
Publisher: American Psychological Association (APA)
Date: 12-2019
DOI: 10.1037/XGE0000603
Abstract: We investigated previous findings suggesting a paradoxical inconsistency of people's beliefs and choices: When making decisions under uncertainty, people seem to both overestimate the probability of rare events in their judgments and underweight the probability of the same rare events in their choices. In our reexamination, we found that people's beliefs are consistent with their decisions, but they do not necessarily correspond with the environment. Both overestimation and underweighting of the rare event seemed to result from (most, but not all) participants' mistaken belief that they can infer and exploit sequential patterns in a static environment. In addition, we found that such inaccurate representations can be improved through incentives. Finally, detailed analysis suggested a mixture of individual-level response patterns, which can give rise to an erroneous interpretation of group-level patterns. Our results offer an explanation for why beliefs and decisions can appear contradictory and present challenges to some current models of decisions under uncertainty. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Publisher: American Psychological Association (APA)
Date: 10-2015
DOI: 10.1037/DEC0000024
Publisher: Springer Science and Business Media LLC
Date: 09-10-2019
Publisher: SAGE Publications
Date: 14-11-2019
Abstract: When people make risky choices, two kinds of information are crucial: outcome values and outcome probabilities. Here, we demonstrate that the juncture at which value and probability information is provided has a fundamental effect on choice. Across four experiments involving 489 participants, we compared two decision-making scenarios: one in which value information was revealed during sampling (standard) and one in which value information was revealed after sampling (value ignorance). On average, participants made riskier choices when value information was provided after sampling. Moreover, parameter estimates from a hierarchical Bayesian implementation of cumulative-prospect theory suggested that participants overweighted rare events when value information was absent during sampling but did not overweight such events in the standard condition. This suggests that the impact of rare events on choice relies crucially on the timing of probability and value integration. We provide paths toward mechanistic explanations of our results based on frameworks that assume different underlying cognitive architectures.
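As background on what "overweighting rare events" means in cumulative-prospect theory (this sketch is not taken from the paper above; it uses the standard one-parameter weighting function from Tversky and Kahneman, 1992, and the gamma value is illustrative):

```python
def cpt_weight(p, gamma=0.61):
    """Standard one-parameter CPT probability weighting function.

    With gamma < 1 the curve is inverse-S-shaped: small probabilities
    are inflated (overweighted) and moderate-to-large probabilities
    are deflated, relative to the identity line w(p) = p.
    The default gamma = 0.61 is an illustrative textbook value.
    """
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# A rare event (p = 0.01) receives a decision weight several times
# its objective probability, while p = 0.5 is slightly underweighted.
print(cpt_weight(0.01))  # roughly 0.055, well above 0.01
print(cpt_weight(0.5))   # roughly 0.42, below 0.5
```

In hierarchical Bayesian fits like the one described above, a participant-level gamma below 1 is the signature of rare-event overweighting; gamma near 1 corresponds to objective weighting.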
Publisher: Frontiers Media SA
Date: 2012
Publisher: American Psychological Association (APA)
Date: 07-2019
DOI: 10.1037/XLM0000641
Abstract: Past research indicates that individuals respond adaptively to contextual factors in multiattribute choice tasks. Yet it remains unclear how this adaptation is cognitively governed. In this article, empirically testable implementations of two prominent competing theoretical frameworks are developed and compared across two multiattribute choice experiments: the adaptive toolbox framework, assuming discrete choice strategies, and the adjustable spanner framework, assuming one comprehensive adaptive strategy. Results from two experiments indicate that in the environments we tested, in which all cue information was presented openly, the toolbox makes better predictions than the adjustable spanner both in- and out-of-sample. Follow-up simulation studies indicate that it is difficult to discriminate the models based on choice outcomes alone but allowed the identification of a small subset of cases where the predictions of both models diverged. Our results suggest that people adapt their decision strategies by flexibly switching between using as little information as possible and using all of the available information. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Publisher: Elsevier BV
Date: 06-2016
Publisher: Elsevier BV
Date: 02-2020
Publisher: SAGE Publications
Date: 2015
DOI: 10.1080/17470218.2014.939665
Abstract: Sequential effects are ubiquitous in decision-making, but no more than in the absolute identification task, where participants must identify stimuli from a set of items that vary on a single dimension. A number of competing explanations for these sequential effects have been proposed, and recently Matthews and Stewart [(2009a). The effect of inter-stimulus interval on sequential effects in absolute identification. The Quarterly Journal of Experimental Psychology, 62, 2014–2029] showed that manipulations of the time between decisions are useful in discriminating between these accounts. We use a Bayesian hierarchical regression model to show that inter-trial interval has an influence on behaviour when it varies across different blocks of trials, but not when it varies from trial to trial. We discuss the implications of both our and Matthews and Stewart's results on the effect of inter-trial interval for theories of sequential effects.
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/A0029667
Abstract: A classic distinction in perceptual information processing is whether stimuli are composed of separable dimensions, which are highly analyzable, or integral dimensions, which are processed holistically. Previous tests of a set of logical-rule models of classification have shown that separable-dimension stimuli are processed serially if the dimensions are spatially separated and as a mixture of serial and parallel processes if the dimensions are spatially overlapping (Fifić, Little, & Nosofsky, 2010; Little, Nosofsky, & Denton, 2011). In the current research, the logical-rule models are applied to predict response-time (RT) data from participants trained to classify integral-dimension color stimuli into rule-based categories. In dramatic contrast to the previous results for separable-dimension stimuli, analysis of the current data indicated that processing was best captured by a single-channel coactive model. The results converge with previous operations that suggest holistic processing of integral-dimension stimuli and demonstrate considerable generality for the application of the logical-rule models to predicting RT data from rule-based classification experiments.
Publisher: Springer Science and Business Media LLC
Date: 11-2009
Publisher: Elsevier BV
Date: 04-2011
Publisher: Royal Society of Chemistry (RSC)
Date: 2023
DOI: 10.1039/D2EY00077F
Abstract: A denitrification catalyst without a metal active component exhibits excellent anti-poisoning performance and stability.
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0035947
Abstract: The ability to trade accuracy for speed is fundamental to human decision making. The speed-accuracy trade-off (SAT) effect has received decades of study, and is well understood in relatively simple decisions: collecting more evidence before making a decision allows one to be more accurate but also slower. The SAT in more complex paradigms has been given less attention, largely due to limits in the models and statistics that can be applied to such tasks. Here, we have conducted the first analysis of the SAT in multiple signal processing, using recently developed technologies for measuring capacity that take into account both response time and choice probability. We show that the primary influence of caution in our redundant-target experiments is on the threshold amount of evidence required to trigger a response. However, in a departure from the usual SAT effect, we found that participants strategically ignored redundant information when they were forced to respond quickly, but only when the additional stimulus was reliably redundant. Interestingly, because the capacity of the system was severely limited on redundant-target trials, ignoring additional targets meant that processing was more efficient when making fast decisions than when making slow and accurate decisions, where participants' limited resources had to be divided between the 2 stimuli.
Publisher: Wiley
Date: 30-06-2017
DOI: 10.1002/BDM.2025
Publisher: SAGE Publications
Date: 06-04-2016
Abstract: The long-held popular notion of intuition has garnered much attention both academically and popularly. Although most people agree that there is such a phenomenon as intuition, involving emotionally charged, rapid, unconscious processes, little compelling evidence supports this notion. Here, we introduce a technique in which subliminal emotional information is presented to subjects while they make fully conscious sensory decisions. Our behavioral and physiological data, along with evidence-accumulator models, show that nonconscious emotional information can boost accuracy and confidence in a concurrent emotion-free decision task, while also speeding up response times. Moreover, these effects were contingent on the specific predictive arrangement of the nonconscious emotional valence and motion direction in the decisional stimulus. A model that simultaneously accumulates evidence from both physiological skin conductance and conscious decisional information provides an accurate description of the data. These findings support the notion that nonconscious emotions can bias concurrent nonemotional behavior—a process of intuition.
Publisher: Springer Science and Business Media LLC
Date: 08-02-2022
DOI: 10.1186/S41235-022-00364-Y
Abstract: In three experiments, we sought to understand when and why people use an algorithm decision aid. Distinct from recent approaches, we explicitly enumerate the algorithm's accuracy while also providing summary feedback and training that allowed participants to assess their own skills. Our results highlight that such direct performance comparisons between the algorithm and the individual encourage a strategy of selective reliance on the decision aid: individuals ignored the algorithm when the task was easier and relied on the algorithm when the task was harder. Our systematic investigation of summary feedback, training experience, and strategy hint manipulations shows that further opportunities to learn about the algorithm encourage not only increased reliance on the algorithm but also engagement in experimentation and verification of its recommendations. Together, our findings emphasize the decision-maker's capacity to learn about the algorithm, providing insights for how we can improve the use of decision aids.
Publisher: SAGE Publications
Date: 16-02-2021
Abstract: Science progresses by finding and correcting problems in theories. Good theories are those that help facilitate this process by being hard to vary: They explain what they are supposed to explain, they are consistent with other good theories, and they are not easily adaptable to explain anything. Here we argue that, rather than a lack of distinction between exploratory and confirmatory research, an abundance of flexible theories is a better explanation for the current replicability problems of psychology. We also explain why popular methods-oriented solutions fail to address the real problem of flexibility. Instead, we propose that a greater emphasis on theory criticism by argument might improve replicability.
Publisher: Proceedings of the National Academy of Sciences
Date: 27-10-2014
Abstract: One of the more intriguing but controversial ideas in psychology is that unconscious information can influence our decisions without us even knowing it. Here, we explicitly tested these controversial ideas with a novel behavioral task and computational models of decision-making. We report that unconscious information can be accumulated in a similar manner but less effectively than conscious information. However, unlike conscious information, unconscious information does not seem to boost decision confidence. Our findings cannot be accounted for using existing models of priming or adaptation.
Publisher: Informa UK Limited
Date: 26-08-2022
Publisher: Springer Science and Business Media LLC
Date: 18-11-2014
DOI: 10.3758/S13421-014-0487-X
Abstract: The slots model of visual working memory, despite its simplicity, has provided an excellent account of data across a number of change detection experiments. In the current research, we provide a new test of the slots model by investigating its ability to account for the increased prevalence of errors when there is a potential for confusion about the location in which items are presented during study. We assume that such location errors in the slots model occur when the feature information for an item in one location is swapped with the feature information for an item in another location. We show that such a model predicts two factors that will influence the extent to which location errors occur: (1) whether the test item changes to an "external" item not presented at study, or to an "internal" item presented at another location during study, and (2) the number of items in the study array. We manipulate these factors in an experiment, and show that the slots model with location errors fails to provide a satisfactory account of the observed data.
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 2017
End Date: 06-2020
Amount: $295,000.00
Funder: Australian Research Council
Start Date: 2016
End Date: 06-2019
Amount: $290,558.00
Funder: Australian Research Council
Start Date: 04-2013
End Date: 12-2017
Amount: $374,943.00
Funder: Australian Research Council
Start Date: 04-2013
End Date: 02-2018
Amount: $229,000.00
Funder: Australian Research Council