ORCID Profile
0000-0002-9060-6756
Current Organisation
The University of Newcastle
Publisher: Springer Science and Business Media LLC
Date: 11-2004
DOI: 10.3758/BF03206555
Abstract: The most powerful tests of response time (RT) models often involve the whole shape of the RT distribution, thus avoiding mimicking that can occur at the level of RT means and variances. Nonparametric distribution estimation is, in principle, the most appropriate approach, but such estimators are sometimes difficult to obtain. On the other hand, distribution fitting, given an algebraic function, is both easy and compact. We review the general approach to performing distribution fitting with maximum likelihood (ML) and a method based on quantiles (quantile maximum probability, QMP). We show that QMP has both small bias and good efficiency when used with common distribution functions (the ex-Gaussian, Gumbel, lognormal, Wald, and Weibull distributions). In addition, we review some software packages performing ML (PASTIS, QMPE, DISFIT, and MATHEMATICA) and compare their results. In general, the differences between packages have little influence on the optimal solution found, but the form of the distribution function has: Both the lognormal and the Wald distributions have non-linear dependencies between the parameter estimates that tend to increase the overall bias in parameter recovery and to decrease efficiency. We conclude by laying out a few pointers on how to relate descriptive models of RT to cognitive models of RT. A program that generated the random deviates used in our studies may be downloaded from rchive/.
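A minimal illustration of the ML route the abstract reviews (not the PASTIS/QMPE/DISFIT code): scipy's built-in ex-Gaussian, `exponnorm`, which uses shape K = tau/sigma, fitted to simulated RTs. All parameter values are assumptions chosen for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 1,000 RTs from an ex-Gaussian: Gaussian(mu, sigma) + Exponential(tau).
mu, sigma, tau = 0.4, 0.05, 0.15
rts = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

# scipy's exponnorm is the ex-Gaussian with shape K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale  # recover tau from the fitted shape and scale

print(round(loc, 3), round(scale, 3), round(tau_hat, 3))
```

With a clean sample this size, the maximum-likelihood estimates land close to the generating values, which is the "easy and compact" property the abstract highlights.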
Publisher: Center for Open Science
Date: 24-05-2023
Abstract: Whenever someone in a team tries to help others, it is crucial that they have some understanding of the other team members' goals. In modern teams, this applies equally to human and artificial ("bot") assistants. Understanding when one does not know something is crucial for stopping the execution of inappropriate behavior and, ideally, attempting to learn more appropriate actions. From a statistical point of view, this can be translated to assessing whether none of the hypotheses in a considered set is correct. Here we investigate a novel approach for making this assessment based on monitoring the maximum a posteriori probability (MAP) of a set of candidate hypotheses as new observations arrive. Simulation studies suggest that this is a promising approach; however, we also caution that there may be cases where it is more challenging. The problem we study and the solution we propose are general ones, which apply well beyond human-bot teaming, including, for example, to the scientific process of theory development.
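As a toy illustration of the MAP-monitoring idea (the Bernoulli setting and all numbers here are invented, not the paper's): maintain a posterior over a small hypothesis set and track its maximum as observations arrive.

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate hypotheses: the teammate's goal is one of these Bernoulli rates.
hypotheses = np.array([0.2, 0.5, 0.8])
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))

def map_trajectory(observations, hyps, prior):
    """Posterior over hyps after each observation; return the running MAP prob."""
    log_post = np.log(prior)
    traj = []
    for x in observations:
        log_post += np.where(x == 1, np.log(hyps), np.log(1.0 - hyps))
        post = np.exp(log_post - log_post.max())  # stable normalisation
        post /= post.sum()
        traj.append(post.max())
    return np.array(traj)

# Data actually generated by rate 0.8: the MAP probability should climb toward 1.
data_in = rng.binomial(1, 0.8, 200)
traj_in = map_trajectory(data_in, hypotheses, prior)
print(round(traj_in[-1], 3))
```

When the generating process lies outside the hypothesis set, the trajectory behaves differently, which is the signal the paper proposes to monitor.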
Publisher: Springer Science and Business Media LLC
Date: 03-05-2018
DOI: 10.1007/S00520-018-4226-X
Abstract: To explore in a sample of medical oncology outpatients and their nominated support persons (SPs): (1) the relative influence of pain, consciousness and life extension on end-of-life (EOL) choices using a discrete choice experiment (DCE); (2) the extent to which SPs can predict the choices of index patients; and (3) whether having a previous end-of-life discussion was associated with dyad agreement. Adult medical oncology patients and their SPs were approached for consent to complete a survey containing a DCE. Participants chose between three unlabelled care scenarios characterised by three attributes: pain (mild, moderate or severe), consciousness (some, half or most of the time) and extension of life (1, 2 or 3 weeks). Respondents selected (1) most-preferred and (2) least-preferred scenarios within each question. SPs answered the same questions but from the patient's perspective. A total of 110 patients and 64 SPs responded overall (42 matched patient-SP dyads). For patients, pain was the most influential predictor of most- and least-preferred scenarios (z = 12.5 and z = 12.9). For SPs, pain was the only significant predictor of most- and least-preferred scenarios (z = 9.7 and z = 11.5). Dyad agreement was greater for choices about least-preferred (69%) compared to most-preferred scenarios (55%). Agreement was slightly higher for dyads reporting a previous EOL discussion (68 vs. 48%, p = 0.065). Patients and SPs place significant value on avoiding severe pain when making end-of-life choices, over and above level of consciousness or life extension. People's views about the end-of-life scenarios they most, as well as least, prefer should be sought.
Publisher: Springer Science and Business Media LLC
Date: 05-2004
DOI: 10.3758/BF03195574
Abstract: We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good as, and in some cases better than, CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
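The QMP idea can be sketched in a few lines (an illustrative toy, not the QMPE program): bin the data at its sample quantiles, then maximise the multinomial probability of the inter-quantile counts under the candidate distribution. The shifted-lognormal parameters and decile choice below are assumptions made for the sketch.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Simulated RTs from a shifted lognormal (the shift is the parameter-dependent
# lower bound that makes CML estimation fragile).
true_shift, true_s, true_scale = 0.2, 0.4, 0.3
rts = stats.lognorm.rvs(true_s, loc=true_shift, scale=true_scale,
                        size=500, random_state=rng)

# Sample quantiles used as the data summary (deciles here), plus the counts
# of observations falling between successive quantiles.
probs = np.linspace(0.1, 0.9, 9)
q = np.quantile(rts, probs)
counts = np.histogram(rts, bins=np.concatenate(([-np.inf], q, [np.inf])))[0]

def neg_qmp(params):
    """Negative multinomial log-likelihood of the inter-quantile bin counts."""
    shift, s, scale = params
    if s <= 0 or scale <= 0:
        return np.inf
    cdf = stats.lognorm.cdf(q, s, loc=shift, scale=scale)
    p = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    if np.any(p <= 0):
        return np.inf
    return -np.sum(counts * np.log(p))

res = optimize.minimize(neg_qmp, x0=[0.1, 0.5, 0.5], method="Nelder-Mead")
shift_hat, s_hat, scale_hat = res.x
print(np.round(res.x, 2))
```

The objective only ever evaluates the candidate CDF at a handful of quantiles, which is what makes the quantile-based route cheap and robust relative to pointwise likelihoods near the lower bound.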
Publisher: Wiley
Date: 10-2019
DOI: 10.1111/IMJ.14456
Abstract: Only 2-3% of cancer patients enrol in a trial. We surveyed patients' willingness to change clinician or treating centre, or to travel, to participate in trials, with the aim of improving trial recruitment. Of 188 respondents, 79% were willing to participate in a trial in at least one scenario. Increasing travel time, a change in oncologist, private health insurance and out-of-pocket expenses decreased the likelihood of joining a trial. Rural and regional patients, and those from lower socio-economic areas, were more willing to travel. To optimise access to trials, clinicians should refer within and between institutions.
Publisher: American Psychological Association (APA)
Date: 2005
Publisher: American Psychological Association (APA)
Date: 04-2008
Publisher: Elsevier BV
Date: 02-2017
Publisher: Springer Science and Business Media LLC
Date: 04-2023
DOI: 10.1186/S13750-023-00299-X
Abstract: Mammals, globally, are facing population declines. Protecting and breeding threatened populations inside predator-free havens and translocating them back to the wild is commonly viewed as a solution. These approaches can expose predator-naïve animals to predators they have never encountered, and as a result many conservation projects have failed due to the predation of individuals that lacked appropriate anti-predator responses. Hence, robust ways to measure anti-predator responses are urgently needed to help identify naïve populations at risk, to select appropriate animals for translocation, and to monitor managed populations for changes in anti-predator traits. Here, we undertake a systematic review that collates existing behavioural assays of anti-predator responses and identifies assay types and predator cues that provoke the greatest behavioural responses. We retrieved articles from academic bibliographic databases and grey literature sources (such as government and conservation management reports), using a Boolean search string. Each article was screened against eligibility criteria determined using the PICO (Population–Intervention–Comparator–Outcome) framework. Using data extracted from each article, we mapped all known behavioural assays for quantifying anti-predator responses in mammals and examined the context in which each assay has been implemented (e.g., species tested, predator cue characteristics). Finally, with mixed effects modelling, we determined which of these assays and predator cue types elicit the greatest behavioural responses, based on the standardised difference in response between treatment and control groups. We reviewed 5168 articles, 211 of which were eligible, constituting 1016 studies on 126 mammal species, a quarter of which are threatened by invasive species.
We identified six major types of behavioural assays: behavioural focals, capture probability, feeding station, flight initiation distance, giving-up density, and stimulus presentations. Across studies, there were five primary behaviours measured: activity, escape, exploration, foraging, and vigilance. These behaviours yielded similar effect sizes across studies. With regard to study design, however, studies that used natural olfactory cues tended to report larger effect sizes than those that used artificial cues. Effect sizes were larger in studies that analysed the sexes individually, rather than combining males and females. Studies that used ‘blank’ control treatments (the absence of a stimulus), rather than a treatment with a control stimulus, had higher effect sizes. Although many studies involved repeat measures of known individuals, only 15.4% of these used their data to calculate measures of individual repeatability. Our review highlights important aspects of experimental design and reporting that should be considered. Where possible, studies of anti-predator behaviour should use appropriate control treatments, analyse males and females separately, and choose organic predator cues. Studies should also look to report the individual repeatability of behavioural traits, and to correctly identify measures of uncertainty (error bars). The review highlights robust methodology, reveals promising techniques on which to focus future assay development, and collates relevant information for conservation managers.
Publisher: Elsevier BV
Date: 12-2002
Publisher: Elsevier BV
Date: 03-2017
Publisher: American Psychological Association (APA)
Date: 2015
DOI: 10.1037/SER0000011
Abstract: With the growth of client-centered and patient-as-consumer approaches to care, understanding the preferences of psychologists' patients has never been more important. Traditional methods for measuring preference, such as Likert-type rating scales, suffer from well-known limitations, including subjectivity and positive bias. Best-worst scaling (BWS) provides an opportunity to address some of these limitations. Despite the growing use of BWS to measure preference in other areas, BWS methods are not being used in the study of psychologists' patients. We demonstrate BWS data collection and analysis. With a sample of only 31 clients from 2 Australian psychology practices, we show that the strength of preference for different aspects of psychologists' appointments can be measured accurately. Additionally, the inclusion of readily available timing data from responses improved measurement sensitivity and statistical power.
Publisher: Wiley
Date: 15-05-2013
DOI: 10.1111/COGS.12042
Abstract: The ability to imagine objects undergoing rotation (mental rotation) improves markedly with practice, but an explanation of this plasticity remains controversial. Some researchers propose that practice speeds up the rate of a general-purpose rotation algorithm. Others maintain that performance improvements arise through the adoption of a new cognitive strategy: repeated exposure leads to rapid retrieval from memory of the required response to familiar mental rotation stimuli. In two experiments we provide support for an integrated explanation of practice effects in mental rotation by combining behavioral and EEG measures in a way that provides more rigorous inference than is available from either measure alone. Before practice, participants displayed two well-established signatures of mental rotation: Both response time and EEG negativity increased linearly with rotation angle. After extensive practice with a small set of stimuli, both signatures of mental rotation had all but disappeared. In contrast, after the same amount of practice with a much larger set both signatures remained, even though performance improved markedly. Taken together, these results constitute a reversed association, which cannot arise from variation in a single cause, and so they provide compelling evidence for the existence of two routes to expertise in mental rotation. We also found novel evidence that practice with the large but not the small stimulus set increased the magnitude of an early visual evoked potential, suggesting increased rotation speed is enabled by improved efficiency in extracting three-dimensional information from two-dimensional stimuli.
Publisher: Springer Science and Business Media LLC
Date: 13-08-2008
DOI: 10.1007/S00426-008-0158-2
Abstract: Identification accuracy for sets of perceptually discriminable stimuli ordered on a single dimension (e.g., line length) is remarkably low, indicating a fundamental limit on information processing capacity. This surprising limit has naturally led to a focus on measuring and modeling choice probability in absolute identification research. We show that choice response time (RT) results can enrich our understanding of absolute identification by investigating a dissociation between RT and accuracy as a function of stimulus spacing. The dissociation is predicted by the SAMBA model of absolute identification (Brown, Marley, Donkin, & Heathcote, 2008), but cannot easily be accommodated by other theories. We show that SAMBA provides an accurate, parameter-free account of the dissociation that emerges from the architecture of the model and the physical attributes of the stimuli, rather than through numerical adjustment. This violation of the pervasive monotonic relationship between RT and accuracy has implications for model development, which are discussed.
Publisher: Hindawi Limited
Date: 2012
DOI: 10.1155/2012/625476
Abstract: Dynamic balancing of game difficulty can help cater for different levels of ability in players. However, performance in some game tasks depends on not only the player's ability but also their desire to take risk. Taking or avoiding risk can offer players its own reward in a game situation. Furthermore, a game designer may want to adjust the mechanics differently for a risky, high ability player, as opposed to a risky, low ability player. In this work, we describe a novel modelling technique known as particle filtering which can be used to model various levels of player ability while also considering the player's risk profile. We demonstrate this technique by developing a game challenge in which players must decide between a number of possible alternatives, only one of which is correct. Risky players respond faster but with more likelihood of failure. Cautious players wait longer for more evidence, increasing their likelihood of success, but at the expense of game time. By gathering empirical data for the player's response time and accuracy, we develop particle filter models. These models can then be used in real-time to categorise players into different ability and risk-taking levels.
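A minimal bootstrap particle filter in the spirit of the technique described (the latent-ability model and all numbers are invented for illustration, not the authors' game data): ability drifts as a random walk, and the filter tracks it from binary success/failure outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Latent player ability drifts as a random walk; each outcome is
# success with probability sigmoid(ability).
T = 300
ability = np.cumsum(rng.normal(0, 0.05, T)) + 1.0
outcomes = rng.binomial(1, sigmoid(ability))

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 2000
particles = rng.normal(1.0, 1.0, N)  # prior over ability (assumed)
estimates = []
for y in outcomes:
    particles = particles + rng.normal(0, 0.05, N)  # propagate (random walk)
    p = sigmoid(particles)
    w = p if y == 1 else 1.0 - p                    # likelihood weights
    w = w / w.sum()
    idx = rng.choice(N, size=N, p=w)                # multinomial resampling
    particles = particles[idx]
    estimates.append(particles.mean())

print(round(estimates[-1], 2), round(ability[-1], 2))
```

In a game setting the same loop would run online, with the particle cloud summarising the current belief about the player's level (and, with an extra state dimension, their risk profile).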
Publisher: Elsevier BV
Date: 12-2023
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0037771
Abstract: Jones and Dzhafarov (2014) provided a useful service in pointing out that some assumptions of modern decision-making models require additional scrutiny. Their main result, however, is not surprising: If an infinitely complex model were created by assigning its parameters arbitrarily flexible distributions, this new model would be able to fit any observed data perfectly. Such a hypothetical model would be unfalsifiable. This is exactly why such models have never been proposed in over half a century of model development in decision making. Additionally, the main conclusion drawn from this result, that the success of existing decision-making models can be attributed to assumptions about parameter distributions, is wrong. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Publisher: American Psychological Association (APA)
Date: 2005
DOI: 10.1037/0278-7393.31.4.587
Abstract: Investigations of decision making have typically assumed stationarity, even though commonly observed "context effects" are dynamic by definition. Mirror effects are an important class of context effects that can be explained by changes in participants' decision criteria. When easy and difficult conditions are blocked alternately and a mirror effect is observed, participants must repeatedly change their decision criteria. The authors investigated the time course of these criterion changes and observed the buildup of mirror effects on a trial-by-trial basis. The data are consistent with slow, systematic changes in decision criteria that lag behind stimulus changes. The length of this lag is considerable: analysis of a simple dynamic signal-detection model suggests participants take an average of around 14 trials to adjust to new decision environments. This trial-level measurement of experimentally induced changes has implications for traditional blockwise analyses of data and for models of decision making.
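The lag can be illustrated with a toy exponential-adjustment rule; the per-trial rate of 0.15 is an assumption chosen so that covering 90% of a criterion shift takes about 15 trials, in the ballpark of the ~14-trial average reported. This is not the authors' model, just the shape of the dynamic it describes.

```python
# Exponential trial-by-trial criterion adjustment toward a new target,
# a minimal stand-in for slow, systematic criterion change.
alpha = 0.15                      # per-trial adjustment rate (hypothetical)
target_old, target_new = 0.0, 1.0

c = target_old
trials = 0
while abs(c - target_new) > 0.1 * abs(target_new - target_old):
    c += alpha * (target_new - c)  # move a fixed fraction of the remaining gap
    trials += 1

print(trials)  # trials to cover 90% of the shift: 15
```

Because each trial closes a fixed fraction of the remaining gap, the criterion lags the environment for many trials after a block boundary, which is exactly why blockwise analyses can blur the transition.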
Publisher: American Psychological Association (APA)
Date: 04-2022
DOI: 10.1037/REV0000351
Abstract: Many psychological experiments have subjects repeat a task to gain the statistical precision required to test quantitative theories of psychological performance. In such experiments, time-on-task can have sizable effects on performance, changing the psychological processes under investigation. Most research has either ignored these changes, treating the underlying process as static, or sacrificed some psychological content of the models for statistical simplicity. We use particle Markov chain Monte Carlo methods to study psychologically plausible time-varying changes in model parameters. Using data from three highly cited experiments, we find strong evidence in favor of a hidden Markov switching process as an explanation of time-varying effects. This embodies the psychological assumption of "regime switching," with subjects alternating between different cognitive states representing different modes of decision-making. The switching model explains key long- and short-term dynamic effects in the data. The central idea of our approach can be applied quite generally to quantitative psychological theories, beyond the models and datasets that we investigate. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Center for Open Science
Date: 24-02-2023
Abstract: Threatened species monitoring can produce enormous quantities of acoustic and visual recordings which need to be searched for animal detections. Coding the data is extremely time-consuming for humans, and even though machine algorithms are emerging as a useful tool to tackle this daunting task, they too require large amounts of known detections for training. Citizen scientists are often recruited via crowd-sourcing to assist. However, the results of their coding can be difficult to interpret because citizen scientists lack comprehensive training and typically each code only a small fraction of the full data set. Competence may vary between citizen scientists, but without knowing the ground truth of the data set, it is difficult to identify which scientists are most competent. We used a quantitative cognitive model, cultural consensus theory, to analyze both empirical and simulated data from a crowdsourced analysis of audio recordings of Australian frogs. Several hundred citizen scientists were asked whether the calls of nine frog species were present on 1,260 brief audio recordings, though most only coded a small fraction of these recordings. Through modeling, characteristics of both the scientist cohort and the recordings were estimated, to identify trends in competence in the former and frog calls in the latter. We then compared the model's output to expert coding of the recordings and found agreement between the cohort's consensus and the expert evaluation. This finding adds to the evidence that crowdsourced analyses can be utilised to understand large-scale datasets, even when the ground truth of the dataset is unknown. The model-based analysis provides a promising tool to screen large data sets prior to investing expert time, and to allocate resources more efficiently when recruiting citizen scientists or training classification algorithms.
Publisher: American Psychological Association (APA)
Date: 2016
DOI: 10.1037/XLM0000188
Abstract: Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we asked people for prior and posterior inferences about the probability that 1 of 2 coins would generate certain outcomes. Most participants' inferences were inconsistent with Bayes' rule. Only in the simplest version of the task did the majority of participants adhere to Bayes' rule, but even in that case, there was a significant proportion that failed to do so. The current results highlight the importance of close quantitative comparisons between Bayesian inference and human data at the individual-subject level when evaluating models of cognition.
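Bayes' rule for a two-coin task of this kind reduces to a few lines; the coin biases and observations below are invented for illustration, not taken from the experiments.

```python
from fractions import Fraction

# Two coins: one fair, one biased toward heads. Prior is 50/50 over which coin.
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}

def posterior_after(flips, prior, p_heads):
    """Update the posterior over coins by Bayes' rule after each flip."""
    post = dict(prior)
    for f in flips:
        for c in post:
            post[c] *= p_heads[c] if f == "H" else 1 - p_heads[c]
        z = sum(post.values())
        post = {c: v / z for c, v in post.items()}
    return post

post = posterior_after(["H", "H"], prior, p_heads)
print(post["biased"])  # 9/13
```

After two heads the posterior for the biased coin is (1/2)(9/16) over (1/2)(1/4) + (1/2)(9/16) = 9/13, the normative answer against which participants' judgments can be compared.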
Publisher: Springer Science and Business Media LLC
Date: 03-02-2012
DOI: 10.3758/S13423-012-0216-Z
Abstract: Decisions between multiple alternatives typically conform to Hick's Law: Mean response time increases log-linearly with the number of choice alternatives. We recently demonstrated context effects in Hick's Law, showing that patterns of response latency and choice accuracy were different for easy versus difficult blocks. The context effect explained previously observed discrepancies in error rate data and provided a new challenge for theoretical accounts of multialternative choice. In the present article, we propose a novel approach to modeling context effects that can be applied to any account that models the speed-accuracy trade-off. The core element of the approach is "optimality" in the way an experimental participant might define it: minimizing the total time spent in the experiment, without making too many errors. We show how this approach can be included in an existing Bayesian model of choice and highlight its ability to fit previous data as well as to predict novel empirical context effects. The model is shown to provide better quantitative fits than a more flexible heuristic account.
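Hick's Law itself is compact enough to state as code; the intercept and slope below are illustrative values, not fits to any dataset.

```python
import math

# Hick's Law: mean RT grows log-linearly with the number of alternatives N.
# a is a base (non-decision) time and b a per-bit cost; both are assumed here.
def hicks_rt(n, a=0.2, b=0.15):
    return a + b * math.log2(n + 1)

rts = [round(hicks_rt(n), 3) for n in (1, 2, 4, 8)]
print(rts)
```

Doubling the number of alternatives adds a roughly constant increment to mean RT, and the context effects described above amount to the intercept and slope shifting with block difficulty.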
Publisher: Elsevier BV
Date: 06-2013
Publisher: Elsevier BV
Date: 11-2008
DOI: 10.1016/J.COGPSYCH.2007.12.002
Abstract: We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows complete analytic solutions for choices between any number of alternatives. These solutions (and freely-available computer code) make the model easy to apply to both binary and multiple choice situations. Using data from five previously published experiments, we demonstrate that the LBA model successfully accommodates empirical phenomena from binary and multiple choice tasks that have proven difficult for other theoretical accounts. Our results are encouraging in a field beset by the tradeoff between complexity and completeness.
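A simulation sketch of the LBA race (uniform start points, trial-to-trial normal drift rates, deterministic linear growth to a common threshold); the parameter values are invented, and the paper's analytic likelihoods are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def lba_trial(v_means, b=1.0, A=0.5, s=0.3, t0=0.2):
    """One LBA trial: deterministic linear races to threshold b.
    Returns (choice, RT). All parameter values are illustrative."""
    k = rng.uniform(0, A, len(v_means))   # start points
    d = rng.normal(v_means, s)            # trial-specific drift rates
    with np.errstate(divide="ignore"):
        # Racers with non-positive drift never finish on this trial.
        t = np.where(d > 0, (b - k) / d, np.inf)
    winner = int(np.argmin(t))
    return winner, t0 + t[winner]

trials = [lba_trial([1.0, 0.6]) for _ in range(5000)]
choices = np.array([c for c, _ in trials])
acc = (choices == 0).mean()  # accumulator 0 has the larger mean drift
print(round(acc, 2))
```

Because the within-trial dynamics are linear and deterministic, the finishing-time distribution of each racer has a closed form, which is what makes the model's full analytic solutions possible.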
Publisher: American Psychological Association (APA)
Date: 05-2016
DOI: 10.1037/NEU0000257
Publisher: SAGE Publications
Date: 04-11-2017
Abstract: Parenthood is central to the personal and social identity of many people. For individuals with psychotic disorders, parenthood is often associated with formidable challenges. We aimed to identify predictors of adequate parenting among parents with psychotic disorders. Data pertaining to 234 parents with psychotic disorders living with dependent children were extracted from a population-based prevalence study, the 2010 second Australian national survey of psychosis, and analysed using confirmatory factor analysis. Parenting outcome was defined as quality of care of children, based on participant report and interviewer enquiry/exploration, and included level of participation, interest and competence in childcare during the last 12 months. Five hypothesis-driven latent variables were constructed and labelled psychosocial support, illness severity, substance abuse/dependence, adaptive functioning and parenting role. Importantly, 75% of participants were not identified as having any dysfunction in the quality of care provided to their child(ren). Severity of illness and adaptive functioning were reliably associated with quality of childcare. Psychosocial support, substance abuse/dependence and parenting role had an indirect relationship to the outcome variable via their association with severity of illness and/or adaptive functioning. The majority of parents in the current sample provided adequate parenting. However, greater symptom severity and poorer adaptive functioning ultimately leave parents with significant difficulties and in need of assistance to manage their parenting obligations. As symptoms and functioning can change episodically for people with psychotic illness, provision of targeted and flexible support that can deliver temporary assistance during times of need is necessary. This would maximise the quality of care provided to vulnerable children, with potential long-term benefits.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 02-04-2014
DOI: 10.1167/14.4.2
Publisher: Springer Science and Business Media LLC
Date: 25-11-2005
Publisher: Springer Science and Business Media LLC
Date: 30-10-2015
Publisher: Informa UK Limited
Date: 07-2004
Publisher: Springer Science and Business Media LLC
Date: 02-2006
DOI: 10.3758/BF03193827
Abstract: When people switch between two tasks, their performance on each is worse than when they perform that task in isolation. One theory of this "switch cost" is the failure-to-engage (FTE) theory, which posits that observed responses are a simple mixture of prepared and unprepared response strategies. The probability that participants use prepared processes can be manipulated experimentally (e.g., by changing preparation time). The FTE theory is a binary mixture model and therefore makes a strong prediction about the existence of fixed points in response time distributions. We found evidence contradicting this prediction, using data from 20 participants in a standard task-switching paradigm. In this article, we examine reasons for the failure of the FTE theory, and we demonstrate that a generalized version of FTE theory accommodates our data.
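The fixed-point prediction of binary mixture models can be demonstrated directly: every mixture of two fixed densities passes through the point where the component densities cross, regardless of the mixing probability. Normal components are used below purely to keep the arithmetic transparent (real RT distributions are skewed).

```python
import numpy as np
from scipy import stats

# Two component RT densities: "prepared" (fast) and "unprepared" (slow).
f_prep = lambda t: stats.norm.pdf(t, 0.4, 0.08)
f_unprep = lambda t: stats.norm.pdf(t, 0.7, 0.08)

# Mixtures p*f_prep + (1-p)*f_unprep for several preparation probabilities.
t = np.linspace(0.2, 0.9, 701)
densities = [p * f_prep(t) + (1 - p) * f_unprep(t) for p in (0.2, 0.5, 0.8)]

# Equal-variance normals cross midway between the means; every mixture
# takes the same value there: the fixed point the FTE theory predicts.
cross = 0.55
vals = [np.interp(cross, t, d) for d in densities]
print(np.round(vals, 4))
```

Testing for such a fixed point across preparation-time conditions is how the binary-mixture prediction of FTE theory can be checked against empirical RT distributions.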
Publisher: Springer Science and Business Media LLC
Date: 08-11-2019
Publisher: SAGE Publications
Date: 2007
DOI: 10.1111/J.1467-9280.2007.01846.X
Abstract: In dynamic decision-making environments, observers must continuously adjust their decision-making strategies. Previous research has focused on internal fluctuations in decision mechanisms, without regard to how these changes are induced by environmental changes. We developed a simple paradigm in which we manipulated task difficulty, thereby inducing changes in decision processes. We applied this paradigm to recognition memory, manipulating task difficulty by changing the similarity of lures to targets. More difficult decision environments caused participants to make more careful decisions, but these changes did not appear immediately. We propose a simple theoretical account for these data, using a dynamic version of signal detection theory fitted to individual subjects. Our model represents a significant departure from existing models because it incorporates subject-controlled parameters that may adjust over time in response to environmental changes.
Publisher: Elsevier BV
Date: 12-2016
DOI: 10.1016/J.CORTEX.2016.09.023
Abstract: Functional neuroimaging data indicate the dorsal striatum is engaged when people are required to vary the cautiousness of their decisions, by emphasizing the speed or accuracy of responding in laboratory-based decision tasks. However, the functional contribution of the striatum to decision making is unknown. In the current study we tested patients with focal ischemic lesions of the dorsal striatum and matched non-lesion control participants on a speed-accuracy tradeoff (SAT) task. Analysis using a computational model of response selection in a competitive and time-pressured context indicated that the decisions of patients with striatal lesions were less cautious than those of matched controls. This deficit was most prominent when the accuracy of decisions was emphasized. The results are consistent with the hypothesis that the striatum plays an important role in strategically setting response caution, an essential function for flexible behavior.
Publisher: Public Library of Science (PLoS)
Date: 08-10-2012
Publisher: Center for Open Science
Date: 29-01-2023
Abstract: Predictive inference is an important cognitive function, and there are many tasks that measure it and the error-driven learning that underpins it. Context is a key contributor to this learning, with different contexts requiring different learning strategies. A factor not often considered, however, is the conditions and time-frame over which a model of that context is developed. This study required participants to learn under two changing, unsignalled contexts with opposing optimal responses to large errors: change-points and oddballs. The changes in context occurred under two task structures: (1) a fixed task structure, with consecutive, short blocks of each context, and (2) a random task structure, with the context randomly selected for each new block. Through this design we examined the conditions under which learning contexts can be differentiated from each other, and the time-frame over which that learning occurs. We found that participants responded in accordance with the optimal strategy for each context, and did so within a short period of time, after very few meaningful errors. We further found that responses became more optimal throughout the experiment, but only for periods of context consistency (the fixed task structure), and only if the first experienced context involved meaningful errors. These results show that people will continue to refine their model of the environment across multiple trials and blocks, leading to more context-appropriate responding, but only in certain conditions. This highlights the importance of considering the task structure, and the time-frames of model development that different structures may encourage. This has implications for interpreting differences in learning across different contexts.
Publisher: Springer Science and Business Media LLC
Date: 06-2000
DOI: 10.3758/BF03212979
Abstract: The power function is treated as the law relating response time to practice trials. However, the evidence for a power law is flawed, because it is based on averaged data. We report a survey that assessed the form of the practice function for individual learners and learning conditions in paradigms that have shaped theories of skill acquisition. We fit power and exponential functions to 40 sets of data representing 7,910 learning series from 475 subjects in 24 experiments. The exponential function fit better than the power function in all the unaveraged data sets. Averaging produced a bias in favor of the power function. A new practice function based on the exponential, the APEX function, fit better than a power function with an extra, preexperimental practice parameter. Clearly, the best candidate for the law of practice is the exponential or APEX function, not the generally accepted power function. The theoretical implications are discussed.
Publisher: Springer Science and Business Media LLC
Date: 27-11-2017
DOI: 10.1038/S41598-017-16694-7
Abstract: We investigate a question relevant to the psychology and neuroscience of perceptual decision-making: whether decisions are based on steadily accumulating evidence, or only on the most recent evidence. We report an empirical comparison between two of the most prominent examples of these theoretical positions, the diffusion model and the urgency-gating model, via model-based qualitative and quantitative comparisons. Our findings support the predictions of the diffusion model over the urgency-gating model, and therefore, the notion that evidence accumulates without much decay. Gross qualitative patterns and fine structural details of the data are inconsistent with the notion that decisions are based only on the most recent evidence. More generally, we discuss some strengths and weaknesses of scientific methods that investigate quantitative models by distilling the formal models to qualitative predictions.
Publisher: Frontiers Media SA
Date: 2013
Publisher: Elsevier BV
Date: 05-2013
Publisher: Society for Neuroscience
Date: 30-11-2011
DOI: 10.1523/JNEUROSCI.2924-11.2011
Abstract: Trial-to-trial variability in decision making can be caused by variability in information processing as well as by variability in response caution. In this paper, we study which neural components code for trial-to-trial adjustments in response caution using a new computational approach that quantifies response caution on a single-trial level. We found that the frontostriatal network updates the amount of response caution. In particular, when human participants were required to respond quickly, we found a positive correlation between trial-to-trial fluctuations in response caution and the hemodynamic response in the presupplementary motor area and dorsal anterior cingulate. In contrast, on trials that required a change from a speeded response mode to a more accurate response mode or vice versa, we found a positive correlation between response caution and hemodynamic response in the anterior cingulate proper. These results indicate that for each decision, response caution is set through corticobasal ganglia functioning, but that individual choices differ according to the mechanisms that trigger changes in response caution.
Publisher: Frontiers Media SA
Date: 2010
Publisher: Proceedings of the National Academy of Sciences
Date: 23-08-2010
Abstract: When people make decisions they often face opposing demands for response speed and response accuracy, a process likely mediated by response thresholds. According to the striatal hypothesis, people decrease response thresholds by increasing activation from cortex to striatum, releasing the brain from inhibition. According to the STN hypothesis, people decrease response thresholds by decreasing activation from cortex to subthalamic nucleus (STN); a decrease in STN activity is likewise thought to release the brain from inhibition and result in responses that are fast but error-prone. To test these hypotheses—both of which may be true—we conducted two experiments on perceptual decision making in which we used cues to vary the demands for speed vs. accuracy. In both experiments, behavioral data and mathematical model analyses confirmed that instruction from the cue selectively affected the setting of response thresholds. In the first experiment we used ultra-high-resolution 7T structural MRI to locate the STN precisely. We then used 3T structural MRI and probabilistic tractography to quantify the connectivity between the relevant brain areas. The results showed that participants who flexibly change response thresholds (as quantified by the mathematical model) have strong structural connections between presupplementary motor area and striatum. This result was confirmed in an independent second experiment. In general, these findings show that individual differences in elementary cognitive tasks are partly driven by structural differences in brain connectivity. Specifically, these findings support a cortico-striatal control account of how the brain implements adaptive switches between cautious and risky behavior.
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0026065
Abstract: Two Bayesian observer models were recently proposed to account for data from the Eriksen flanker task, in which flanking items interfere with processing of a central target. One model assumes that interference stems from a perceptual bias to process nearby items as if they are compatible, and the other assumes that the interference is due to spatial uncertainty in the visual system (Yu, Dayan, & Cohen, 2009). Both models were shown to produce one aspect of the empirical data, the below-chance dip in accuracy for fast responses to incongruent trials. However, the models had not been fit to the full set of behavioral data from the flanker task, nor had they been contrasted with other models. The present study demonstrates that neither model can account for the behavioral data as well as a comparison spotlight-diffusion model. Both observer models missed key aspects of the data, challenging the validity of their underlying mechanisms. Analysis of a new hybrid model showed that the shortcomings of the observer models stem from their assumptions about visual processing, not the use of a Bayesian decision process.
Publisher: Springer Science and Business Media LLC
Date: 09-10-2013
DOI: 10.3758/S13428-012-0264-3
Abstract: Many psychological experiments require participants to complete lots of trials in a monotonous task, which often induces boredom. An increasingly popular approach to alleviate such boredom is to incorporate gamelike features into standard experimental tasks. Games are assumed to be interesting and, hence, motivating, and better motivated participants might produce better data (with fewer lapses in attention and greater accuracy). Despite its apparent prevalence, the assumption that gamelike features improve data is almost completely untested. We test this assumption by presenting a choice task and a change detection task in both gamelike and standard forms. Response latency, accuracy, and overall task performance were unchanged by gamelike features in both experiments. We present a novel cognitive model for the choice task, based on particle filtering, to decorrelate the dependent variables and measure performance in a more psychologically meaningful manner. The model-based analyses are consistent with the hypothesis that gamelike features did not alter cognition. A postexperimental questionnaire indicated that the gamelike version provided a more positive and enjoyable experience for participants than the standard task, even though this subjective experience did not translate into data effects. Although our results hold only for the two experiments examined, the gamelike features we incorporated into both tasks were typical of, and at least as salient and interesting as those usually used by, experimental psychologists. Our results suggest that modifying an experiment to include gamelike features, while leaving the basic task unchanged, may not improve the quality of the data collected, but it may provide participants with a better experimental experience.
Publisher: American Psychological Association (APA)
Date: 10-2015
DOI: 10.1037/A0039656
Abstract: Trueblood, Brown, and Heathcote (2014) developed a new model, called the multiattribute linear ballistic accumulator (MLBA), to explain contextual preference reversals in multialternative choice. MLBA was shown to provide good accounts of human behavior through both qualitative analyses and quantitative fitting of choice data. Tsetsos, Chater, and Usher (2015) investigated the ability of MLBA to simultaneously capture 3 prominent context effects (attraction, compromise, and similarity). They concluded that MLBA must set a "fine balance" of competing forces to account for all 3 effects simultaneously and that its predictions are sensitive to the position of the stimuli in the attribute space. Through a new experiment, we show that the 3 effects are very fragile and that only a small subset of people shows all 3 simultaneously. Thus, the predictions that Tsetsos et al. generated from the MLBA model turn out to closely match real data in a new experiment. Support for these predictions provides strong evidence for the MLBA. A corollary is that a model that can "robustly" capture all 3 effects simultaneously is not necessarily a good model. Rather, a good model captures patterns found in human data, but cannot accommodate patterns that are not found.
Publisher: American Psychological Association (APA)
Date: 02-2020
DOI: 10.1037/XLM0000725
Abstract: Theories of perceptual decision making have been dominated by the idea that evidence accumulates in favor of different alternatives until some fixed threshold amount is reached, which triggers a decision. Recent theories have suggested that these thresholds may not be fixed during each decision but change as time passes. These collapsing thresholds can improve performance in particular decision environments, but reviews of data from typical decision-making paradigms have failed to support collapsing thresholds. We designed three experiments to test collapsing threshold assumptions in decision environments specifically tailored to make them optimal. An emphasis on decision speed encouraged the adoption of collapsing thresholds (most strongly through the use of response deadlines, but also, to a lesser extent, through instruction), but setting an explicit goal of reward rate optimality through both instructions and task design did not. Our results suggest that collapsing thresholds models of decision-making are inconsistent with human behaviour even in some situations where there are normative motivations for these models. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/A0032222
Publisher: Elsevier BV
Date: 08-2017
Publisher: SAGE Publications
Date: 22-04-2013
Abstract: Context effects—preference changes that depend on the availability of other options—have attracted a great deal of attention among consumer researchers studying high-level decision tasks. In the experiments reported here, we showed that these effects also arise in simple perceptual-decision-making tasks. This finding casts doubt on explanations limited to consumer choice and high-level decisions, and it indicates that context effects may be amenable to a general explanation at the level of the basic decision process. We demonstrated for the first time that three important context effects from the preferential-choice literature—similarity, attraction, and compromise effects—all occurred within a single perceptual-decision task. Not only do our results challenge previous explanations for context effects proposed by consumer researchers, but they also challenge the choice rules assumed in theories of perceptual decision making.
Publisher: Society for Neuroscience
Date: 06-06-2012
Publisher: Wiley
Date: 18-01-2012
DOI: 10.1111/J.1551-6709.2011.01221.X
Abstract: For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates-sometimes error rates increase with the number of choice alternatives, and sometimes they are constant. We provide evidence from two experiments that error rates are mostly independent of the number of choice alternatives, unless context effects induce participants to trade speed for accuracy across conditions. Error rate data have previously been used to discriminate between competing theoretical accounts of Hick's Law, and our results question the validity of those conclusions. We show that a previously dismissed optimal observer model might provide a parsimonious account of both response time and error rate data. The model suggests that people approximate Bayesian inference in multi-alternative choice, except for some perceptual limitations.
Publisher: Elsevier BV
Date: 04-2011
Publisher: Elsevier BV
Date: 10-2010
Publisher: Elsevier BV
Date: 09-2021
Publisher: Center for Open Science
Date: 25-06-2018
Abstract: How does the process of information transmission affect the cultural or linguistic products that emerge? This question is often studied experimentally and computationally via iterated learning: a procedure in which participants learn from previous participants in a chain. Iterated learning is a powerful tool because, when all participants share the same priors, the stationary distributions of the iterated learning chains reveal those priors. In many situations, however, it is unreasonable to assume that all participants share the same prior beliefs. We present four simulation studies and one experiment demonstrating that when the population of learners is heterogeneous, the behavior of an iterated learning chain can be unpredictable, and is often systematically distorted by the learners with the most extreme biases. This results in group-level outcomes that reflect neither the behavior of any individuals within the population nor the overall population average. We discuss implications for the use of iterated learning as a methodological tool as well as for the processes that might have shaped cultural and linguistic evolution in the real world.
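The distortion described above can be sketched in a toy iterated learning chain. This is only an illustrative sketch, not the paper's simulations: each generation is a Beta-Bernoulli learner who observes the previous learner's 20 binary productions and produces new data from its posterior mean; all prior values and chain lengths are arbitrary assumptions.

```python
import random

random.seed(4)

def next_generation(data, alpha, beta, n_out=20):
    """One learner: Beta(alpha, beta) prior, posterior-mean production."""
    heads = sum(data)
    p = (alpha + heads) / (alpha + beta + len(data))
    return [1 if random.random() < p else 0 for _ in range(n_out)]

def run_chain(priors, generations=50):
    """Run a chain; each generation draws a learner type at random."""
    data = [1 if random.random() < 0.5 else 0 for _ in range(20)]
    for _ in range(generations):
        a, b = random.choice(priors)
        data = next_generation(data, a, b)
    return sum(data) / len(data)

homogeneous = [(1, 1)]               # everyone shares a flat prior
heterogeneous = [(1, 1), (50, 1)]    # half the learners have an extreme bias

homog = sum(run_chain(homogeneous) for _ in range(200)) / 200
het = sum(run_chain(heterogeneous) for _ in range(200)) / 200
# Homogeneous chains average near the flat prior's mean; heterogeneous
# chains are dragged toward the extreme learners' bias.
print(round(homog, 2), round(het, 2))
```

The heterogeneous outcome reflects neither learner type's prior mean nor the simple population average, which is the group-level distortion the abstract describes.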
Publisher: Elsevier BV
Date: 02-2009
DOI: 10.1016/J.COGPSYCH.2008.09.002
Abstract: When required to predict sequential events, such as random coin tosses or basketball free throws, people reliably use inappropriate strategies, such as inferring temporal structure when none is present. We investigate the ability of observers to predict sequential events in dynamically changing environments, where there is an opportunity to detect true temporal structure. In two experiments we demonstrate that participants often make correct statistical decisions when asked to infer the hidden state of the data generating process. However, when asked to make predictions about future outcomes, accuracy decreased even though normatively correct responses in the two tasks were identical. A particle filter model accounts for all data, describing performance in terms of a plausible psychological process. By varying the number of particles, and the prior belief about the probability of a change occurring in the data generating process, we were able to model most of the observed individual differences.
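The particle-filter idea above can be sketched with a minimal bootstrap filter tracking the hidden bias of a coin that can change at any trial. This is a generic sketch under assumed parameter values (1,000 particles, hazard rate 0.05), not the paper's fitted model.

```python
import random

random.seed(0)

N_PARTICLES, HAZARD = 1000, 0.05

def step(particles, obs):
    """One filtering step: transition, weight, resample."""
    # 1. Transition: each particle's bias may reset with probability HAZARD.
    particles = [random.random() if random.random() < HAZARD else p
                 for p in particles]
    # 2. Weight by the likelihood of the observation (1 = heads).
    weights = [p if obs == 1 else 1 - p for p in particles]
    # 3. Resample proportional to weight.
    return random.choices(particles, weights=weights, k=N_PARTICLES)

particles = [random.random() for _ in range(N_PARTICLES)]
# 100 trials from a heads-biased coin, then a change-point to tails-biased.
for true_p in [0.8] * 100 + [0.2] * 100:
    obs = 1 if random.random() < true_p else 0
    particles = step(particles, obs)

estimate = sum(particles) / N_PARTICLES
print(round(estimate, 2))  # posterior mean now tracks the post-change bias
```

Reducing the number of particles or misspecifying the hazard rate degrades tracking, which is the mechanism the paper uses to capture individual differences.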
Publisher: Springer Science and Business Media LLC
Date: 11-03-2016
Publisher: Center for Open Science
Date: 26-07-2022
Abstract: Joint modelling of behaviour and neural activation has the potential to provide significant advances in linking brain and behaviour. However, methods of joint modelling have been limited by difficulties in estimation, often due to high dimensionality and simultaneous estimation challenges. In the current article, we propose a method of model estimation which draws on current state-of-the-art Bayesian hierarchical modelling techniques and uses factor analysis as a means of dimensionality reduction to provide further information on which to make inference. The method uses a particle metropolis within Gibbs sampler (PMwG; Gunawan, Hawkins, Tran, Kohn, & Brown, 2020), where the factor structure is estimated within the Gibbs step for the group level. We show the significant dimensionality reduction gained by factor analysis in the Gibbs step of the PMwG, evidence for parameter recovery, a variety of factor loading constraints which can be used for different purposes and research questions, as well as two applications of the method to previously analysed data. This method represents a flexible and usable approach with interpretable outcomes, which relies on data-driven analysis as opposed to hypothesis-driven methods often used in joint modelling. Although we focus on joint modelling methods, this model based estimation approach could be used for any high dimensional modelling problem. We provide open source code and accompanying tutorial documentation to make the method accessible to any researchers.
Publisher: Elsevier BV
Date: 05-2015
DOI: 10.1016/J.CORTEX.2014.11.019
Abstract: A recent 'crisis of confidence' has emerged in the empirical sciences. Several studies have suggested that questionable research practices (QRPs) such as optional stopping and selective publication may be relatively widespread. These QRPs can result in a high proportion of false-positive findings, decreasing the reliability and replicability of research output. A potential solution is to register experiments prior to data acquisition and analysis. In this study we attempted to replicate studies that relate brain structure to behavior and cognition. These structural brain-behavior (SBB) correlations occasionally receive much attention in science and in the media. Given the impact of these studies, it is important to investigate their replicability. Here, we attempt to replicate five SBB correlation studies comprising a total of 17 effects. To prevent the impact of QRPs we employed a preregistered, purely confirmatory replication approach. For all but one of the 17 findings under scrutiny, confirmatory Bayesian hypothesis tests indicated evidence in favor of the null hypothesis ranging from anecdotal (Bayes factor < 3) to strong (Bayes factor > 10). In several studies, effect size estimates were substantially lower than in the original studies. To our knowledge, this is the first multi-study confirmatory replication of SBB correlations. With this study, we hope to encourage other researchers to undertake similar replication attempts.
Publisher: Center for Open Science
Date: 09-08-2022
Abstract: Discrete choice (DCE) and rating scale experiments (RSE) are commonly applied procedures for eliciting preference judgments in a plethora of applied settings such as consumer choices, health care, and transport economics. An almost universal assumption is that actual "ground truth" preferences do not depend on which elicitation procedure is used. It is usually not possible to test this assumption, because typical studies feature response options for which there is no objectively correct response. To make progress on testing this assumption, we conducted a perceptual discrimination experiment where response options varied on a single attribute -- stimulus saturation level -- with a known objectively correct response. We had the same participants complete both a choice task (CT) and rating scale (RS) version of the experiment, allowing a direct examination of the assumption of a common representation. Our CT featured many characteristics that define a DCE, however, in order to have a known objectively correct response, it also differed in a few important ways. To test the assumption of a common representation, we developed a cognitive model with a response mechanism for both CT and RS. This enabled us to compare a model version that featured one shared latent stimulus representation across CT and RS versus a version which featured separate representations. Our results support the assumption that a single internal state supports both CT and RS responses, and also suggest that the CT method might provide more sensitive measurement of internal states than the RS method.
Publisher: Springer Science and Business Media LLC
Date: 12-2010
DOI: 10.3758/PBR.17.6.763
Publisher: Springer Science and Business Media LLC
Date: 05-06-2017
DOI: 10.3758/S13421-017-0718-Z
Abstract: Constant decision-making underpins much of daily life, from simple perceptual decisions about navigation through to more complex decisions about important life events. At many scales, a fundamental task of the decision-maker is to balance competing needs for caution and urgency: fast decisions can be more efficient, but also more often wrong. We show how a single mathematical framework for decision-making explains the urgency/caution balance across decision-making at two very different scales. This explanation has been applied at the level of neuronal circuits (on a time scale of hundreds of milliseconds) through to the level of stable personality traits (time scale of years).
Publisher: Springer Science and Business Media LLC
Date: 02-2003
DOI: 10.3758/BF03195493
Abstract: We examine recent concerns that averaged learning curves can present a distorted picture of individual learning. Analyses of practice curve data from a range of paradigms demonstrate that such concerns are well founded for fits of power and exponential functions when the arithmetic average is computed over participants. We also demonstrate that geometric averaging over participants does not, in general, avoid distortion. By contrast, we show that block averages of individual curves and similar smoothing techniques cause little or no distortion of functional form, while still providing the noise reduction benefits that motivate the use of averages. Our analyses are concerned mainly with the effects of averaging on the fit of exponential and power functions, but we also define general conditions that must be met by any set of functions to avoid distortion from averaging.
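The averaging distortion can be shown in a noise-free sketch (illustrative only, with arbitrary rate values): each simulated "participant" follows an exact exponential practice curve, so the log of each curve is perfectly linear in trial number, yet the arithmetic average of the curves is not exponential.

```python
import math

rates = [0.02, 0.05, 0.10, 0.20]   # assumed per-participant learning rates
trials = list(range(1, 101))
avg = [sum(math.exp(-r * n) for r in rates) / len(rates) for n in trials]

def exp_sse(ys):
    """Residual SSE of the best exponential (log-linear) fit to ys."""
    logs = [math.log(y) for y in ys]
    n = len(trials)
    mx, my = sum(trials) / n, sum(logs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(trials, logs)) \
        / sum((x - mx) ** 2 for x in trials)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(trials, logs))

single = [math.exp(-0.05 * n) for n in trials]
# An individual curve is fit perfectly; the arithmetic average is not.
print(exp_sse(single) < 1e-20, exp_sse(avg) > 0.01)
```

The systematic misfit of the averaged curve, despite zero noise, is the distortion of functional form that the abstract attributes to arithmetic averaging over participants.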
Publisher: Center for Open Science
Date: 30-08-2022
Abstract: Collaboration in shared environments requires human agents to coordinate their behaviour according to the machines’ actions. In this study, we compared the performance and behaviour of Human-Machine (HM) and Human-Human (HH) teams. While HH teaming behaviour is sensitive to collaborative contexts, little is known about HM teaming behaviour. Furthermore, teaming behaviour may impact the team’s Joint Capacity – the team’s ability to handle teamwork processes and task demands. To assess teaming behaviour at every moment of a trial we used three distinct spatiotemporal measures (Momentary Distance, Highly Correlated Segments, and Running Correlation). To assess the team’s joint performance, we adopted the Capacity Coefficient (Townsend & Nozawa, 1995). For both HH and HM teams, behavioural measures predicted Joint Capacity. HH teams demonstrated greater performance and less synchronous behaviour than HM teams. The reduced synchrony of HH teams likely improved their performance as they could complement each other’s behaviour rather than duplicate inefficiencies.
Publisher: American Psychological Association (APA)
Date: 11-2013
DOI: 10.1037/A0030543
Abstract: The cognitive concept of response inhibition can be measured with the stop-signal paradigm. In this paradigm, participants perform a 2-choice response time (RT) task where, on some of the trials, the primary task is interrupted by a stop signal that prompts participants to withhold their response. The dependent variable of interest is the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Based on the horse race model (Logan & Cowan, 1984), several methods have been developed to estimate SSRTs. None of these approaches allow for the accurate estimation of the entire distribution of SSRTs. Here we introduce a Bayesian parametric approach that addresses this limitation. Our method is based on the assumptions of the horse race model and rests on the concept of censored distributions. We treat response inhibition as a censoring mechanism, where the distribution of RTs on the primary task (go RTs) is censored by the distribution of SSRTs. The method assumes that go RTs and SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to obtain posterior distributions for the model parameters. The method can be applied to individual as well as hierarchical data structures. We present the results of a number of parameter recovery and robustness studies and apply our approach to published data from a stop-signal experiment.
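The horse-race censoring mechanism can be sketched by forward simulation. This is not the paper's Bayesian estimator, only an illustration of its core assumption: on a stop trial the go response is observed only if the go runner finishes before the stop runner, and all ex-Gaussian parameter values below are arbitrary.

```python
import random

random.seed(2)

def ex_gauss(mu, sigma, tau):
    """ex-Gaussian deviate: Gaussian plus exponential component."""
    return random.gauss(mu, sigma) + random.expovariate(1 / tau)

SSD = 0.25  # stop-signal delay (s), chosen for illustration
go_rts, responded, n = [], 0, 10000
for _ in range(n):
    go = ex_gauss(0.45, 0.05, 0.10)            # go-runner finishing time
    stop = SSD + ex_gauss(0.20, 0.03, 0.05)    # stop-runner finishing time
    if go < stop:                              # go wins: response escapes inhibition
        responded += 1
        go_rts.append(go)

p_respond = responded / n
mean_signal_respond_rt = sum(go_rts) / len(go_rts)
# Censoring truncates the slow tail of the go distribution, so
# signal-respond RTs are faster than the uncensored go mean (~0.55 s here).
print(round(p_respond, 2), round(mean_signal_respond_rt, 3))
```

Inverting this censoring relationship, with ex-Gaussian forms for both runners, is what lets the paper's MCMC approach recover the whole SSRT distribution rather than a single summary value.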
Publisher: Cambridge University Press (CUP)
Date: 08-2000
DOI: 10.1017/S0140525X00353353
Abstract: An extensive survey by Heathcote et al. (in press) found that the Law of Practice is closer to an exponential than a power form. We show that this result is hard to obtain for models using leaky competitive units when practice affects only the input, but that it can be accommodated when practice affects shunting self-excitation.
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0036801
Abstract: Decision-makers effortlessly balance the need for urgency against the need for caution. Theoretical and neurophysiological accounts have explained this tradeoff solely in terms of the quantity of evidence required to trigger a decision (the "threshold"). This explanation has also been used as a benchmark test for evaluating new models of decision making, but the explanation itself has not been carefully tested against data. We rigorously test the assumption that emphasizing decision speed versus decision accuracy selectively influences only decision thresholds. In data from a new brightness discrimination experiment we found that emphasizing decision speed over decision accuracy not only decreases the amount of evidence required for a decision but also decreases the quality of information being accumulated during the decision process. This result was consistent for 2 leading decision-making models and in a model-free test. We also found the same model-based results in archival data from a lexical decision task (reported by Wagenmakers, Ratcliff, Gomez, & McKoon, 2008) and new data from a recognition memory task. We discuss implications for theoretical development and applications.
Publisher: Springer Science and Business Media LLC
Date: 06-2003
DOI: 10.3758/BF03196105
Abstract: Myung, Kim, and Pitt (2000) demonstrated that simple power functions almost always provide a better fit to purely random data than do simple exponential functions. This result has important implications, because it suggests that high noise levels, which are common in psychological experiments, may cause a bias favoring power functions. We replicate their result and extend it by showing strong bias for more realistic sample sizes. We also show that biases occur for data that contain both random and systematic components, as may be expected in real data. We then demonstrate that these biases disappear for two- or three-parameter functions that include linear parameters (in at least one parameterization). Our results suggest that one should exercise caution when proposing simple power and exponential functions as models of learning. More generally, our results suggest that linear parameters should be estimated rather than fixed when one is comparing the fit of nonlinear models to noisy data.
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0025191
Abstract: Signal detection theory forms the core of many current models of cognition, including memory, choice, and categorization. However, the classic signal detection model presumes the a priori existence of fixed stimulus representations--usually Gaussian distributions--even when the observer has no experience with the task. Furthermore, the classic signal detection model requires the observer to place a response criterion along the axis of stimulus strength, and without theoretical elaboration, this criterion is fixed and independent of the observer's experience. We present a dynamic, adaptive model that addresses these 2 long-standing issues. Our model describes how the stimulus representation can develop from a rough subjective prior and thereby explains changes in signal detection performance over time. The model structure also provides a basis for the signal detection decision that does not require the placement of a criterion along the axis of stimulus strength. We present simulations of the model to examine its behavior and several experiments that provide data to test the model. We also fit the model to recognition memory data and discuss the role that feedback plays in establishing stimulus representations.
Publisher: American Psychological Association (APA)
Date: 21-04-2022
DOI: 10.1037/MET0000458
Abstract: Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach but its implementation remains out of reach for cognitive models evaluated in a Bayesian hierarchical framework, with the major hurdle being its prohibitive computational cost. To address this issue, we develop novel algorithms that make variational Bayes (VB) inference for hierarchical models feasible and computationally efficient for complex cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which gives good predictive density estimates. We thus develop a novel VB algorithm with Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision making research: what latent components of processing drive the ubiquitous speed-accuracy tradeoff? We demonstrate that CVVB strongly agrees with model comparison via marginal likelihood, yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, making it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible. To enhance the applicability of the algorithm, we provide Matlab code together with a user manual so users can easily implement VB and/or CVVB for the models considered in this article and their variants. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Elsevier BV
Date: 06-2011
DOI: 10.1016/J.COGNITION.2011.02.002
Abstract: Research in the field of mental chronometry and individual differences has revealed several robust regularities (Jensen, 2006). These include right-skewed response time (RT) distributions, the worst performance rule, correlations with general intelligence (g) that are more pronounced for RT standard deviations (RTSD) than they are for RT means (RTm), an almost perfect linear relation between individual differences in RTSD and RTm, linear Brinley plots, and stronger correlations between g and inspection time (IT) than between g and RTm. Here we show how all these regularities are manifestations of a single underlying relationship, when viewed through the lens of Ratcliff's diffusion model (Ratcliff, 1978; Ratcliff, Schmiedek, & McKoon, 2008). The single underlying relationship is between individual differences in general intelligence and individual differences in "drift rate", which is just the speed of information processing in Ratcliff's model. We also test and confirm a strong prediction of the diffusion model, namely that the worst performance rule generalizes to phenomena outside of the field of intelligence. Our approach provides an integrative perspective on intelligence findings.
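One regularity above, right-skewed RT distributions, falls directly out of the diffusion framework. A toy random-walk approximation (a sketch, not Ratcliff's full model; drift, bound, and step size are arbitrary) shows the skew: evidence accumulates with drift plus noise until a boundary is hit, and the resulting first-passage times have mean greater than median.

```python
import random
import statistics

random.seed(3)

def simulate_rt(v, a=1.0, dt=0.001):
    """First-passage time of a drifting random walk between -a and +a."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + random.gauss(0, dt ** 0.5)  # drift plus diffusion noise
        t += dt
    return t

rts = [simulate_rt(v=1.5) for _ in range(2000)]
mean_rt = statistics.mean(rts)
median_rt = statistics.median(rts)
# Right skew: the mean exceeds the median, as in empirical RT data.
print(mean_rt > median_rt)
```

In this framework a larger drift rate v yields faster, less variable RTs, which is the single parameter the paper links to individual differences in g.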
Publisher: Society for Neuroscience
Date: 11-02-2015
DOI: 10.1523/JNEUROSCI.2410-14.2015
Abstract: For nearly 50 years, the dominant account of decision-making holds that noisy information is accumulated until a fixed threshold is crossed. This account has been tested extensively against behavioral and neurophysiological data for decisions about consumer goods, perceptual stimuli, eyewitness testimony, memories, and dozens of other paradigms, with no systematic misfit between model and data. Recently, the standard model has been challenged by alternative accounts that assume that less evidence is required to trigger a decision as time passes. Such “collapsing boundaries” or “urgency signals” have gained popularity in some theoretical accounts of neurophysiology. Nevertheless, evidence in favor of these models is mixed, with support coming from only a narrow range of decision paradigms compared with a long history of support from dozens of paradigms for the standard theory. We conducted the first large-scale analysis of data from humans and nonhuman primates across three distinct paradigms using powerful model-selection methods to compare evidence for fixed versus collapsing bounds. Overall, we identified evidence in favor of the standard model with fixed decision boundaries. We further found that evidence for static or dynamic response boundaries may depend on specific paradigms or procedures, such as the extent of task practice. We conclude that the difficulty of selecting between collapsing and fixed bounds models has received insufficient attention in previous research, calling into question some previous results.
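The fixed-versus-collapsing-bound contrast in the abstract above can be sketched with the same kind of toy random-walk accumulator (illustrative only; no parameter here is fitted to the paper's data, and the linear collapse schedule is an arbitrary assumption).

```python
import random

random.seed(5)

def trial(collapse_rate, v=1.0, a0=1.0, dt=0.001, floor=0.2):
    """One decision; returns (RT, correct) for a bound a(t) = max(floor, a0 - c*t)."""
    x, t = 0.0, 0.0
    while True:
        a = max(floor, a0 - collapse_rate * t)
        if x >= a:
            return t, True    # drift pushes toward +a, so +a is "correct"
        if x <= -a:
            return t, False
        x += v * dt + random.gauss(0, dt ** 0.5)
        t += dt

def summarize(collapse_rate, n=2000):
    results = [trial(collapse_rate) for _ in range(n)]
    mean_rt = sum(t for t, _ in results) / n
    accuracy = sum(c for _, c in results) / n
    return mean_rt, accuracy

rt_fixed, acc_fixed = summarize(0.0)   # standard fixed-threshold model
rt_coll, acc_coll = summarize(2.0)     # bound collapses toward the floor
# Collapsing bounds trade accuracy for speed: faster mean RT, more errors.
print(rt_coll < rt_fixed, acc_coll < acc_fixed)
```

Because both model classes predict ordinary-looking RT distributions, distinguishing them requires the kind of large-scale, quantitative model selection the paper performs rather than simple qualitative checks.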
Publisher: Public Library of Science (PLoS)
Date: 03-07-2014
Publisher: Hogrefe Publishing Group
Date: 05-2012
DOI: 10.1027/1618-3169/A000145
Abstract: The context in which a decision occurs can influence the decision-making process in many ways. In the laboratory, this is often evident in the effects of recent decisions. For instance, many experiments combine easy and difficult decisions, such as when word frequency is manipulated in lexical decision. The “blocking effect” describes how such decisions differ depending on whether the conditions are presented in pure blocks (composed purely of easy or hard stimuli) or mixed blocks (also known as a “mixing cost”). We present a novel extension to these context effects, demonstrating in two experiments that they can be induced using conditions with identical difficulty, but different timing properties. This suggests that explanations of context effects based on task difficulty or error monitoring alone might be insufficient, and suggests a role for decision time. In prior work, we suggested such a hypothesis under the assumption that observers minimize their decision time, subject to an accuracy constraint. Consistent with this explanation, we find that decisions in slower conditions were based on less evidence when they were experienced in mixed compared to pure blocks.
Publisher: Springer Science and Business Media LLC
Date: 31-05-2016
DOI: 10.3758/S13423-016-1056-Z
Abstract: Theory development in both psychology and neuroscience can benefit from consideration of both behavioral and neural data sets. However, the development of appropriate methods for linking these data sets is a difficult statistical and conceptual problem. Over the past decades, different linking approaches have been employed in the study of perceptual decision-making, beginning with rudimentary linking of the data sets at a qualitative, structural level, culminating in sophisticated statistical approaches with quantitative links. We outline a new approach, in which a single model is developed that jointly addresses neural and behavioral data. This approach allows for specification and testing of quantitative links between neural and behavioral aspects of the model. Estimating the model in a Bayesian framework allows both data sets to equally inform the estimation of all model parameters. The use of a hierarchical model architecture allows for a model that accounts for and measures the variability between neurons. We demonstrate the approach by re-analysis of a classic data set containing behavioral recordings of decision-making with accompanying single-cell neural recordings. The joint model is able to capture most aspects of both data sets, and also supports the analysis of interesting questions about prediction, including predicting the times at which responses are made, and the corresponding neural firing rates.
Publisher: American Psychological Association (APA)
Date: 07-2018
DOI: 10.1037/ABN0000357
Publisher: Springer Science and Business Media LLC
Date: 06-2002
DOI: 10.3758/BF03196299
Abstract: We introduce and evaluate via a Monte Carlo study a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed and guidance provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open source code for fitting available that can be easily modified to use QML in the estimation of any distribution function.
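The quantile-based fitting idea can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' released code: it groups simulated RTs by sample deciles and maximizes the multinomial likelihood of the inter-quantile bin counts under an ex-Gaussian; the parameter values, decile grid, and choice of optimizer are arbitrary.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Simulate ex-Gaussian RTs: Normal(mu, sigma) + Exponential(tau)
true_mu, true_sigma, true_tau = 0.4, 0.05, 0.2
rts = rng.normal(true_mu, true_sigma, 2000) + rng.exponential(true_tau, 2000)

# Group the data by sample deciles
probs = np.linspace(0.1, 0.9, 9)
qs = np.quantile(rts, probs)
counts = np.diff(np.concatenate(([0.0], probs, [1.0]))) * len(rts)  # observations per bin

def neg_qml(params):
    """Negative log multinomial likelihood of the inter-quantile bin counts."""
    mu, sigma, tau = params
    if sigma <= 0 or tau <= 0:
        return np.inf
    # scipy's exponnorm is the ex-Gaussian with shape K = tau / sigma
    cdf = stats.exponnorm.cdf(qs, tau / sigma, loc=mu, scale=sigma)
    p = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # bin probabilities
    if np.any(p <= 0):
        return np.inf
    return -np.sum(counts * np.log(p))

fit = optimize.minimize(neg_qml, x0=[0.3, 0.1, 0.1], method="Nelder-Mead")
mu_hat, sigma_hat, tau_hat = fit.x
```

Nelder-Mead is used here only because it needs no gradients; with 2000 observations the deciles carry enough information to recover the generating parameters closely.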
Publisher: Springer Science and Business Media LLC
Date: 25-08-2017
DOI: 10.3758/S13423-016-1135-1
Abstract: Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
Publisher: Springer Science and Business Media LLC
Date: 18-12-2021
DOI: 10.1186/S13750-021-00253-9
Abstract: Mammals, globally, are facing population declines. Strategies increasingly employed to recover threatened mammal populations include protecting populations inside predator-free havens, and translocating animals from one site to another, or from a captive breeding program. These approaches can expose predator-naïve animals to predators they have never encountered and as a result, many conservation projects have failed due to the predation of individuals that lacked appropriate anti-predator responses. Hence, robust ways to measure anti-predator responses are urgently needed to help identify naïve populations at risk, to select appropriate animals for translocation, and to monitor managed populations for trait change. Here, we outline a protocol for a systematic review that collates existing behavioural assays developed for the purpose of quantifying anti-predator responses, and identifies assay types and predator cues that provoke the greatest behavioural responses. We will retrieve articles from academic bibliographic databases and grey literature sources (such as government and conservation management reports), using a Boolean search string. Each article will be screened for the satisfaction of eligibility criteria determined using the PICO (Population—Intervention—Comparator—Outcome) framework, to yield the final article pool. Using metadata extracted from each article, we will map all known behavioural assays for quantifying anti-predator responses in mammals and will then examine the context in which each assay has been implemented (e.g. species tested, predator cue characteristics). Finally, with mixed effects modelling, we will determine which of these assays and predator cue types elicit the greatest behavioural responses (standardised difference in response between treatment and control groups).
The final review will highlight the most robust methodology, will reveal promising techniques on which to focus future assay development, and will collate relevant information for conservation managers.
Publisher: Center for Open Science
Date: 27-08-2019
Abstract: With the advancement of technologies like in-car navigation and smartphones, concerns around how cognitive functioning is influenced by "workload" are increasingly prevalent. Research shows that spreading effort across multiple tasks can impair cognitive abilities through an overuse of resources, and that similar overload effects arise in difficult single-task paradigms. We developed a novel lab-based extension of the Detection Response Task, which measures workload, and paired it with a Multiple Object Tracking Task to manipulate cognitive load. Load was manipulated either by changing within-task difficulty or by the addition of an extra task. Using quantitative cognitive modelling we showed that these manipulations cause similar cognitive impairments through diminished processing rates, but that the introduction of a second task tends to invoke more cautious response strategies that do not occur when only difficulty changes. We conclude that more prudence should be exercised when directly comparing multitasking and difficulty-based workload impairments, particularly when relying on measures of central tendency.
Publisher: Elsevier BV
Date: 06-2020
Publisher: Springer Science and Business Media LLC
Date: 29-07-2019
Publisher: Springer Science and Business Media LLC
Date: 11-2003
DOI: 10.3758/BF03195527
Abstract: Quantile maximum likelihood (QML) is an estimation technique, proposed by Heathcote, Brown, and Mewhort (2002), that provides robust and efficient estimates of distribution parameters, typically for response time data, in sample sizes as small as 40 observations. In view of the computational difficulty inherent in implementing QML, we provide open-source Fortran 90 code that calculates QML estimates for parameters of the ex-Gaussian distribution, as well as standard maximum likelihood estimates. We show that parameter estimates from QML are asymptotically unbiased and normally distributed. Our software provides asymptotically correct standard error and parameter intercorrelation estimates, as well as producing the outputs required for constructing quantile-quantile plots. The code is parallelizable and can easily be modified to estimate parameters from other distributions. Compiled binaries, as well as the source code, example analysis files, and a detailed manual, are available for free on the Internet.
Publisher: Springer Science and Business Media LLC
Date: 06-2004
DOI: 10.3758/BF03196613
Abstract: Heathcote, Brown, and Mewhort (2002) have introduced a new, robust method of estimating response time distributions. Their method may have practical advantages over conventional maximum likelihood estimation. The basic idea is that the likelihood of parameters is maximized given a few quantiles from the data. We show that Heathcote et al.'s likelihood function is not correct and provide the appropriate correction. However, although our correction stands on firmer theoretical ground than Heathcote et al.'s, it appears to yield worse parameter estimates. This result further indicates that, at least for some distributions and situations, quantile maximum likelihood estimation may have better nonasymptotic properties than a more theoretically justified approach.
Publisher: Proceedings of the National Academy of Sciences
Date: 19-08-2019
Abstract: An important feature of human cognition is the ability to flexibly and efficiently adapt behavior in response to continuously changing contextual demands. We leverage a large-scale dataset from Lumosity, an online cognitive-training platform, to investigate how cognitive processes involved in cued switching between tasks are affected by level of task practice across the adult lifespan. We develop a computational account of task switching that specifies the temporal dynamics of activating task-relevant representations and inhibiting task-irrelevant representations and how they vary with extended task practice across a number of age groups. Practice modulates the level of activation of the task-relevant representation and improves the rate at which this information becomes available, but has little effect on the task-irrelevant representation. While long-term practice improves performance across all age groups, it has a greater effect on older adults. Indeed, extensive task practice can make older individuals functionally similar to less-practiced younger individuals, especially for cognitive measures that focus on the rate at which task-relevant information becomes available.
Publisher: Springer Science and Business Media LLC
Date: 28-04-2017
DOI: 10.3758/S13428-017-0887-5
Abstract: Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
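The core trick, estimating a marginal likelihood by averaging the likelihood over draws from the prior, can be demonstrated with a toy binomial model in place of the LBA; the data, prior, and hypotheses below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 78 correct responses out of 100 trials
k, n = 78, 100

def log_lik(theta):
    # Binomial log likelihood (the binomial coefficient is constant and cancels in the ratio)
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

# M1: accuracy above chance, theta ~ Uniform(0.5, 1); M0: pure guessing, theta = 0.5 exactly
draws = rng.uniform(0.5, 1.0, 100_000)
marginal_m1 = np.mean(np.exp(log_lik(draws)))  # Monte-Carlo estimate of p(data | M1)
marginal_m0 = np.exp(log_lik(0.5))             # point hypothesis: no integration needed
bf_10 = marginal_m1 / marginal_m0
```

For a parameter space the size of the LBA's, this naive estimator has very high variance, which is why the approach described above relies on GPUs and very large numbers of draws.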
Publisher: American Psychological Association (APA)
Date: 2005
DOI: 10.1037/0096-1523.31.2.289
Abstract: Most models of choice response time base decisions on evidence accumulated over time. A fundamental distinction among these models concerns whether each piece of evidence is equally weighted (lossless accumulation) or unequally weighted (leaky accumulation). The authors tested a hypothesis derived from A. Heathcote and S. Brown's (2002) self-exciting expert competitor (SEEXC) model of skill acquisition: that evidence accumulation becomes less leaky with practice. The hypothesis was supported by the observation that the effects of prime stimuli increased with practice. The authors used metacontrast masked primes, which could not be consciously discriminated by most participants, to avoid methodological problems associated with conscious strategy changes. The form of the law of practice in the data is also shown to be consistent with the SEEXC model.
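The lossless-versus-leaky distinction can be illustrated with a single-accumulator simulation. This is a generic leaky integrator, dx = (input − leak·x)dt + noise, not the SEEXC model itself, and all numeric values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_passage_time(drift, leak, threshold=1.0, dt=0.001, noise_sd=0.1, max_t=5.0):
    # Euler simulation of dx = (drift - leak * x) dt + noise_sd * sqrt(dt) * N(0, 1)
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += (drift - leak * x) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# The same input accumulated losslessly (leak = 0) versus leakily (leak = 0.8)
rt_lossless = np.mean([first_passage_time(1.0, 0.0) for _ in range(200)])
rt_leaky = np.mean([first_passage_time(1.0, 0.8) for _ in range(200)])
```

With leak = 0.8 the accumulator's asymptote (input/leak = 1.25) still exceeds the threshold, so crossings occur, just later than in the lossless case; on the SEEXC account, practice amounts to shrinking this leak term.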
Publisher: American Psychological Association (APA)
Date: 04-2014
DOI: 10.1037/A0036137
Abstract: Context effects occur when a choice between 2 options is altered by adding a 3rd alternative. Three major context effects--similarity, compromise, and attraction--have wide-ranging implications across applied and theoretical domains, and have driven the development of new dynamic models of multiattribute and multialternative choice. We propose the multiattribute linear ballistic accumulator (MLBA), a new dynamic model that provides a quantitative account of all 3 context effects. Our account applies not only to traditional paradigms involving choices among hedonic stimuli, but also to recent demonstrations of context effects with nonhedonic stimuli. Because of its computational tractability, the MLBA model is more easily applied than previous dynamic models. We show that the model also accounts for a range of other phenomena in multiattribute, multialternative choice, including time pressure effects, and that it makes a new prediction about the relationship between deliberation time and the magnitude of the similarity effect, which we confirm experimentally.
Publisher: Elsevier BV
Date: 12-2009
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0025809
Abstract: State-trace analysis (Bamber, 1979) addresses a question of interest in many areas of psychological research: Does 1 or more than 1 latent (i.e., not directly observed) variable mediate an interaction between 2 experimental manipulations? There is little guidance available on how to design an experiment suited to state-trace analysis, despite its increasing use, and existing statistical methods for state-trace analysis are problematic. We provide a framework for designing and refining a state-trace experiment and statistical procedures for the analysis of accuracy data using Klugkist, Kato, and Hoijtink's (2005) method of estimating Bayes factors. The statistical procedures provide estimates of the evidence favoring 1 versus more than 1 latent variable, as well as evidence that can be used to refine experimental methodology.
Publisher: Springer Science and Business Media LLC
Date: 02-2009
DOI: 10.3758/BRM.41.1.154
Publisher: Center for Open Science
Date: 09-2022
Abstract: In the modern world, there are important tasks that have become too complex for a single unaided individual to manage. Some safety-critical tasks are conducted by teams to improve task performance and minimize risk of error. These teams have traditionally consisted of human operators, yet nowadays AI and machine systems are incorporated into team environments to improve performance and capacity. We used a computerized task, modeled after a classic arcade game, to investigate the performance of human-machine and human-human teams. We manipulated the group conditions between team members: sometimes they were incentivised to collaborate, sometimes to compete, and sometimes to work separately. We evaluated players' performance in the main task (game play) and also measured the cognitive workload they experienced. We compared workload and game performance between different team types (human-human vs. human-machine) and different group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found that performance under both team types and all group conditions suffered an efficiency cost. However, we observed a reduced cost in collaborative over competitive teams within human-human pairings, but this effect was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for human-machine team performance measurement are discussed.
Publisher: Wiley
Date: 05-06-2018
DOI: 10.1111/COGS.12627
Abstract: Understanding individual differences in cognitive performance is an important part of understanding how variations in underlying cognitive processes can result in variations in task performance. However, the exploration of individual differences in the components of the decision process (such as cognitive processing speed, response caution, and motor execution speed) has been limited in previous research. Here, we assess the heritability of the components of the decision process, heritability being a common aspect of individual differences research within other areas of cognition. Importantly, a limitation of previous work on cognitive heritability is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for relatedness structure in a twin study paradigm. This approach can separately quantify different contributions to the heritability of response time. Using data from the Human Connectome Project, we find strong evidence for the heritability of response caution, and more ambiguous evidence for the heritability of cognitive processing speed and motor execution speed. Our study suggests that the assumption made in previous studies (that the heritability of cognitive ability is based on cognitive processing speed) may be incorrect. More generally, our methodology provides a useful avenue for future research in complex data that aims to analyze cognitive traits across different sources of related data, whether the relation is between people, tasks, experimental phases, or methods of measurement.
Publisher: Springer Science and Business Media LLC
Date: 16-11-2010
Publisher: Springer Science and Business Media LLC
Date: 12-2009
Publisher: Society for Neuroscience
Date: 08-07-2009
Publisher: Springer Science and Business Media LLC
Date: 28-01-2012
Publisher: Center for Open Science
Date: 06-11-2019
Abstract: The accurate and objective measurement of cognitive workload is important in many aspects of psychological research. The Detection Response Task (DRT) is a well-validated method for measuring cognitive workload that has been used extensively in applied tasks, for example, to investigate the effects of fatigue and phone usage on driving. Given its success in applied tasks, we investigated whether the DRT could be used to measure cognitive workload in cognitive tasks more commonly used in experimental cognitive psychology, and whether this application could be extended to online environments. We had participants perform a multiple object tracking task while simultaneously performing a DRT. We manipulated the cognitive load of the multiple object tracking task by changing the number of dots to be tracked. Measurements from the DRT were sensitive to changes in the cognitive load, establishing the efficacy of the DRT for experimental cognitive tasks in lab-based situations. This sensitivity continued when applied to an online environment (with our code for the online DRT implementation being freely available at osf.io/dc39s/), though to a reduced extent compared to the in-lab situation, opening up the potential use of the DRT to a much greater range of tasks and situations, but suggesting that in-lab applications are best when possible.
Publisher: Proceedings of the National Academy of Sciences
Date: 11-11-2008
Abstract: Human decision-making almost always takes place under time pressure. When people are engaged in activities such as shopping, driving, or playing chess, they have to continually balance the demands for fast decisions against the demands for accurate decisions. In the cognitive sciences, this balance is thought to be modulated by a response threshold, the neural substrate of which is currently subject to speculation. In a speeded decision-making experiment, we presented participants with cues that indicated different requirements for response speed. Application of a mathematical model to the behavioral data confirmed that cueing for speed lowered the response threshold. Functional neuroimaging showed that cueing for speed activates the striatum and the pre-supplementary motor area (pre-SMA), brain structures that are part of a closed-loop motor circuit involved in the preparation of voluntary action plans. Moreover, activation in the striatum is known to release the motor system from global inhibition, thereby facilitating faster but possibly premature actions. Finally, the data show that individual variation in the activation of striatum and pre-SMA is selectively associated with individual variation in the amplitude of the adjustments in the response threshold estimated by the mathematical model. These results demonstrate that when people have to make decisions under time pressure their striatum and pre-SMA show increased levels of activation.
Publisher: American Physiological Society
Date: 07-2015
Abstract: The dominant theoretical paradigm in explaining decision making throughout both neuroscience and cognitive science is known as “evidence accumulation”—the core idea being that decisions are reached by a gradual accumulation of noisy information. Although this notion has been supported by hundreds of experiments over decades of study, a recent theory proposes that the fundamental assumption of evidence accumulation requires revision. The “urgency gating” model assumes decisions are made without accumulating evidence, using only moment-by-moment information. Under this assumption, the successful history of evidence accumulation models is explained by asserting that the two models are mathematically identical in standard experimental procedures. We demonstrate that this proof of equivalence is incorrect, and that the models are not identical, even when both models are augmented with realistic extra assumptions. We also demonstrate that the two models can be perfectly distinguished in realistic simulated experimental designs, and in two real data sets: the evidence accumulation model provided the best account for one data set, and the urgency gating model for the other. A positive outcome is that the opposing modeling approaches can be fruitfully investigated without wholesale change to the standard experimental paradigms. We conclude that future research must establish whether the urgency gating model enjoys the same empirical support in the standard experimental paradigms that evidence accumulation models have gathered over decades of study.
Publisher: Center for Open Science
Date: 21-07-2023
Abstract: Recent years have seen remarkable advances in the development and use of Artificial Intelligence (AI) in image classification, driving cars, and writing scientific articles. Although AI can outperform humans in many tasks, there remain domains where humans and AI working together can outperform either working alone. For humans and AI to work together effectively, the human must trust the AI bot to the right degree (calibrated). If the human does not trust the bot sufficiently, or conversely trusts the bot more than is warranted, the human-bot team will not perform as well as they could. We report three experiments examining trust in human-AI teaming. While existing studies typically collect binary responses (to trust, or not to trust), we present a novel paradigm that quantifies trust in a bot-recommendation in a continuous fashion. These data allow better precision, and in the future the development of more refined models of human-bot trust.
Publisher: Center for Open Science
Date: 13-12-2019
Abstract: Objective: To test the effects of enhanced display information ("symbology") on cognitive workload in a simulated helicopter environment, using the Detection Response Task (DRT). Background: Workload in highly demanding environments can be influenced by the amount of information given to the operator, and consequently it is important to limit potential overload. Methods: Participants (highly trained military pilots) completed simulated helicopter flights, which varied visual conditions and the amount of information given. During these flights participants also completed a DRT as a measure of cognitive workload. Results: With more visual information available, pilots' landing accuracy improved across environmental conditions. The DRT was sensitive to changes in cognitive workload, with workload differences shown between environmental conditions. Increasing symbology appeared to have a minor effect on workload, with an interaction between symbology and environmental condition showing that symbology appeared to moderate workload. Conclusion: The DRT is a useful workload measure in simulated helicopter settings. The level of symbology moderated pilot workload, and the increased level of symbology appeared to assist pilots' flight behaviour and landing ability. Results indicate that increased symbology has benefits in more difficult scenarios. Applications: The detection response task is an easily implemented and effective measure of cognitive workload in a variety of settings. In the current experiment, the DRT captured the increased workload induced by varying the environmental conditions, and provides evidence for the use of increased symbology to assist pilots.
Publisher: American Psychological Association (APA)
Date: 04-2017
DOI: 10.1037/REV0000057
Abstract: Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that "aids model evaluation by providing a metric for gauging the persuasiveness of a given fit" (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the 2 aims outlined by Veksler et al. (2015): absolute and relative model evaluation. We also show that model flexibility analysis can even fail to correctly quantify complexity in the most clear-cut case, with nested models. We advocate for the use of well-established techniques with these characteristics, such as Bayes factors, normalized maximum likelihood, or cross-validation, and against the use of model flexibility analysis. In the discussion, we explore 2 issues relevant to the area of model evaluation: the completeness of current model selection methods and the philosophical debate of absolute versus relative model evaluation.
Publisher: American Psychological Association (APA)
Date: 2007
Publisher: Informa UK Limited
Date: 03-2012
Publisher: Wiley
Date: 16-08-2010
DOI: 10.1111/J.1469-8986.2010.01115.X
Abstract: We examined whether the cue-locked centroparietal positivity is associated with switch-specific or general preparation processes. If this positivity (300-400 ms) indexes switch-specific preparation, faster switch trials associated with smaller RT switch cost should have a larger positivity as compared to slower switch trials, but no such association should be evident for repeat trials. We extracted ERP waveforms corresponding to semi-deciles of each participant's RT distribution (i.e., fastest to slowest 5% of trials) for switch and repeat conditions. Consistent with a switch-specific preparation process, centroparietal positivity amplitude was linked to slower RT and larger RT switch cost for switch but not repeat trials. A later pre-target negativity (500-600 ms) was inversely correlated with RT for both switch and repeat trials, consistent with a general anticipatory preparation process.
Publisher: Society for Neuroscience
Date: 23-11-2011
DOI: 10.1523/JNEUROSCI.0309-11.2011
Abstract: Even in the simplest laboratory tasks older adults generally take more time to respond than young adults. One of the reasons for this age-related slowing is that older adults are reluctant to commit errors, a cautious attitude that prompts them to accumulate more information before making a decision (Rabbitt, 1979). This suggests that age-related slowing may be partly due to an unwillingness of elderly participants to adopt a fast-but-careless setting when asked. We investigate the neuroanatomical and neurocognitive basis of age-related slowing in a perceptual decision-making task where cues instructed young and old participants to respond either quickly or accurately. Mathematical modeling of the behavioral data confirmed that cueing for speed encouraged participants to set low response thresholds, but this was more evident in younger than older participants. Diffusion-weighted structural images suggest that the more cautious threshold settings of older participants may be due to a reduction of white matter integrity in corticostriatal tracts that connect the pre-SMA to the striatum. These results are consistent with the striatal account of the speed-accuracy tradeoff according to which an increased emphasis on response speed increases the cortical input to the striatum, resulting in global disinhibition of the cortex. Our findings suggest that the unwillingness of older adults to adopt fast speed-accuracy tradeoff settings may not just reflect a strategic choice that is entirely under voluntary control, but that it may also reflect structural limitations: age-related decrements in brain connectivity.
Publisher: Elsevier BV
Date: 08-2006
Publisher: Springer Science and Business Media LLC
Date: 11-2009
Publisher: Wiley
Date: 21-02-2019
DOI: 10.1002/PON.5024
Abstract: Using a vignette-style DCE in a sample of oncology patients, this study explored: (1) the relative influence of the patient's level of concern about their depression on preferences for care, (2) the relative influence of depression severity according to a mental health checklist on preferred treatment-seeking options, and (3) whether patient age and gender were associated with depression care preference. A discrete choice experiment (DCE) survey of cancer patients was conducted. Hypothetical vignettes to elicit care preferences were created using two attributes: the cancer patient's level of concern about depression (a little or a great deal) and results of a mental health checklist (not depressed or very depressed). Three response options for care preferences were presented, including a self-directed approach, shared care approach, and clinician-directed referral approach. Participants chose their most and least preferred options. A total of 281 cancer patients completed the survey. There was a significant association between level of concern and the most preferred option. Those with a great deal of concern about depression preferred to receive referral from their clinician more than those with a little concern about depression. Males were significantly more likely to select a self-directed approach as their most preferred option. An oncology patient's level of concern about depression may influence the type of care they want to receive from their cancer doctor for depression. This finding has implications for depression screening in clinical practice.
Publisher: Wiley
Date: 11-10-2014
DOI: 10.1111/COGS.12094
Abstract: Discrete choice experiments—selecting the best and/or worst from a set of options—are increasingly used to provide more efficient and valid measurement of attitudes or preferences than conventional methods such as Likert scales. Discrete choice data have traditionally been analyzed with random utility models that have good measurement properties but provide limited insight into cognitive processes. We extend a well‐established cognitive model, which has successfully explained both choices and response times for simple decision tasks, to complex, multi‐attribute discrete choice data. The fits, and parameters, of the extended model for two sets of choice data (involving patient preferences for dermatology appointments, and consumer attitudes toward mobile phones) agree with those of standard choice models. The extended model also accounts for choice and response time data in a perceptual judgment task designed in a manner analogous to best–worst discrete choice experiments. We conclude that several research fields might benefit from discrete choice experiments, and that the particular accumulator‐based models of decision making used in response time research can also provide process‐level instantiations for random utility models.
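The random utility models this abstract compares against have a standard closed form. As a hedged illustration (a generic best-worst logit, not the specific models fitted in the paper), the probability that an option is chosen as best or as worst can be computed from its utility:

```python
import math

def best_worst_probs(utilities):
    """Generic best-worst (maxdiff) logit probabilities.

    P(best = i) is proportional to exp(u_i); P(worst = i) is
    proportional to exp(-u_i). Illustrative only.
    """
    eb = [math.exp(u) for u in utilities]
    ew = [math.exp(-u) for u in utilities]
    p_best = [e / sum(eb) for e in eb]
    p_worst = [e / sum(ew) for e in ew]
    return p_best, p_worst

# three options with descending utility
p_best, p_worst = best_worst_probs([1.0, 0.0, -1.0])
```

The highest-utility option is most likely to be picked as best, the lowest-utility option as worst; accumulator models add a process-level account (including response times) on top of this static choice rule.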
Publisher: Elsevier BV
Date: 06-2011
Publisher: Elsevier BV
Date: 04-2016
Publisher: American Psychological Association (APA)
Date: 03-2020
DOI: 10.1037/REV0000166
Abstract: Independent racing evidence-accumulator models have proven fruitful in advancing understanding of rapid decisions, mainly in the case of binary choice, where they can be relatively easily estimated and are known to account for a range of benchmark phenomena. Typically, such models assume a one-to-one mapping between accumulators and responses. We explore an alternative independent-race framework where more than one accumulator can be associated with each response, and where a response is triggered when a sufficient number of accumulators associated with that response reach their thresholds. Each accumulator is primarily driven by the difference in evidence supporting one versus another response (i.e., that response's "advantage"), with secondary inputs corresponding to the total evidence for both responses and a constant term. We use Brown and Heathcote's (2008) linear ballistic accumulator (LBA) to instantiate the framework in a mathematically tractable measurement model (i.e., a model whose parameters can be successfully recovered from data). We show this "advantage LBA" model provides a detailed quantitative account of a variety of benchmark binary and multiple choice phenomena that traditional independent accumulator models struggle with: in binary choice, the effects of additive versus multiplicative changes to input values, and in multiple choice, the effects of manipulations of the strength of lure (i.e., nontarget) stimuli and Hick's law. We conclude that the advantage LBA provides a tractable new avenue for understanding the dynamics of decisions among multiple choices. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
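The advantage-based race described here can be sketched in a few lines. The code below is an illustrative simplification, not the paper's parameterization: one ballistic accumulator per ordered pair of responses, with drift driven mainly by the pairwise evidence difference ("advantage") plus a weaker sum term and a constant, and the response of the first accumulator to finish wins. Parameter names (`wd`, `ws`, `v0`, `b`, `A`, `t0`) are illustrative.

```python
import random

def advantage_lba_trial(evidence, wd=1.0, ws=0.3, v0=0.5,
                        b=1.0, A=0.5, t0=0.2):
    """One trial of a simplified advantage-style LBA race.

    Accumulator (i, j) has drift v0 + wd*(E_i - E_j) + ws*(E_i + E_j)
    plus trial-to-trial noise; within a trial it rises ballistically
    from a uniform start point to threshold b. Illustrative only.
    """
    best_t, winner = float("inf"), None
    n = len(evidence)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            drift = v0 + wd * (evidence[i] - evidence[j]) \
                       + ws * (evidence[i] + evidence[j])
            drift += random.gauss(0, 0.3)   # between-trial drift noise
            if drift <= 0:
                continue                    # this accumulator never finishes
            start = random.uniform(0, A)    # uniform start-point variability
            t = (b - start) / drift         # ballistic finishing time
            if t < best_t:
                best_t, winner = t, i
    return winner, t0 + best_t

random.seed(0)
choices = [advantage_lba_trial([1.0, 0.4, 0.4])[0] for _ in range(500)]
p_target = choices.count(0) / 500
```

With the target's evidence higher than the two lures', its accumulators carry large advantages and win most races, so `p_target` is well above chance.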
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.BPSC.2015.11.004
Abstract: Cognitive neuroscientists sometimes apply formal models to investigate how the brain implements cognitive processes. These models describe behavioral data in terms of underlying, latent variables linked to hypothesized cognitive processes. A goal of model-based cognitive neuroscience is to link these variables to brain measurements, which can advance progress in both cognitive and neuroscientific research. However, the details and the philosophical approach for this linking problem can vary greatly. We propose a continuum of approaches that differ in the degree of tight, quantitative, and explicit hypothesizing. We describe this continuum using four points along it, which we dub qualitative structural, qualitative predictive, quantitative predictive, and single model linking approaches. We further illustrate by providing examples from three research fields (decision making, reinforcement learning, and symbolic reasoning) for the different linking approaches.
Publisher: American Psychological Association (APA)
Date: 2007
Publisher: Wiley
Date: 04-08-2017
DOI: 10.1111/PSYP.12971
Abstract: In cued task switching, performance relies on proactive and reactive control processes. Proactive control is evident in the reduction in switch cost under conditions that promote advance preparation. However, the residual switch cost that remains under conditions of optimal proactive control indicates that, on switch trials, the target continues to elicit interference that is resolved using reactive control. We examined whether posttarget interference varies as a function of trial-by-trial variability in preparation. We investigated target congruence effects on behavior and target-locked ERPs extracted across the response time (RT) distribution, using orthogonal polynomial trend analysis (OPTA). Early N2, late N2, and P3b amplitudes were differentially modulated across the RT distribution. There was a large congruence effect on late N2 and P3b, which increased with RT for P3b amplitude, but did not vary with trial type. This suggests that target properties impact switch and repeat trials equally and do not contribute to residual switch cost. P3b amplitude was larger, and latency later, for switch than repeat trials, and this difference became larger with increasing RT, consistent with sustained carryover effects on highly prepared switch trials. These results suggest that slower, less prepared responses are associated with greater target-related interference during target identification and processing, as well as slower, more difficult decision processes. They also suggest that neither general nor switch-specific preparation can ameliorate the effects of target-driven interference. These findings highlight the theoretical advances achieved by integrating RT distribution analyses with ERP and OPTA to examine trial-by-trial variability in performance and brain function.
Publisher: Elsevier BV
Date: 10-2015
Publisher: Elsevier BV
Date: 2013
DOI: 10.1016/J.RIDD.2012.07.025
Abstract: 22q11.2 deletion syndrome (22q11DS) has a complex phenotype with more than 180 characteristics, including cardiac anomalies, cleft palate, intellectual disabilities, a typical facial morphology, and mental health problems. However, the variable phenotype makes it difficult to predict clinical outcome, such as the high prevalence of psychosis among adults with 22q11DS (~25-30% vs. ~1% in the general population). The purpose of this study was to investigate whether subtypes exist among people with 22q11DS, with a similar phenotype and an increased risk of developing mental health problems. Physical, cognitive and behavioural data from 50 children and adolescents with 22q11DS were included in a k-means cluster analysis. Two distinct phenotypes were identified: Type-1 presented with a more severe phenotype including significantly impaired verbal memory, lower intellectual and academic ability, as well as statistically significant reduced total brain volume. In addition, we identified a trend effect for reduced temporal grey matter. Type-1 also presented with autism-spectrum traits, whereas Type-2 could be described as having more 22q11DS-typical face morphology, being predominately affected by executive function deficits, but otherwise being relatively high functioning with regard to cognition and behaviour. The confirmation of well-defined subtypes in 22q11DS can lead to better prognostic information enabling early identification of people with 22q11DS at high risk of psychiatric disorders. The identification of subtypes in a group of people with a relatively homogenous genetic deletion such as 22q11DS is also valuable to understand clinical outcomes.
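The k-means cluster analysis used to find the two phenotypes can be sketched minimally. The code below is an illustrative two-cluster implementation on toy data, not the study's pipeline: each row stands in for one participant's standardised feature scores, and the first and last points seed the centres (a simplification of the usual random initialisation).

```python
import random

def kmeans2(points, iters=10):
    """Minimal 2-cluster k-means over tuples of feature scores."""
    centres = [points[0], points[-1]]  # simplified deterministic init
    clusters = [[], []]
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            clusters[d.index(min(d))].append(p)
        # update step: move each centre to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centres[i] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centres, clusters

# toy data: two well-separated "phenotype" groups in a 2-D feature space
random.seed(1)
type1 = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(25)]
type2 = [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(25)]
centres, clusters = kmeans2(type1 + type2)
```

With well-separated groups the algorithm recovers the two-cluster structure; in the study the same logic was applied to physical, cognitive, and behavioural measures rather than synthetic 2-D points.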
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 24-08-2012
DOI: 10.1167/12.8.15
Publisher: Springer New York
Date: 2015
Publisher: Elsevier BV
Date: 02-2014
No related grants have been discovered for Scott Brown.