ORCID Profile
0000-0002-2784-5535
Current Organisations
Oak Ridge Institute for Science and Education, University of Texas at San Antonio
Publisher: Center for Open Science
Date: 27-10-2022
Abstract: Despite the ubiquitous nature of Evidence Accumulation Models in cognitive and experimental psychology there has been a comparatively limited uptake of such techniques in the applied literature. While quantifying latent cognitive processing properties has huge potential for applied domains such as adaptive work systems, accumulator models often fall short in practical applications. Two primary reasons for these shortcomings are the complexities and time needed for the application of cognitive models, and the failure of current models to capture systematic trial-to-trial variability in parameters. In this manuscript we develop a novel, trial-varying extension of the Shifted Wald model to address these concerns. By leveraging conjugate properties of the Wald distribution we derive analytic solutions for threshold and drift parameters which can be updated instantaneously with new data. The resulting model allows the quantification of systematic variation in latent cognitive parameters across trials, and we demonstrate the utility of such analyses through simulations and an exemplar application to an existing data set. The analytic nature of our solutions opens the door for real-world application, significantly extending the reach of computational models of behavioral responses.
Publisher: Elsevier BV
Date: 10-2019
Publisher: Elsevier BV
Date: 10-2019
Publisher: Elsevier BV
Date: 2014
Publisher: Cambridge University Press (CUP)
Date: 2017
DOI: 10.1017/S0140525X16000157
Abstract: Much of the evidence for theories in visual search (including Hulleman & Olivers' [H&O's]) comes from inferences made using changes in mean RT as a function of the number of items in a display. We have known for more than 40 years that these inferences are based on flawed reasoning and obscured by model mimicry. Here we describe a method that avoids these problems.
Publisher: Elsevier BV
Date: 10-2019
Publisher: Center for Open Science
Date: 30-08-2022
Abstract: Collaboration in shared environments requires human agents to coordinate their behaviour according to the machines’ actions. In this study, we compared the performance and behaviour of Human-Machine (HM) and Human-Human (HH) teams. While HH teaming behaviour is sensitive to collaborative contexts, little is known about HM teaming behaviour. Furthermore, teaming behaviour may impact the team’s Joint Capacity – the team’s ability to handle teamwork processes and task demands. To assess teaming behaviour at every moment of a trial we used three distinct spatiotemporal measures (Momentary Distance, Highly Correlated Segments, and Running Correlation). To assess the team’s joint performance, we adopted the Capacity Coefficient (Townsend & Nozawa, 1995). For both HH and HM teams, behavioural measures predicted Joint Capacity. HH teams demonstrated greater performance and less synchronous behaviour than HM teams. The reduced synchrony of HH teams likely improved their performance, as they could complement each other’s behaviour rather than duplicate inefficiencies.
Publisher: Springer Science and Business Media LLC
Date: 08-08-2016
DOI: 10.3758/S13428-016-0784-3
Abstract: The extent to which distracting information influences decisions can be informative about the nature of the underlying cognitive and perceptual processes. In a recent paper, a response time-based measure for quantifying the degree of interference (or facilitation) from distracting information termed resilience was introduced. Despite using a statistical measure, the analysis was limited to qualitative comparisons between different model predictions. In this paper, we demonstrate how statistical procedures from workload capacity analysis can be applied to the new resilience functions. In particular, we present an approach to null-hypothesis testing of resilience functions and a method based on functional principal components analysis for analyzing differences in the functional form of the resilience functions across participants and conditions.
Publisher: Center for Open Science
Date: 09-2022
Abstract: In the modern world, there are important tasks that have become too complex for a single unaided individual to manage. Some safety-critical tasks are conducted by teams to improve task performance and minimize risk of error. These teams have traditionally consisted of human operators, yet nowadays AI and machine systems are incorporated into team environments to improve performance and capacity. We used a computerized task, modeled after a classic arcade game, to investigate the performance of human-machine and human-human teams. We manipulated the group conditions between team members: sometimes they were incentivised to collaborate, sometimes to compete, and sometimes to work separately. We evaluated players’ performance in the main task (game play) and also measured the cognitive workload they experienced. We compared workload and game performance between different team types (human-human vs. human-machine) and different group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found that performance under both team types and all group conditions suffered a performance efficiency cost. However, we observed a reduced cost in collaborative over competitive teams within human-human pairings, but this effect was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for human-machine team performance measurement are discussed.
Publisher: Springer Science and Business Media LLC
Date: 12-05-2015
DOI: 10.3758/S13421-015-0526-2
Abstract: We examined the role of dual-task interference in working memory using a novel dual two-back task that requires a redundant-target response (i.e., a response that neither the auditory nor the visual stimulus occurred two back versus a response that one or both occurred two back) on every trial. Comparisons with performance on single two-back trials (i.e., with only auditory or only visual stimuli) showed that dual-task demands reduced both speed and accuracy. Our task design enabled a novel application of Townsend and Nozawa's (Journal of Mathematical Psychology 39: 321-359, 1995) workload capacity measure, which revealed that the decrement in dual two-back performance was mediated by the sharing of a limited amount of processing capacity. Relative to most other single and dual n-back tasks, performance measures for our task were more reliable, due to the use of a small stimulus set that induced a high and constant level of proactive interference. For a version of our dual two-back task that minimized response bias, accuracy was also more strongly correlated with complex span than has been found for most other single and dual n-back tasks.
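Several of the abstracts above rely on Townsend and Nozawa's (1995) workload capacity coefficient, which for redundant-target ("OR") designs is C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -log S(t) is the cumulative hazard of the response-time survivor function in each condition; C(t) = 1 indicates unlimited capacity, values below 1 limited capacity. A minimal sketch of the estimator follows; the function name and the use of a plain empirical survivor function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def capacity_or(rt_single_a, rt_single_b, rt_redundant, t):
    """Sketch of the OR capacity coefficient C(t) (Townsend & Nozawa, 1995):
    C(t) = H_AB(t) / (H_A(t) + H_B(t)), with H(t) = -log S(t) estimated
    from the empirical survivor function of each condition's RTs."""
    def cum_hazard(rts, t):
        s = np.mean(np.asarray(rts) > t)  # empirical survivor function S(t)
        return -np.log(s)                 # cumulative hazard H(t)

    h_ab = cum_hazard(rt_redundant, t)    # redundant-target condition
    h_a = cum_hazard(rt_single_a, t)      # single-target condition A
    h_b = cum_hazard(rt_single_b, t)      # single-target condition B
    return h_ab / (h_a + h_b)
```

In practice C(t) is evaluated over a range of t values and compared against the unlimited-capacity baseline of 1; a smoothed or Nelson-Aalen hazard estimator would typically replace the raw survivor function here.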
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0035947
Abstract: The ability to trade accuracy for speed is fundamental to human decision making. The speed-accuracy trade-off (SAT) effect has received decades of study, and is well understood in relatively simple decisions: collecting more evidence before making a decision allows one to be more accurate but also slower. The SAT in more complex paradigms has been given less attention, largely due to limits in the models and statistics that can be applied to such tasks. Here, we have conducted the first analysis of the SAT in multiple signal processing, using recently developed technologies for measuring capacity that take into account both response time and choice probability. We show that the primary influence of caution in our redundant-target experiments is on the threshold amount of evidence required to trigger a response. However, in a departure from the usual SAT effect, we found that participants strategically ignored redundant information when they were forced to respond quickly, but only when the additional stimulus was reliably redundant. Interestingly, because the capacity of the system was severely limited on redundant-target trials, ignoring additional targets meant that processing was more efficient when making fast decisions than when making slow and accurate decisions, where participants' limited resources had to be divided between the 2 stimuli.
Location: United Kingdom of Great Britain and Northern Ireland
Location: United States of America
Location: United States of America
No related grants have been discovered for Joseph Houpt.