ORCID Profile
0000-0002-2772-5728
Current Organisation
University of Adelaide
Publisher: Frontiers Media SA
Date: 20-12-2019
Publisher: Elsevier BV
Date: 06-2019
Publisher: American Psychological Association (APA)
Date: 04-2019
DOI: 10.1037/XLM0000753
Abstract: Four experiments examined the claims that people can intuitively assess the logical validity of arguments, and that qualitatively different reasoning processes drive intuitive and explicit validity assessments. In each study participants evaluated arguments varying in validity and believability using either deductive criteria (logic task) or via an intuitive, affective response (liking task). Experiment 1 found that people are sensitive to argument validity on both tasks, with valid arguments receiving higher liking ratings as well as higher deductive ratings than invalid arguments. However, the claim that this effect is driven by logical intuitions was challenged by the finding that sensitivity to validity in both the liking and logic tasks was affected in similar ways by manipulations of concurrent memory load (Experiments 1 and 2) and variations in individual working memory capacity (Experiments 3 and 4). In both tasks, better discrimination between valid and invalid arguments was found when more working memory resources were available. Formal signal detection models of reasoning were tested against the experimental data using signed difference analysis (Stephens, Dunn, & Hayes, 2018b). A single-process reasoning model, which assumes that argument evaluation in both the logic and liking tasks involves a single latent dimension for assessing argument strength but different response criteria for each task, was found to be consistent with the data from each experiment (as were some dual-process models). The experimental and modeling results confirm that people are sensitive to argument validity in both explicit logic and affect rating tasks, but that these results can be explained by a single underlying reasoning process. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Publisher: Springer Science and Business Media LLC
Date: 25-01-2019
Publisher: American Psychological Association (APA)
Date: 02-2019
DOI: 10.1037/XLM0000587
Abstract: When asked to determine whether a syllogistic argument is deductively valid, people are influenced by their prior beliefs about the believability of the conclusion. Recently, two competing explanations for this belief bias effect have been proposed, each based on signal detection theory (SDT). […]
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/XLM0000528
Abstract: Delayed feedback during categorization training has been hypothesized to differentially affect 2 systems that underlie learning for rule-based (RB) or information-integration (II) structures. We tested an alternative possibility: that II learning requires more precise item representations than RB learning, and so is harmed more by a delay interval filled with a confusable mask. Experiments 1 and 2 examined the effect of feedback delay on memory for RB and II exemplars, both without and with concurrent categorization training. Without the training, II items were indeed more difficult to recognize than RB items, but there was no detectable effect of delay on item memory. In contrast, with concurrent categorization training, there were effects of both category structure and delayed feedback on item memory, which were related to corresponding changes in category learning. However, we did not observe the critical selective impact of delay on II classification performance that has been shown previously. Our own results were also confirmed in a follow-up study (Experiment 3) involving only categorization training. The selective influence of feedback delay on II learning appears to be contingent on the relative size of subgroups of high-performing participants, and in fact does not support the claim that RB and II category learning are qualitatively different. We conclude that a key part of successfully solving perceptual categorization problems is developing more precise item representations, which can be impaired by delayed feedback during training. More important, the evidence for multiple systems of category learning is even weaker than previously proposed. (PsycINFO Database Record)
Publisher: American Psychological Association (APA)
Date: 03-2018
DOI: 10.1037/REV0000088
Abstract: Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record)
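For illustration, the following is a minimal sketch (not the authors' code) of the kind of single-process signal detection model described in this abstract: argument strength is a single latent Gaussian dimension, and induction versus deduction judgments differ only in where the decision criterion is placed. All numerical parameter values below are illustrative assumptions.

# Single latent dimension, two response criteria: a sketch of the
# single-process signal detection account described above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent argument strength: valid arguments draw from a higher-mean
# distribution than invalid ones (equal-variance Gaussian SDT).
strength_valid = rng.normal(loc=1.0, scale=1.0, size=n)
strength_invalid = rng.normal(loc=0.0, scale=1.0, size=n)

# One process, two criteria: deduction uses a stricter criterion than
# induction (hypothetical placements).
for task, criterion in [("induction", 0.3), ("deduction", 0.9)]:
    endorse_valid = (strength_valid > criterion).mean()
    endorse_invalid = (strength_invalid > criterion).mean()
    print(f"{task}: endorse valid {endorse_valid:.2f}, "
          f"endorse invalid {endorse_invalid:.2f}")

Running the sketch shows how a single underlying strength dimension can still yield different endorsement rates under induction and deduction instructions, purely because of where each task's criterion sits.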
Publisher: Elsevier BV
Date: 06-2020
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/XLM0000527
Abstract: Three experiments examined the number of qualitatively different processing dimensions needed to account for inductive and deductive reasoning. In each study, participants were presented with arguments that varied in logical validity and consistency with background knowledge (believability), and evaluated them according to deductive criteria (whether the conclusion was necessarily true given the premises) or inductive criteria (whether the conclusion was plausible given the premises). We examined factors including working memory load (Experiments 1 and 2), individual working memory capacity (Experiments 1 and 2), and decision time (Experiment 3), which, according to dual-processing theories, modulate the contribution of heuristic and analytic processes to reasoning. A number of empirical dissociations were found. Argument validity affected deduction more than induction. Argument believability affected induction more than deduction. Lower working memory capacity reduced sensitivity to argument validity and increased sensitivity to argument believability, especially under induction instructions. Reduced decision time led to decreased sensitivity to argument validity. State-trace analyses of each experiment, however, found that only a single underlying dimension was required to explain patterns of inductive and deductive judgments. These results show that the dissociations, which have traditionally been seen as supporting dual-processing models of reasoning, are consistent with a single-process model that assumes a common evidentiary scale for induction and deduction. (PsycINFO Database Record)
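As a rough illustration of the state-trace logic invoked here (not the authors' analysis code): if induction and deduction judgments are both monotone functions of one latent dimension, then condition means for the two tasks, plotted against each other, must fall on a single monotonic curve. The latent values and response functions below are illustrative assumptions.

# State-trace logic sketch: one latent dimension, two monotone mappings.
import numpy as np

latent = np.array([0.2, 0.5, 0.8, 1.1])            # hypothetical condition strengths
induction = 1 / (1 + np.exp(-3 * (latent - 0.4)))  # two different monotone
deduction = 1 / (1 + np.exp(-5 * (latent - 0.7)))  # mappings of the same latent

# Order conditions by induction rate; a single underlying dimension implies
# the deduction rates must then also be in (weakly) increasing order.
order = np.argsort(induction)
print("monotone state trace:", bool(np.all(np.diff(deduction[order]) >= 0)))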
Publisher: American Psychological Association (APA)
Date: 11-2022
DOI: 10.1037/XLM0001105
Abstract: Much recent research and theorizing in the field of reasoning has been concerned with intuitive sensitivity to logical validity, such as the logic-brightness effect, in which logically valid arguments are judged to have a "brighter" typeface than invalid arguments. We propose and test a novel signal competition account of this phenomenon. Our account makes two assumptions: (a) as per the demands of the logic-brightness task, people attempt to find a perceptual signal to guide brightness judgments, but (b) when the perceptual signal is hard to discern, they instead attend to cues such as argument validity. Experiment 1 tested this account by manipulating the difficulty of the perceptual contrast. When contrast discrimination was relatively difficult, we replicated the logic-brightness effect. When the discrimination was easy, the effect was eliminated. Experiment 2 manipulated the ambiguity of the perceptual task, comparing discrimination performance when the perceptual contrast was labeled in terms of rating "brightness" or "darkness". When the less ambiguous darkness labeling was used, there was no evidence of a logic-brightness effect. In both experiments, individual sensitivity to the perceptual discrimination was negatively correlated with sensitivity to argument validity. Hierarchical latent mixture modeling revealed distinct individual strategies: responses based on perceptual cues, responses based on validity, or guessing. Consistent with the signal competition account, the proportion of those responding to validity increased with perceptual discrimination difficulty or task ambiguity. The results challenge explanations of the logic-brightness effect based on parallel dual-process models of reasoning. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: American Psychological Association (APA)
Date: 09-2017
DOI: 10.1037/XAP0000130
Abstract: Unfamiliar, one-to-one face matching has been shown to be error-prone. However, it is unknown whether there is a strong relationship between confidence and accuracy in this task. If there is, then confidence could be used as an indicator of accuracy in real-world face matching settings such as border security, where the objectively correct decision is typically unknown. Two experiments examined the overall confidence-accuracy relationship, as well as the relationship for positive (match) and negative (mismatch) decisions. Furthermore, they tested whether these relationships were affected by factors relevant to applied face matching settings: the proportion of mismatching trials (PMT), and the task orientation of the decision-maker (look for matches, or look for mismatches). Both calibration analyses and signal detection methods were applied to assess performance. The results showed that confidence can have a high correspondence with accuracy overall, regardless of task orientation but with small effects of PMT. Thus, confidence is promising as an indicator of accuracy in face matching. However, PMT systematically produces large detrimental effects on the confidence-accuracy relationships for positive and negative decisions, when considered separately. Signal detection measures help with understanding these effects and proposing future research directions for improving the relationships. (PsycINFO Database Record)
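The basic shape of a confidence-accuracy calibration analysis of the kind mentioned here can be sketched as follows (an illustration, not the paper's analysis code): bin trials by confidence rating and compare mean confidence in each bin with the proportion of correct decisions. The simulated data below are illustrative assumptions, not the study's data.

# Calibration sketch: simulated confidence ratings vs. accuracy by bin.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
correct = rng.random(n) < 0.8                      # hypothetical 80% accuracy
# Hypothetical confidence ratings (50-100%), higher on average when correct.
confidence = np.clip(rng.normal(np.where(correct, 85, 70), 10), 50, 100)

bins = np.arange(50, 101, 10)                      # 50-60, 60-70, ..., 90-100
idx = np.minimum(np.digitize(confidence, bins) - 1, len(bins) - 2)
for b in range(len(bins) - 1):
    mask = idx == b
    if mask.any():
        print(f"confidence {bins[b]}-{bins[b + 1]}: "
              f"mean conf {confidence[mask].mean():.1f}, "
              f"accuracy {correct[mask].mean():.2f}")

Good calibration corresponds to mean confidence in each bin tracking the bin's proportion correct; systematic gaps between the two indicate over- or underconfidence.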
Publisher: Springer Science and Business Media LLC
Date: 03-04-2015
DOI: 10.3758/S13421-015-0522-6
Abstract: Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions, and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of pharaohs' reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, as well as important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: error in estimating quantities in a familiar domain, and error in extending to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
No related grants have been discovered for Rachel Stephens.