ORCID Profile
0000-0002-3950-3460
Current Organisation
University of Western Australia
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Learning, Memory, Cognition And Language | Psychological Methodology, Design and Analysis | Psychology and Cognitive Sciences not elsewhere classified | Cognitive Science | Other Psychology and Cognitive Sciences | Linguistic Processes (Incl. Speech Production And Comprehension) | Biological Psychology (Neuropsychology, Psychopharmacology, Physiological Psychology) | Cognitive Science not elsewhere classified | Psychology not elsewhere classified | Forensic Psychology | Decision Making | Developmental Psychology and Ageing
Expanding Knowledge in Psychology and Cognitive Sciences | Behavioural and cognitive sciences | Mental health | Expanding Knowledge in the Information and Computing Sciences | Nervous system and disorders | Expanding Knowledge in the Biological Sciences | Law Enforcement | Diagnostic methods | Expanding Knowledge in the Mathematical Sciences
Publisher: Editorial Pontificia Universidad Javeriana
Date: 31-12-1969
DOI: 10.11144/JAVERIANA.UPSY12-5.ASSP
Abstract: A series of experiments was devised to test the idea that sensorimotor systems activate during the processing of emotionally laden stimuli. In Experiments 1 and 2, participants judged the pleasantness of emotionally laden sentences while holding a pen in the mouth. Experiments 3 and 4 were similar, but the experimental materials were emotionally laden images. In Experiments 5 and 6, the same bodily manipulation was kept while participants judged facial expressions. The first pair of experiments replicated findings suggesting that sensorimotor systems are activated during the processing of emotionally laden language. However, the follow-up experiments suggested that dual activation of both perceptual and motor systems is not always necessary. For the particular case of emotionally laden stimuli, the results suggested that the perceptual system seems to drive the processing. It is also shown that a high resonance between the sensorimotor properties afforded by the stimuli and the sensorimotor systems activated in the cogniser elicits emotional states. The results invite a reconsideration of radical versions of embodiment accounts and instead support a graded-embodiment view.
Publisher: Informa UK Limited
Date: 12-2006
DOI: 10.1080/13803390500409617
Abstract: The present study examined the validity of cognitive assessment in older adults when administered in a second language (English). A battery of tests that included the MMSE, the CAMCOG, and the Logical Memory test of the Wechsler Memory Scale-III was administered to 121 older community volunteers of either an English-Speaking Background (ESB) or a Non-English-Speaking Background (NESB) living in the metropolitan area of Perth, Western Australia. The Logical Memory test was scored using Latent Semantic Analysis (LSA). It was hypothesized that this scoring method would be less affected by cultural and linguistic differences than standard scoring methods. The results suggest that LSA is a more robust measure of cognitive function than traditional scoring methods and may therefore improve the validity of cognitive assessment for NESB participants.
Publisher: Springer Science and Business Media LLC
Date: 04-08-2020
DOI: 10.1186/S41235-020-00234-5
Abstract: Debate regarding the best way to test and measure eyewitness memory has dominated the eyewitness literature for more than 30 years. We argue that resolution of this debate requires the development and application of appropriate measurement models. In this study we developed models of simultaneous and sequential lineup presentations and used these to compare the two procedures in terms of underlying discriminability and response bias, thereby testing a key prediction of diagnostic feature detection theory: that underlying discriminability should be greater for simultaneous than for stopping-rule sequential lineups. We fit the models to the corpus of studies originally described by Palmer and Brewer (2012, Law and Human Behavior, 36(3), 247–255), to data from a new experiment, and to eight recent studies comparing simultaneous and sequential lineups. We found that although responses tended to be more conservative for sequential lineups, there was little or no difference in underlying discriminability between the two procedures. We discuss the implications of these results for diagnostic feature detection theory and for other kinds of sequential lineups used in current jurisdictions.
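The measurement models in the study are more elaborate than this, but the two quantities they estimate, underlying discriminability and response bias, can be sketched with the standard equal-variance Gaussian signal detection formulas. The hit and false-alarm rates below are invented purely for illustration, not taken from the study:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian SDT: discriminability (d') and criterion (c)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, c

# Hypothetical rates: the sequential procedure is more conservative
# (lower hit AND false-alarm rates) yet has similar underlying d'.
sim = sdt_measures(0.69, 0.31)   # simultaneous lineup
seq = sdt_measures(0.60, 0.23)   # sequential lineup
```

Here the two procedures differ in criterion c (sequential is more conservative) while d' is nearly identical, which is the pattern the paper reports.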
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2021
Abstract: A 22-year-old female patient demonstrated physical examination findings of Parsonage-Turner syndrome (PTS) 5 days after left shoulder arthroscopic surgery with interscalene brachial plexus block. The diagnosis was confirmed with electrodiagnostic testing 2 weeks after surgery. Symptoms resolved spontaneously within 2 years with full return-to-preinjury sport and job activity. These outcomes were maintained at the 10-year follow-up. PTS should be considered in the differential diagnoses for any postsurgical neurological variations after upper extremity surgery under regional anesthesia.
Publisher: Center for Open Science
Date: 13-02-2020
Abstract: Dual-process theories posit that separate kinds of intuitive (Type 1) and reflective (Type 2) processes contribute to reasoning. Under this view, inductive judgments are more heavily influenced by Type 1 processing, and deductive judgments are more strongly influenced by Type 2 processing. Alternatively, single-process theories propose that both types of judgments are based on a common form of assessment. The competing accounts were respectively instantiated as two-dimensional and one-dimensional signal detection models, and their predictions were tested against specifically targeted novel data using signed difference analysis. In two experiments, participants evaluated valid and invalid arguments, under induction or deduction instructions. Arguments varied in believability and type of conditional argument structure. Additionally, we used logic training to strengthen Type 2 processing in deduction (Experiments 1 & 2) and belief training to strengthen Type 1 processing in induction (Experiment 2). The logic training successfully improved validity-discrimination, and differential effects on induction and deduction judgments were evident in Experiment 2. While such effects are consistent with popular dual-process accounts, crucially, a one-dimensional model successfully accounted for the results. We also demonstrate that the one-dimensional model is psychologically interpretable, with the model parameters varying sensibly across conditions. We argue that single-process accounts have been prematurely discounted, and formal modeling approaches are important for theoretical progress in the reasoning field.
Publisher: Springer Science and Business Media LLC
Date: 07-2010
DOI: 10.3758/MC.38.5.563
Publisher: Center for Open Science
Date: 19-03-2020
Abstract: Paul Meehl’s famous critique laid out in detail many of the pathological practices and conceptual confusions that stand in the way of meaningful theoretical progress in psychological science. Integrating some of Meehl’s points, we argue that one of the reasons for the slow progress in psychology is the failure to acknowledge the problem of coordination. This problem arises whenever we attempt to measure quantities that are not directly observable, but can be inferred from observable variables. The solution to this problem is far from trivial, as demonstrated by a historical analysis of thermometry. Also, it is not a problem that can be solved by empirical means. At its center is the need for a clear understanding of the functional relations between theoretical concepts and observations. In the case of psychology, the problem of coordination has dramatic implications in the sense that it severely limits our ability to make meaningful theoretical claims. We discuss several examples and lay out some of the solutions that are currently available.
Publisher: Elsevier BV
Date: 06-2019
Publisher: Elsevier BV
Date: 1982
Publisher: American Psychological Association (APA)
Date: 1988
DOI: 10.1037/0033-295X.95.1.91
Abstract: This study evaluated the effect of maternal smoking during pregnancy on levels of umbilical cord erythropoietin. Erythropoietin levels were measured in umbilical cord sera of 60 newborns who were delivered vaginally at term. There were 20 (33%) smoking and 40 (67%) nonsmoking mothers. Mean cord serum erythropoietin levels were significantly lower in the nonsmokers (nonsmokers: 24 ± 9 IU/L; smokers: 61 ± 46 IU/L; P < .001). There was a significant positive correlation between the number of cigarettes smoked per day and cord serum erythropoietin levels (r = 0.58, P ≤ .05). Smoking during pregnancy is associated with increased levels of umbilical cord erythropoietin at birth. This may indicate a risk of fetal hypoxia and growth restriction. Education and encouragement of smoking cessation during pregnancy are important to avoid associated fetal and maternal morbidity and mortality.
Publisher: Informa UK Limited
Date: 03-2011
Publisher: American Psychological Association (APA)
Date: 1983
Publisher: American Psychological Association (APA)
Date: 02-2019
DOI: 10.1037/XLM0000587
Abstract: When asked to determine whether a syllogistic argument is deductively valid, people are influenced by their prior beliefs about the believability of the conclusion. Recently, two competing explanations for this belief bias effect have been proposed, each based on signal detection theory (SDT). Under a
Publisher: Elsevier BV
Date: 02-2012
Publisher: Center for Open Science
Date: 17-10-2022
Abstract: van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.
Publisher: Routledge
Date: 19-09-2016
Publisher: Cambridge University Press (CUP)
Date: 2023
DOI: 10.1017/S0140525X22002916
Abstract: De Neys offers a welcome departure from the dual-process accounts that have dominated theorizing about reasoning. However, we see little justification for retaining the distinction between intuition and deliberation. Instead, reasoning can be treated as a case of multiple-cue decision making. Reasoning phenomena can then be explained by decision-making models that supply the processing details missing from De Neys's framework.
Publisher: Elsevier BV
Date: 02-2016
Publisher: Informa UK Limited
Date: 02-2002
Abstract: The aim of this study was to compare traditional methods of scoring the Logical Memory test of the Wechsler Memory Scale-III with a new method based on Latent Semantic Analysis (LSA). LSA represents texts as vectors in a high-dimensional semantic space and the similarity of any two texts is measured by the cosine of the angle between their respective vectors. The Logical Memory test was administered to a sample of 72 elderly individuals, 14 of whom were classified as cognitively impaired by the Mini-Mental State Examination (MMSE). The results showed that LSA was at least as valid and sensitive as traditional measures. Partial correlations between prose recall measures and measures of cognitive function indicated that LSA explained all the relationship between Logical Memory and general cognitive function. This suggests that LSA may serve as an improved measure of prose recall.
Publisher: Elsevier BV
Date: 06-2019
Publisher: Center for Open Science
Date: 09-07-2021
Abstract: What is the effect of placing the suspect in different positions in a sequential lineup? To explore this question, we developed and applied the Independent Sequential Lineup model, which analyzes a sequential lineup in terms of both identification position (the position at which the witness identifies a lineup item as the target) and target position (the position at which the target or suspect appears). We conducted a large-scale online eyewitness memory experiment with 7,204 participants, each of whom was tested on a 6-item sequential lineup with an explicit stopping rule. The model fit these data well and revealed systematic effects of lineup position on underlying discriminability and response criteria. We also fit the model to data from a similar pair of experiments conducted recently by Wilson, Donnelly, Christenfeld, and Wixted (2019, Journal of Memory and Language, 104, 108-125), both with and without application of a stopping rule. In all data sets, if a stopping rule was applied, underlying discriminability was found to be constant, or to increase slightly, across target position. In the absence of a stopping rule, discriminability was found to decrease substantially. We also observed a substantial increase in response criteria following presentation of the target. We discuss the implications of these findings for current theories of recognition memory and for current applications of the sequential lineup in different jurisdictions.
Publisher: Proceedings of the National Academy of Sciences
Date: 22-12-2015
Abstract: In contrast to prior research, recent studies of simulated crimes have reported that (i) eyewitness confidence can be a strong indicator of accuracy and (ii) traditional simultaneous lineups may be diagnostically superior to sequential lineups. The significance of our study is that these issues were investigated using actual eyewitnesses to a crime. Recent laboratory trends were confirmed: eyewitness confidence was strongly related to accuracy, and simultaneous lineups were, if anything, diagnostically superior to sequential lineups. These results suggest that recent reforms in the legal system, which were based on the results of older research, may need to be reevaluated.
Publisher: American Psychological Association (APA)
Date: 11-2022
DOI: 10.1037/XLM0001105
Abstract: Much recent research and theorizing in the field of reasoning has been concerned with intuitive sensitivity to logical validity, such as the logic-brightness effect, in which logically valid arguments are judged to have a "brighter" typeface than invalid arguments. We propose and test a novel signal competition account of this phenomenon. Our account makes two assumptions: (a) as per the demands of the logic-brightness task, people attempt to find a perceptual signal to guide brightness judgments, but (b) when the perceptual signal is hard to discern, they instead attend to cues such as argument validity. Experiment 1 tested this account by manipulating the difficulty of the perceptual contrast. When contrast discrimination was relatively difficult, we replicated the logic-brightness effect. When the discrimination was easy, the effect was eliminated. Experiment 2 manipulated the ambiguity of the perceptual task, comparing discrimination performance when the perceptual contrast was labeled in terms of rating "brightness" or "darkness". When the less ambiguous darkness labeling was used, there was no evidence of a logic-brightness effect. In both experiments, individual sensitivity to the perceptual discrimination was negatively correlated with sensitivity to argument validity. Hierarchical latent mixture modeling revealed distinct individual strategies: responses based on perceptual cues, responses based on validity, or guessing. Consistent with the signal competition account, the proportion of those responding to validity increased with perceptual discrimination difficulty or task ambiguity. The results challenge explanations of the logic-brightness effect based on parallel dual-process models of reasoning.
Publisher: Elsevier BV
Date: 11-2006
Publisher: Center for Open Science
Date: 13-12-2022
Abstract: Loftus [(1978), Memory & Cognition, 6, 312-319] highlighted the distinction between a theoretical concept such as memory or attention, and its observed measure such as hit rate or percent correct. If the functional relationship between the concept and its measure is non-linear then only some interaction effects are interpretable. This is an example of the wider 'problem of coordination' which pervades scientific measurement. Loftus drew on the principles of additive conjoint measurement (ACM) to discuss the consequences when the coordination function is assumed to be monotonic. This led to the distinction between removable interactions that are consistent with an additive effect on the underlying theoretical concept and nonremovable interactions that are not. However, the adoption of these ideas by researchers has been greatly limited by the fact that no statistical procedure exists to determine if and to what extent an interaction is removable or otherwise. The lack of such a procedure has similarly limited the impact of ACM on research practice. The aim of this paper is to present such a procedure.
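The removable-interaction idea can be made concrete with a toy numerical sketch. The logistic link and the effect sizes below are invented for illustration: latent strengths are perfectly additive in two factors, yet the observed measure shows an interaction, which the (monotone) inverse transform removes exactly:

```python
from math import exp, log

def logistic(x):
    """Monotone mapping from latent strength to observed accuracy."""
    return 1 / (1 + exp(-x))

def logit(p):
    """Inverse of the logistic mapping."""
    return log(p / (1 - p))

# Latent strengths are additive in factors A and B (no interaction)...
latent = {(a, b): 0.5 * a + 1.0 * b for a in (0, 1) for b in (0, 1)}
# ...but observed accuracies show a nonzero (removable) interaction:
obs = {k: logistic(v) for k, v in latent.items()}
raw_interaction = (obs[1, 1] - obs[1, 0]) - (obs[0, 1] - obs[0, 0])
# Applying the monotone inverse transform removes it exactly:
rec = {k: logit(v) for k, v in obs.items()}
rec_interaction = (rec[1, 1] - rec[1, 0]) - (rec[0, 1] - rec[0, 0])
```

Because the interaction vanishes under some monotone transform, it is removable in Loftus's sense; a nonremovable interaction (e.g., a crossover) survives every monotone transform.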
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/XAP0000157
Abstract: Estimator variables are factors that can affect the accuracy of eyewitness identifications but that are outside the control of the criminal justice system. Examples include (1) the duration of exposure to the perpetrator, (2) the passage of time between the crime and the identification (retention interval), and (3) the distance between the witness and the perpetrator at the time of the crime. Suboptimal estimator variables (e.g., long distance) have long been thought to reduce the reliability of eyewitness identifications (IDs), but recent evidence suggests that this is not true of IDs made with high confidence and may or may not be true of IDs made with lower confidence. The evidence suggests that although suboptimal estimator variables decrease discriminability (i.e., the ability to distinguish innocent from guilty suspects), they do not decrease the reliability of IDs made with high confidence. Such findings are inconsistent with the longstanding "optimality hypothesis" and therefore require a new theoretical framework. Here, we propose that a signal-detection-based likelihood ratio account, which has long been a mainstay of basic theories of recognition memory, naturally accounts for these findings.
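The intuition behind the likelihood ratio account can be illustrated with a minimal equal-variance Gaussian sketch; all parameter values below are invented. When memory is weaker (lower d'), a stronger decision-variable value is needed to reach the same likelihood ratio, but an ID that reaches it is just as diagnostic:

```python
from statistics import NormalDist

def likelihood_ratio(x, d_prime):
    """LR that decision-variable value x came from a guilty suspect
    (strength ~ N(d', 1)) rather than an innocent one (~ N(0, 1))."""
    return NormalDist(d_prime, 1).pdf(x) / NormalDist(0, 1).pdf(x)

def posterior_guilty(x, d_prime, base_rate=0.5):
    """Posterior probability of guilt given x, via Bayes' rule."""
    lr = likelihood_ratio(x, d_prime)
    prior_odds = base_rate / (1 - base_rate)
    return lr * prior_odds / (1 + lr * prior_odds)

# Same likelihood ratio, exp(d'*x - d'^2/2) = e^2 in both cases,
# hence the same posterior despite very different discriminability:
strong = posterior_guilty(2.0, d_prime=2.0)
weak = posterior_guilty(2.5, d_prime=1.0)
```

This is the sense in which high-confidence IDs can remain reliable even when an estimator variable has reduced overall discriminability: confidence tracks the likelihood ratio, not d' itself.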
Publisher: Center for Open Science
Date: 12-02-2020
Abstract: The debate regarding the best way to test and measure eyewitness memory has dominated the eyewitness literature for more than thirty years. We argue that resolving this debate requires the development and application of appropriate measurement models. In this study we develop models of simultaneous and sequential lineup presentations and use these to compare the procedures in terms of discriminability and response bias. We tested a key prediction of the diagnostic feature detection hypothesis: that discriminability should be greater for simultaneous than for sequential lineups. We fit the models to the corpus of studies originally described by Palmer and Brewer (2012, Law and Human Behavior, 36(3), 247-255) and to data from a new experiment. The results of both investigations showed that discriminability did not differ between the two procedures, while responses were more conservative for sequential presentation than for simultaneous presentation. We conclude that the two procedures do not differ in the efficiency with which they allow eyewitness memory to be expressed. We discuss the implications of this for the diagnostic feature detection hypothesis and for other sequential lineup procedures used in current jurisdictions.
Publisher: Wiley
Date: 2002
DOI: 10.1002/ACP.846
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0027867
Abstract: Evidence that learning rule-based (RB) and information-integration (II) category structures can be dissociated across different experimental variables has been used to support the view that such learning is supported by multiple learning systems. Across 4 experiments, we examined the effects of 2 variables, the delay between response and feedback and the informativeness of feedback, which had previously been shown to dissociate learning of the 2 types of category structure. Our aim was twofold: first, to determine whether these dissociations meet the more stringent inferential criteria of state-trace analysis and, second, to determine the conditions under which they can be observed. Experiment 1 confirmed that a mask-filled feedback delay dissociated the learning of RB and II category structures with minimally informative (yes/no) feedback and also met the state-trace criteria for the involvement of multiple latent variables. Experiment 2 showed that this effect is eliminated when a less similar, fixed pattern mask is presented in the interval between response and feedback. Experiment 3 showed that the selective effect of feedback delay on II learning is reduced with fully informative feedback (in which the correct category is specified after an incorrect response) and that feedback type did not dissociate RB and II learning. Experiment 4 extended the results of Experiment 2, showing that the differential effect of feedback delay is eliminated when a fixed pattern mask is used. These results pose important challenges to models of category learning, and we discuss their implications for multiple learning system models and their alternatives.
Publisher: Wiley
Date: 08-2010
DOI: 10.1002/ACP.1735
Publisher: Center for Open Science
Date: 20-06-2021
Abstract: Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when they are built so as to align model parameters with potential causal mechanisms and how those mechanisms manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical; in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (in press) in the context of using Bayes factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two. The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model comparison metrics like Bayes factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
Publisher: SAGE Publications
Date: 29-01-2021
Abstract: Paul Meehl’s famous critique detailed many of the problematic practices and conceptual confusions that stand in the way of meaningful theoretical progress in psychological science. By integrating many of Meehl’s points, we argue that one of the reasons for the slow progress in psychology is the failure to acknowledge the problem of coordination. This problem arises whenever we attempt to measure quantities that are not directly observable but can be inferred from observable variables. The solution to this problem is far from trivial, as demonstrated by a historical analysis of thermometry. The key challenge is the specification of a functional relationship between theoretical concepts and observations. As we demonstrate, empirical means alone cannot determine this relationship. In the case of psychology, the problem of coordination has dramatic implications in the sense that it severely constrains our ability to make meaningful theoretical claims. We discuss several examples and outline some of the solutions that are currently available.
Publisher: American Psychological Association (APA)
Date: 2015
DOI: 10.1037/XLM0000083
Abstract: The strength of conclusions about the adoption of different categorization strategies, and about their implications for theories of the cognitive and neural bases of category learning, depends heavily on the techniques used to identify strategy use. We examine performance in an often-used "information-integration" category structure and demonstrate that strategy identification is affected markedly by the range of models under consideration, the type of data collected, and the model-selection techniques employed. We use a set of 27 potential models that represent alternative rule-based and information-integration categorization strategies. Our experimental paradigm includes the presentation of nonreinforced transfer stimuli that improve one's ability to discriminate among the predictions of alternative models. Our model-selection techniques incorporate uncertainty in the identification of individuals as either rule-based or information-integration strategy users. Based on this analysis we identify 48% of participants as unequivocally using an information-integration strategy. However, adopting the standard practice of using a restricted set of models, restricted data, and ignoring the degree of support for a particular strategy, we would typically conclude that 89% of participants used an information-integration strategy. We discuss the implications of potentially erroneous strategy identification for the security of conclusions about the categorization capabilities of various participant and patient groups.
Publisher: Proceedings of the National Academy of Sciences
Date: 04-02-2013
Abstract: A recurring issue in neuroscience concerns evidence as to whether two or more brain regions implement qualitatively different functions. Here we introduce the application of state-trace analysis to measures of neural activity, illustrating how this analysis can furnish compelling evidence for qualitatively different functions, even when the precise “neurometric” mapping between function and brain measure is unknown. In doing so, we address a long-standing debate about the brain systems supporting human memory: whether the hippocampus and the perirhinal cortex, two key components of the medial temporal lobe memory system, provide qualitatively different contributions to recognition memory. An alternative account has been that both regions support a single shared function, such as memory strength, with the apparent dissociations obtained by previous neuroimaging studies merely reflecting different, nonlinear neurometric mappings across regions. To adjudicate between these scenarios, we analyze intracranial electroencephalographic data obtained directly from human hippocampus and perirhinal cortex during a recognition paradigm and apply state-trace analysis to responses evoked by the retrieval cue as a function of different types of memory judgment. Assuming only that the neurometric mapping in each region is monotonic, any unidimensional theory (such as the memory-strength account) will produce a monotonic state trace. Critically, results showed a nonmonotonic state trace; that is, activity levels in the two regions did not show the same relative ordering across memory conditions. This nonmonotonic state trace demonstrates that there are at least two different functions implemented across the hippocampus and perirhinal cortex, allowing formal rejection of a single-process account of medial temporal lobe contributions to recognition memory.
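The core monotonicity check behind state-trace analysis is simple to express: if a single latent variable drives both measures through monotonic mappings, the two measures must preserve the same rank order across conditions. The following sketch, with invented condition means, shows the test (the published analysis additionally handles measurement noise, which this sketch omits):

```python
def monotonic_state_trace(measure_a, measure_b):
    """True if two dependent measures preserve the same rank order
    across conditions, i.e. the state trace is consistent with one
    latent variable (assuming monotonic 'neurometric' mappings)."""
    pairs = list(zip(measure_a, measure_b))
    return all(
        (a1 - a2) * (b1 - b2) >= 0
        for i, (a1, b1) in enumerate(pairs)
        for a2, b2 in pairs[i + 1:]
    )

# Hypothetical condition means for two regions (invented numbers):
hippocampus = [0.2, 0.5, 0.8]
perirhinal  = [0.1, 0.6, 0.4]   # ordering reverses between conditions 2 and 3
single_process = monotonic_state_trace(hippocampus, perirhinal)
```

A nonmonotonic state trace (as here, where `single_process` is `False`) rules out any single shared latent variable, which is the logic the paper applies to the intracranial data.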
Publisher: MIT Press - Journals
Date: 09-2011
Abstract: To investigate potentially dissociable recognition memory responses in the hippocampus and perirhinal cortex, fMRI studies have often used confidence ratings as an index of memory strength. Confidence ratings, although correlated with memory strength, also reflect sources of variability, including task-irrelevant item effects and differences both within and across individuals in terms of applying decision criteria to separate weak from strong memories. We presented words one, two, or four times at study in each of two different conditions, focused and divided attention, and then conducted separate fMRI analyses of correct old responses on the basis of subjective confidence ratings or estimates from single- versus dual-process recognition memory models. Overall, the effect of focusing attention on spaced repetitions at study manifested as enhanced recognition memory performance. Confidence- versus model-based analyses revealed disparate patterns of hippocampal and perirhinal cortex activity at both study and test and both within and across hemispheres. The failure to observe equivalent patterns of activity indicates that fMRI signals associated with subjective confidence ratings reflect additional sources of variability. The results are consistent with predictions of single-process models of recognition memory.
Publisher: Elsevier BV
Date: 08-2003
Publisher: Springer Science and Business Media LLC
Date: 03-04-2015
DOI: 10.3758/S13421-015-0522-6
Abstract: Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
Publisher: Elsevier BV
Date: 03-1984
DOI: 10.1016/0001-6918(84)90065-9
Abstract: Photodynamic therapy (PDT) using 5-aminolevulinic acid (ALA) has previously shown promising results in cancerous cell destruction. The present study was conducted to evaluate the efficacy of this treatment option on oral epithelial dysplasia in Wistar rats. Furthermore, the microscopic effects of systemic versus topical administration of ALA before laser illumination were assessed. Thirty male Wistar rats (200-250 g) were used in the present study. Tongue dysplasia was induced by daily delivery of a 20 ppm solution of 4-nitroquinoline-1-oxide (4NQO) for 3 months. Rats were then divided into 3 groups of 10: group 1 received systemic ALA-based PDT (30 mg/kg ALA), group 2 received topical ALA-based PDT (20% ALA solution), and group 3 (control) was left untreated. Tongue specimens were fixed for histopathological evaluation and dysplasia was graded at the microscopic level. Data were compared between treatment groups using the Mann-Whitney test. The rate of atypical dysplastic cells decreased significantly in both the topical and systemic groups. ALA-mediated PDT thus appears to be an effective treatment option for the destruction of dysplastic cells, although the extent of this effect depends on the mode of ALA administration before light illumination.
Publisher: Elsevier BV
Date: 06-2020
Publisher: Frontiers Media SA
Date: 20-12-2019
Publisher: American Psychological Association (APA)
Date: 04-2017
DOI: 10.1037/XLM0000323
Abstract: It is sometimes supposed that category learning involves competing explicit and procedural systems, with only the former reliant on working memory capacity (WMC). In 2 experiments participants were trained for 3 blocks on both filtering (often said to be learned explicitly) and condensation (often said to be learned procedurally) category structures. Both experiments (total N = 160) demonstrated that participants with higher WMC tended to be more accurate in condensation tasks, but not less accurate in filtering tasks. Furthermore, state-trace analysis did not find a differential influence of WMC on performance in these tasks. Finally, inspection of the mixture of response strategies at play across the 2 conditions and 3 blocks showed only a minor influence of WMC, and then only on later training blocks. The results provide no support for the existence of a "system" of category learning that is independent of working memory and are instead consistent with most single-system interpretations of category learning.
Publisher: American Psychological Association (APA)
Date: 09-2018
DOI: 10.1037/XLM0000527
Abstract: Three experiments examined the number of qualitatively different processing dimensions needed to account for inductive and deductive reasoning. In each study, participants were presented with arguments that varied in logical validity and consistency with background knowledge (believability), and evaluated them according to deductive criteria (whether the conclusion was necessarily true given the premises) or inductive criteria (whether the conclusion was plausible given the premises). We examined factors including working memory load (Experiments 1 and 2), individual working memory capacity (Experiments 1 and 2), and decision time (Experiment 3), which according to dual-processing theories, modulate the contribution of heuristic and analytic processes to reasoning. A number of empirical dissociations were found. Argument validity affected deduction more than induction. Argument believability affected induction more than deduction. Lower working memory capacity reduced sensitivity to argument validity and increased sensitivity to argument believability, especially under induction instructions. Reduced decision time led to decreased sensitivity to argument validity. State-trace analyses of each experiment, however, found that only a single underlying dimension was required to explain patterns of inductive and deductive judgments. These results show that the dissociations, which have traditionally been seen as supporting dual-processing models of reasoning, are consistent with a single-process model that assumes a common evidentiary scale for induction and deduction.
Publisher: MIT Press - Journals
Date: 12-2011
DOI: 10.1162/JOCN_A_00092
Abstract: In the present study, items pre-exposed in a familiarization series were included in a list discrimination task to manipulate memory strength. At test, participants were required to discriminate strong targets and strong lures from weak targets and new lures. This resulted in a concordant pattern of increased “old” responses to strong targets and lures. Model estimates attributed this pattern to either equivalent increases in memory strength across the two types of items (unequal variance signal detection model) or equivalent increases in both familiarity and recollection (dual process signal detection [DPSD] model). Hippocampal activity associated with strong targets and lures showed equivalent increases compared with missed items. This remained the case when analyses were restricted to high-confidence responses considered by the DPSD model to reflect predominantly recollection. A similar pattern of activity was observed in parahippocampal cortex for high-confidence responses. The present results are incompatible with “noncriterial” or “false” recollection being reflected solely in inflated DPSD familiarity estimates and support a positive correlation between hippocampal activity and memory strength irrespective of the accuracy of list discrimination, consistent with the unequal variance signal detection model account.
Publisher: Informa UK Limited
Date: 11-1997
Publisher: Springer Science and Business Media LLC
Date: 16-05-2023
DOI: 10.1007/S42113-022-00129-2
Abstract: Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when models are built so as to align parameters of the model with potential causal mechanisms and how they manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical—in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (this issue) in the context of using Bayes Factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications (along with other problems identified here) can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two: The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model comparison metrics like Bayes Factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
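The contrast between priors on meaningful units and default standardized priors can be illustrated with a Savage–Dickey Bayes factor for a normal mean. This is a standard textbook computation, not the authors' code, and every number below is hypothetical: a small effect measured in milliseconds, one prior scaled to that unit and one overly wide "default" prior.

```python
from statistics import NormalDist

def savage_dickey_bf01(data_mean, n, sigma, prior_sd):
    """BF01 for H0: mu = 0 via the Savage-Dickey density ratio, assuming a
    Normal(0, prior_sd) prior on mu and known observation sd sigma."""
    se2 = sigma**2 / n                                   # squared standard error
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se2)     # conjugate posterior variance
    post_mean = post_var * (data_mean / se2)             # conjugate posterior mean
    post_at_0 = NormalDist(post_mean, post_var**0.5).pdf(0.0)
    prior_at_0 = NormalDist(0.0, prior_sd).pdf(0.0)
    return post_at_0 / prior_at_0

# Same hypothetical data (20 ms effect, n = 50, sd = 100 ms), two priors:
# one on the raw millisecond scale, one "default" prior far too wide for it.
print(savage_dickey_bf01(data_mean=20.0, n=50, sigma=100.0, prior_sd=30.0))
print(savage_dickey_bf01(data_mean=20.0, n=50, sigma=100.0, prior_sd=1000.0))
```

With the meaningfully scaled prior the Bayes factor is roughly equivocal, while the arbitrarily wide default prior makes the very same data look like strong evidence for the null (Lindley's paradox) — the tail wagging the dog.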
Publisher: Elsevier BV
Date: 08-2018
Publisher: American Psychological Association (APA)
Date: 2004
Publisher: American Psychological Association (APA)
Date: 2008
Publisher: Wiley
Date: 11-2008
DOI: 10.1002/ACP.1403
Publisher: Frontiers Media SA
Date: 2014
Publisher: Springer Science and Business Media LLC
Date: 1999
Publisher: Cambridge University Press (CUP)
Date: 2019
DOI: 10.1017/S0140525X1900181X
Abstract: Bastin et al. propose a dual-process model to understand memory deficits. However, results from state-trace analysis have suggested a single underlying variable in behavioral and neural data. We advocate the usage of unidimensional models that are supported by data and have been successful in understanding memory deficits and in linking to neural data.
Publisher: Springer Science and Business Media LLC
Date: 05-2014
DOI: 10.3758/S13423-014-0637-Y
Abstract: Ashby (2014) has argued that state-trace analysis (STA) is not an appropriate tool for assessing the number of cognitive systems, because it fails in its primary goal of distinguishing single-parameter and multiple-parameter models. We show that this is based on a misunderstanding of the logic of STA, which depends solely on nearly universal assumptions about psychological measurement and clearly supersedes inferences based on functional dissociation and the analysis of interactions in analyses of variance. We demonstrate that STA can be used to draw inferences concerning the number of latent variables mediating the effects of a set of independent variables on a set of dependent variables. We suggest that STA is an appropriate tool to use when making arguments about the number of cognitive systems that must be posited to explain behavior. However, no statistical or inferential procedure is able to provide definitive answers to questions about the number of cognitive systems, simply because the concept of a "system" is not defined in an appropriate way.
Publisher: Elsevier
Date: 2011
Publisher: Elsevier BV
Date: 08-2008
DOI: 10.1016/J.TICS.2008.04.009
Abstract: Cognitive science is replete with fertile and forceful debates about the need for one or more underlying mental processes or systems to explain empirical observations. Such debates can be found in many areas, including learning, memory, categorization, reasoning and decision-making. Multiple-process models are often advanced on the basis of dissociations in data. We argue and illustrate that using dissociation logic to draw conclusions about the dimensionality of data is flawed. We propose that a more widespread adoption of 'state-trace analysis'--an approach that overcomes these flaws--could lead to a re-evaluation of the need for multiple-process models and to a re-appraisal of how these models should be formulated and tested.
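The core inference of state-trace analysis can be sketched numerically: if a single latent variable mediates the effects of the independent variables on two dependent variables, plotting one dependent variable against the other across conditions must trace a monotone curve, and a departure from monotonicity signals more than one latent dimension. A minimal sketch with hypothetical condition means (the function name and data are illustrative, not from the paper, and real applications need a statistical test of monotonicity rather than this exact check):

```python
def is_monotone_trace(dv1, dv2):
    """Return True if the state-trace plot of dv2 against dv1 is monotone,
    i.e. consistent with a single latent variable."""
    pairs = sorted(zip(dv1, dv2))           # order conditions by the first DV
    ys = [y for _, y in pairs]
    return all(a <= b for a, b in zip(ys, ys[1:]))

# Hypothetical condition means for two dependent variables
one_dim = ([0.2, 0.4, 0.6, 0.8], [0.3, 0.5, 0.6, 0.9])   # monotone: one latent variable suffices
two_dim = ([0.2, 0.4, 0.6, 0.8], [0.3, 0.7, 0.5, 0.9])   # non-monotone: more than one needed

print(is_monotone_trace(*one_dim))   # True
print(is_monotone_trace(*two_dim))   # False
```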
Publisher: Springer International Publishing
Date: 2018
Publisher: Elsevier BV
Date: 12-2021
Publisher: Springer Science and Business Media LLC
Date: 30-10-2015
DOI: 10.1038/SREP15831
Abstract: People frequently change their preferences for options of gambles which they play once compared to those they play multiple times. In general, preferences for repeated play gambles are more consistent with the expected values of the options. According to the one-process view, the change in preference is due to a change in the structure of the gamble that is relevant to decision making. According to the two-process view, the change is attributable to a shift in the decision making strategy that is used. To adjudicate between these two theories, we asked participants to choose between gambles played once or 100 times and to choose between them based on their expected value. Consistent with the two-process theory, we found a set of brain regions that were sensitive to the extent of behavioral change between single and aggregated play and also showed significant (de)activation in the expected value choice task. These results support the view that people change their decision making strategies for risky choice considered once or multiple times.
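The single- versus repeated-play contrast can be made concrete with a quick simulation: aggregating plays leaves the per-play expected value unchanged but shrinks the spread of outcomes, which is one reason repeated-play preferences track expected values more closely. The gamble below is hypothetical, not one used in the study:

```python
import random
import statistics

random.seed(1)

# Hypothetical gamble: 50% chance of +200, 50% chance of -100, so EV = +50
def play():
    return 200 if random.random() < 0.5 else -100

# Single plays vs. the per-play average over 100 aggregated plays
single = [play() for _ in range(5000)]
aggregated = [sum(play() for _ in range(100)) / 100 for _ in range(5000)]

# Means match the EV, but aggregation drastically reduces the spread
print(round(statistics.mean(single)), round(statistics.mean(aggregated)))
print(round(statistics.pstdev(single)), round(statistics.pstdev(aggregated)))
```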
Publisher: American Psychological Association (APA)
Date: 11-2021
DOI: 10.1037/REV0000288
Publisher: American Psychological Association (APA)
Date: 04-2019
DOI: 10.1037/XLM0000753
Abstract: Four experiments examined the claims that people can intuitively assess the logical validity of arguments, and that qualitatively different reasoning processes drive intuitive and explicit validity assessments. In each study participants evaluated arguments varying in validity and believability using either deductive criteria (logic task) or via an intuitive, affective response (liking task). Experiment 1 found that people are sensitive to argument validity on both tasks, with valid arguments receiving higher liking as well as higher deductive ratings than invalid arguments. However, the claim that this effect is driven by logical intuitions was challenged by the finding that sensitivity to validity in both liking and logic tasks was affected in similar ways by manipulations of concurrent memory load (Experiments 1 and 2) and variations in individual working memory capacity (Experiments 3 and 4). In both tasks better discrimination between valid and invalid arguments was found when more working memory resources were available. Formal signal detection models of reasoning were tested against the experimental data using signed difference analysis (Stephens, Dunn, & Hayes, 2018b). A single-process reasoning model which assumes that argument evaluation in both logic and liking tasks involves a single latent dimension for assessing argument strength but different response criteria for each task, was found to be consistent with the data from each experiment (as were some dual-process models). The experimental and modeling results confirm that people are sensitive to argument validity in both explicit logic and affect rating tasks, but that these results can be explained by a single underlying reasoning process. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Publisher: SAGE Publications
Date: 17-09-2019
Abstract: Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.
Publisher: American Psychological Association (APA)
Date: 03-2018
DOI: 10.1037/REV0000088
Abstract: Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences.
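The structure of the winning model — a single argument-strength dimension with separate decision criteria for induction and deduction judgments — can be sketched with a simple signal detection computation. All parameter values below are illustrative, not fitted estimates:

```python
from statistics import NormalDist

def p_endorse(mean_strength, criterion, sd=1.0):
    """Probability that an argument's latent strength exceeds the decision criterion."""
    return 1.0 - NormalDist(mean_strength, sd).cdf(criterion)

# One latent strength scale: valid arguments are stronger on average
valid_mu, invalid_mu = 1.5, 0.0
# Separate criteria: deduction demands more evidence than induction
c_induction, c_deduction = 0.5, 1.0

for label, mu in [("valid", valid_mu), ("invalid", invalid_mu)]:
    print(label,
          round(p_endorse(mu, c_induction), 2),   # induction endorsement rate
          round(p_endorse(mu, c_deduction), 2))   # deduction endorsement rate
```

A single strength dimension with two criteria is enough to produce different induction and deduction endorsement rates for the same arguments, which is why such dissociations alone cannot force a dual-process interpretation.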
Publisher: Wiley
Date: 22-09-2016
DOI: 10.1111/DESC.12469
Abstract: This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development.
Publisher: MDPI AG
Date: 08-11-2016
Publisher: Wiley
Date: 1980
Start Date: 2008
End Date: 12-2012
Amount: $205,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 07-2019
End Date: 06-2023
Amount: $440,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 04-2005
End Date: 12-2008
Amount: $167,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 03-2016
End Date: 03-2022
Amount: $176,200.00
Funder: Australian Research Council
View Funded Activity
Start Date: 2011
End Date: 12-2016
Amount: $210,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 03-2013
End Date: 03-2019
Amount: $293,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 2008
End Date: 03-2012
Amount: $165,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 01-2004
End Date: 12-2007
Amount: $117,000.00
Funder: Australian Research Council
View Funded Activity