ORCID Profile
0000-0002-5702-6450
Current Organisation
Macquarie University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Sensory Processes, Perception and Performance | Industrial and Organisational Psychology | Transport Engineering | Biological Psychology | Learning, Memory, Cognition and Language | Cognitive Science Not Elsewhere Classified | Sensory Systems | Cognitive Neuroscience | Psychological Methodology, Design and Analysis | Knowledge Representation and Machine Learning
Expanding Knowledge in Psychology and Cognitive Sciences | Rail Safety | Road Safety | School/Institution Community and Environment | Behavioural and Cognitive Sciences | Air Safety | Water Safety
Publisher: Elsevier BV
Date: 2006
DOI: 10.1016/S0010-9452(08)70347-2
Abstract: For individuals with grapheme-colour synaesthesia, letters, numbers and words elicit vivid and highly consistent colour experiences. A critical question in determining the mechanisms underlying the phenomenon is whether synaesthetic colours arise early in visual processing, prior to the allocation of focused attention, or at some later stage following explicit recognition of the inducing form. If the synaesthetic colour elicited by an achromatic target emerges early in visual processing, then the target should be relatively easy to find in an array of achromatic distractor items, provided the target and distractors elicit different synaesthetic colours. Here we present data from 14 grapheme-colour synaesthetes and 14 matched non-synaesthetic controls, each of whom performed a visual search task in which a target digit was distinguished from surrounding distractors either by its unique synaesthetic colour or by its unique display colour. Participants searched displays of 8, 16 or 24 items for a specific target. In the chromatic condition, target and distractor digits were presented in different colours (e.g., a yellow '2' amongst blue '5's). In the achromatic condition, all digits in the display were black, but targets elicited a different synaesthetic colour from that induced by the distractors. Both synaesthetes and controls showed the expected efficient (pop-out) search slopes when the target was defined by a unique display colour. In contrast, search slopes for both groups were equally inefficient when the target and distractors were achromatic, despite eliciting distinct colours for the synaesthetes under normal viewing conditions. These results indicate that, at least for the majority of individuals, synaesthetic colours do not arise early enough in visual processing to guide or attract focal attention. Our findings are consistent with the hypothesis that graphemic inducers must be selectively attended to elicit their synaesthetic colours.
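The "search slopes" this abstract refers to are the slope of mean reaction time plotted against display set size; near-zero slopes indicate efficient (pop-out) search, steep slopes indicate inefficient serial search. A minimal sketch of that computation, with invented illustrative numbers (not data from the paper):

```python
# Hypothetical illustration (not from the paper): visual search efficiency
# is typically summarised as the least-squares slope of mean reaction time
# (ms) against display set size (items). Pop-out search gives a slope near
# zero; inefficient search often exceeds 20 ms/item.

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) over set size (items)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Invented numbers for the two conditions described in the abstract
chromatic = search_slope([8, 16, 24], [520, 528, 534])    # shallow: pop-out
achromatic = search_slope([8, 16, 24], [600, 840, 1090])  # steep: serial search
```

A flat slope in the chromatic condition alongside a steep slope in the achromatic condition is the pattern the abstract describes for both groups.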
Publisher: Society for Neuroscience
Date: 23-07-2020
Publisher: Wiley
Date: 08-2015
DOI: 10.1002/JTS.22030
Abstract: Although the experience of vicarious sensations when observing another in pain has been described post-amputation, the underlying mechanisms are unknown. We investigated whether vicarious sensations are related to posttraumatic stress disorder (PTSD) symptoms and chronic pain. In Study 1, 236 amputees completed questionnaires about phantom limb phenomena and vicarious sensations to both innocuous and painful sensory experiences of others. There was a 10.2% incidence of vicarious sensations, which was significantly more prevalent in amputees reporting PTSD-like experiences, particularly increased arousal and reexperiencing the event that led to amputation (φ = .16). In Study 2, 63 amputees completed the Empathy for Pain Scale and PTSD Checklist-Civilian Version. Cluster analyses revealed 3 groups: 1 group did not experience vicarious pain or PTSD symptoms, and 2 groups were vicarious pain responders, but only 1 had increased PTSD symptoms. Only the latter group showed increased chronic pain severity compared with the nonresponder group (p = .025) with a moderate effect size (r = .35). The findings from both studies implicated an overlap, but also divergence, between PTSD symptoms and vicarious pain reactivity post-amputation. Maladaptive mechanisms implicated in severe chronic pain and physical reactivity posttrauma may increase the incidence of vicarious reactivity to the pain of others.
Publisher: Cold Spring Harbor Laboratory
Date: 24-05-2019
DOI: 10.1101/647594
Abstract: Body ownership relies on spatiotemporal correlations between multisensory signals and visual cues specifying oneself such as body form and orientation. The mechanism for the integration of bodily signals remains unclear. One approach to model multisensory integration that has been influential in the multisensory literature is Bayesian causal inference. This specifies that the brain integrates spatial and temporal signals coming from different modalities when it infers a common cause for inputs. As an example, the rubber hand illusion shows that visual form and orientation cues can promote the inference of a common cause (one’s body) leading to spatial integration shown by a proprioceptive drift of the perceived location of the real hand towards the rubber hand. Recent studies investigating the effect of visual cues on temporal integration, however, have led to conflicting findings. These could be due to task differences, variation in ecological validity of stimuli and/or small samples. In this pre-registered study, we investigated the influence of visual information on temporal integration using a visuo-tactile temporal order judgement task with realistic stimuli and a sufficiently large sample determined by Bayesian analysis. Participants viewed videos of a touch being applied to plausible or implausible visual stimuli for one’s hand (hand oriented plausibly, hand rotated 180 degrees, or a sponge) while also being touched at varying stimulus onset asynchronies. Participants judged which stimulus came first: viewed or felt touch. Results show that visual cues do not modulate visuo-tactile temporal order judgements. This is not in line with the idea that bodily signals indicating oneself influence the integration of multisensory signals in the temporal domain.
The current study emphasises the importance of rigour in our methodologies and analyses to advance the understanding of how properties of multisensory events affect the encoding of temporal information in the brain.
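In a temporal order judgement (TOJ) task like the one described above, responses are typically summarised by the point of subjective simultaneity (PSS): the stimulus onset asynchrony (SOA) at which "visual first" responses reach 50%. A hedged sketch of one simple estimator (linear interpolation between measured points; all data below are invented for illustration, and published analyses usually fit a psychometric function instead):

```python
# Hypothetical sketch (not the authors' analysis code): estimate the point
# of subjective simultaneity (PSS) as the SOA where P("visual first")
# crosses 0.5, by linear interpolation between adjacent measured SOAs.

def pss_by_interpolation(soas, p_visual_first):
    """SOA (ms) at which P('visual first') = 0.5; soas must be sorted."""
    for (x0, y0), (x1, y1) in zip(zip(soas, p_visual_first),
                                  zip(soas[1:], p_visual_first[1:])):
        if y0 <= 0.5 <= y1:  # the 50% crossing lies between x0 and x1
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("no 50% crossing in the measured range")

# Invented data: negative SOA = touch led, positive SOA = vision led
soas = [-200, -100, -50, 0, 50, 100, 200]
p_vf = [0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95]
pss = pss_by_interpolation(soas, p_vf)  # near 0 ms: no perceived lag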
Publisher: Springer Science and Business Media LLC
Date: 15-03-2018
DOI: 10.1007/S00221-018-5232-4
Abstract: Tracking one's own body is essential for environmental interaction, and involves integrating multisensory cues with stored information about the body's typical features. Exactly how multisensory information is integrated in own-body perception is still unclear. For example, Ide and Hidaka (Exp Brain Res 228:43-50, 2013) found that participants made less precise visuo-tactile temporal order judgments (TOJ) when viewing hands in a plausible orientation (upright typical for one's own hand) compared to an implausible orientation (rotated 180°). This suggests that viewing one's own body relaxes the precision for perceived visuo-tactile synchrony. In contrast, visuo-proprioceptive research shows improvements for multisensory temporal perception near one's own body in asynchrony detection tasks, implying an increase in precision. Hence, it is unclear whether viewed hand orientation generally modulates the ability to detect small asynchronies between vision and touch, or if this effect is specific to TOJ tasks. We investigated whether viewed hand orientation affects detection of visuo-tactile asynchrony. In two experiments, participants viewed model hands in anatomically plausible or implausible orientations. In one experiment, we stroked the hands to induce the rubber hand illusion. Participants were asked to detect short delays (40-280 ms) between vision (an LED flash on the model hand) and touch (a tap to fingertip of the participant's hidden hand) in a two-interval forced-choice task. Bayesian analyses show that our data provide strong evidence that viewed hand orientation does not affect visuo-tactile asynchrony detection. This study suggests the mechanisms for fine-grained time perception differ between visuo-tactile and visuo-proprioceptive contexts.
Publisher: Elsevier BV
Date: 2006
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2006.06.024
Abstract: The experience of colour is a core element of human vision. Colours provide important symbolic and contextual information not conveyed by form alone. Moreover, the experience of colour can arise without external stimulation. For many people, visual memories are rich with colour imagery. In the unusual phenomenon of grapheme-colour synaesthesia, achromatic forms such as letters, words and numbers elicit vivid experiences of colour. Few studies, however, have examined the neural correlates of such internally generated colour experiences. We used functional magnetic resonance imaging (fMRI) to compare patterns of cortical activity for the perception of external coloured stimuli and internally generated colours in a group of grapheme-colour synaesthetes and matched non-synaesthetic controls. In a voluntary colour imagery task, both synaesthetes and non-synaesthetes made colour judgements on objects presented as grey scale photographs. In a synaesthetic colour task, we presented letters that elicited synaesthetic colours, and asked participants to perform a localisation task. We assessed the neural activity underpinning these two different forms of colour experience that occur in the absence of chromatic sensory input. In both synaesthetes and non-synaesthetes, voluntary colour imagery activated the colour-selective area, V4, in the right hemisphere. In contrast, the synaesthetic colour task resulted in unique activity for synaesthetes in the left medial lingual gyrus, an area previously implicated in tasks involving colour knowledge. Our data suggest that internally generated colour experiences recruit brain regions specialised for colour perception, with striking differences between voluntary colour imagery and synaesthetically induced colours.
Publisher: SAGE Publications
Date: 09-2012
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2015
DOI: 10.1167/15.12.609
Publisher: Informa UK Limited
Date: 26-06-2015
DOI: 10.1080/17588928.2015.1056519
Abstract: Digit-color synesthetes report experiencing colors when perceiving letters and digits. The conscious experience is typically unidirectional (e.g., digits elicit colors but not vice versa) but recent evidence shows subtle bidirectional effects. We examined whether short-term memory for colors could be affected by the order of presentation reflecting more or less structure in the associated digits. We presented a stream of colored squares and asked participants to report the colors in order. The colors matched each synesthete's colors for digits 1-9 and the order of the colors corresponded either to a sequence of numbers (e.g., [red, green, blue] if 1 = red, 2 = green, 3 = blue) or no systematic sequence. The results showed that synesthetes recalled sequential color sequences more accurately than pseudo-randomized colors, whereas no such effect was found for the non-synesthetic controls. Synesthetes did not differ from non-synesthetic controls in recall of color sequences overall, providing no evidence of a general advantage in memory for serial recall of colors.
Publisher: Springer Science and Business Media LLC
Date: 17-05-2021
DOI: 10.1038/S42003-021-02109-X
Abstract: Dorsolateral prefrontal cortex (dlPFC) is proposed to drive brain-wide focus by biasing processing in favour of task-relevant information. A longstanding debate concerns whether this is achieved through enhancing processing of relevant information and/or by inhibiting irrelevant information. To address this, we applied transcranial magnetic stimulation (TMS) during fMRI, and tested for causal changes in information coding. Participants attended to one feature, whilst ignoring another feature, of a visual object. If dlPFC is necessary for facilitation, disruptive TMS should decrease coding of attended features. Conversely, if dlPFC is crucial for inhibition, TMS should increase coding of ignored features. Here, we show that TMS decreases coding of relevant information across frontoparietal cortex, and the impact is significantly stronger than any effect on irrelevant information, which is not statistically detectable. This provides causal evidence for a specific role of dlPFC in enhancing task-relevant representations and demonstrates the cognitive-neural insights possible with concurrent TMS-fMRI-MVPA.
Publisher: Elsevier BV
Date: 04-2015
DOI: 10.1016/J.NEUROIMAGE.2014.12.083
Abstract: Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands.
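The multi-voxel decoding described in this abstract rests on a simple logic: if a condition label can be predicted from a region's activity pattern above chance under cross-validation, the region carries information about that condition. A minimal, hypothetical sketch of that logic (nearest-centroid decoding with leave-one-out cross-validation on invented toy "voxel" patterns; published MVPA work typically uses linear classifiers on real fMRI data):

```python
# Hypothetical sketch of multi-voxel pattern decoding logic (toy data,
# not the paper's pipeline): leave-one-out cross-validated accuracy of a
# nearest-centroid classifier. Above-chance accuracy indicates the
# patterns carry information about the condition labels.

def nearest_centroid_loo(patterns, labels):
    """Leave-one-out accuracy of a nearest-centroid decoder."""
    correct = 0
    for i, (test_x, test_y) in enumerate(zip(patterns, labels)):
        train = [(x, y) for j, (x, y) in enumerate(zip(patterns, labels))
                 if j != i]
        centroids = {}
        for lab in set(labels):
            rows = [x for x, y in train if y == lab]
            # mean of the training patterns for this label, voxel by voxel
            centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
        # predict the label whose centroid is closest (squared Euclidean)
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(test_x, centroids[lab])))
        correct += pred == test_y
    return correct / len(labels)

# Toy two-"voxel" patterns for two invented attended-object conditions
patterns = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],
            [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]
labels = ["face", "face", "face", "house", "house", "house"]
```

On these cleanly separated toy patterns the decoder is perfectly accurate; with real fMRI data, accuracy is compared against chance (here 50%) to establish that a region codes the attended object.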
Publisher: SAGE Publications
Date: 2012
DOI: 10.1068/P7223
Abstract: Several studies have demonstrated reliable cross-modal associations between odours and various visual, auditory, taste, and somatosensory attributes. How these associations arise is not well understood. We examined whether cross-modal associations to odours themselves form distinct groups, and whether these groupings relate to semantic (nameability, familiarity) and perceptual (intensity, irritancy, and hedonics) olfactory attributes. Participants evaluated 20 odours, varying in all of the latter attributes, and reported their visual, auditory, gustatory, and somatosensory associations for each. Significant inter-rater agreement was observed for all modalities except audition, and responses in all modalities were consistent with those obtained on a repeat test session 2 weeks later. Two groups of cross-modal odour associates emerged: one of which was related to the semantic attributes of odours and another which related to their perceptual attributes. The exception was taste, which was significantly associated with both. While these results suggest that both semantic and perceptual mechanisms underpin odour cross-modal matches, the data also point to the importance of hedonics as a further contributing mechanism.
Publisher: Elsevier BV
Date: 06-2021
DOI: 10.1016/J.NEUROIMAGE.2021.117896
Abstract: Humans are fast and accurate when they recognize familiar faces. Previous neurophysiological studies have shown enhanced representations for the dichotomy of familiar vs. unfamiliar faces. As familiarity is a spectrum, however, any neural correlate should reflect graded representations for more vs. less familiar faces along the spectrum. By systematically varying familiarity across stimuli, we show a neural familiarity spectrum using electroencephalography. We then evaluated the spatiotemporal dynamics of familiar face recognition across the brain. Specifically, we developed a novel informational connectivity method to test whether peri-frontal brain areas contribute to familiar face recognition. Results showed that feed-forward flow dominates for the most familiar faces and top-down flow was only dominant when sensory evidence was insufficient to support face recognition. These results demonstrate that perceptual difficulty and the level of familiarity influence the neural representation of familiar faces and the degree to which peri-frontal neural networks contribute to familiar face recognition.
Publisher: SAGE Publications
Date: 2015
DOI: 10.1068/P7699
Abstract: Our cognitive system tends to link auditory pitch with spatial location in a specific manner (ie high-pitched sounds are usually associated with an upper location, and low sounds are associated with a lower location). Recent studies have demonstrated that this cross-modality association biases the allocation of visual attention and affects performance despite the auditory stimuli being irrelevant to the behavioural task. There is, however, a discrepancy between studies in their interpretation of the underlying mechanisms. Whereas we have previously claimed that the pitch-location mapping is mediated by volitional shifts of attention (Chiou & Rich, 2012, Perception, 41, 339–353), other researchers suggest that this cross-modal effect reflects automatic shifts of attention (Mossbridge, Grabowecky, & Suzuki, 2011, Cognition, 121, 133–139). Here we report a series of three experiments examining the effects of perceptual and response-related pressure on the ability of nonpredictive pitch to bias visual attention. We compare it with two control cues: a predictive pitch that triggers voluntary attention shifts and a salient peripheral flash that evokes involuntary shifts. The results show that the effect of nonpredictive pitch is abolished by pressure at either perceptual or response levels. By contrast, the effects of the two control cues remain significant, demonstrating the robustness of informative and perceptually salient stimuli in directing attention. This distinction suggests that, in contexts of high perceptual demand and response pressure, cognitive resources are primarily engaged by the task-relevant stimuli, which effectively prevents uninformative pitch from orienting attention to its cross-modally associated location. These findings are consistent with the hypothesis that the link between pitch and location affects attentional deployment via volitional rather than automatic mechanisms.
Publisher: MIT Press
Date: 05-03-2022
DOI: 10.1162/JOCN_A_01818
Abstract: The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behavior. Here, we used magnetoencephalography to assess the temporal dynamics of information processing and linked neural responses with goal-directed behavior by analyzing how they changed on behavioral error. Participants performed a difficult stimulus–response task using two stimulus–response mapping rules. We used time-resolved multivariate pattern analysis to characterize the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information before a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated toward a representation of the “incorrect” stimulus. This suggests that the patterns recorded at later time points reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behavior.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 17-05-2019
DOI: 10.1167/19.5.17
Abstract: The continuous flash suppression (CFS) task can be used to investigate what limits our capacity to become aware of visual stimuli. In this task, a stream of rapidly changing mask images to one eye initially suppresses awareness for a static target image presented to the other eye. Several factors may determine the breakthrough time from mask suppression, one of which is the overlap in representation of the target/mask categories in higher visual cortex. This hypothesis is based on certain object categories (e.g., faces) being more effective in blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs). Previous work found mask effectiveness to be correlated with category-pair high-level representational similarity. As the cortical representations of hands and tools overlap, these categories are ideal to test this further as well as to examine alternative explanations. For our CFS experiments, we predicted longer breakthrough times for hands/tools compared to other pairs due to the reported cortical overlap. In contrast, across three experiments, participants were generally faster at detecting targets masked by hands or tools compared to other mask categories. Exploring low-level explanations, we found that the category average for edges (e.g., hands have less detail compared to cars) was the best predictor for the data. This low-level bottleneck could not completely account for the specific category patterns and the hand/tool effects, suggesting there are several levels at which object category-specific limits occur. Given these findings, it is important that low-level bottlenecks for visual awareness are considered when testing higher-level hypotheses.
Publisher: Springer Science and Business Media LLC
Date: 24-10-2020
DOI: 10.3758/S13414-019-01867-5
Abstract: The world around us is filled with complex objects, full of color, motion, shape, and texture, and these features seem to be represented separately in the early visual system. Anne Treisman pointed out that binding these separate features together into coherent conscious percepts is a serious challenge, and she argued that selective attention plays a critical role in this process. Treisman also showed that, consistent with this view, outside the focus of attention we suffer from illusory conjunctions: misperceived pairings of features into objects. Here we used Treisman's logic to study the structure of pre-attentive representations of multipart, multicolor objects, by exploring the patterns of illusory conjunctions that arise outside the focus of attention. We found consistent evidence of some pre-attentive binding of colors to their parts, and weaker evidence of binding multiple colors of the same object. The extent to which such hierarchical binding occurs seems to depend on the geometric structure of multipart objects: Objects whose parts are easier to separate seem to exhibit greater pre-attentive binding. Together, these results suggest that representations outside the focus of attention are not entirely "shapeless bundles of features," but preserve some meaningful object structure.
Publisher: Elsevier BV
Date: 2012
DOI: 10.1016/J.NEUBIOREV.2011.09.006
Abstract: Recent research suggests the observation or imagination of somatosensory stimulation in another (e.g., touch or pain) can induce a similar somatosensory experience in oneself. Some researchers have presented this experience as a type of synaesthesia, whereas others consider it an extreme experience of an otherwise normal perception. Here, we present an argument that these descriptions are not mutually exclusive. They may describe the extreme version of the normal process of understanding somatosensation in others. It becomes synaesthesia, however, when this process results in a conscious experience comparable to the observed person's state. We describe these experiences as 'mirror-sensory synaesthesia': a type of synaesthesia identified by its distinct social component, where the induced synaesthetic experience is a similar sensory experience to that perceived in another person. Through the operationalisation of this intriguing experience as synaesthesia, existing neurobiological models of synaesthesia can be used as a framework to explore how mechanisms may act upon social cognitive processes to produce conscious experiences similar to another person's observed state.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 10-2003
DOI: 10.1097/00001756-200310060-00007
Abstract: Colour-graphemic synaesthetes experience vivid colours when reading letters, digits and words. We examined the effect of stimulus competition and attention on these unusual colour experiences in 14 synaesthetes and 14 non-synaesthetic controls. Participants named the colour of hierarchical local-global stimuli in which letters at each level elicited synaesthetic colours that were congruent or incongruent with the display colour. Synaesthetes were significantly slower to name display colours when either level was incongruent than when both levels were congruent. This effect was significantly reduced when synaesthetes focused attention on one level while the congruency of letters at the ignored level was varied. These findings suggest that competition between multiple inducers and mechanisms of voluntary attention influence colour-graphemic synaesthesia.
Publisher: Elsevier BV
Date: 10-2013
DOI: 10.1016/J.NEUROIMAGE.2013.04.108
Abstract: Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range.
Publisher: eLife Sciences Publications, Ltd
Date: 08-04-2021
DOI: 10.7554/ELIFE.60563
Abstract: There are many monitoring environments, such as railway control, in which lapses of attention can have tragic consequences. Problematically, sustained monitoring for rare targets is difficult, with more misses and longer reaction times over time. What changes in the brain underpin these ‘vigilance decrements’? We designed a multiple-object monitoring (MOM) paradigm to examine how the neural representation of information varied with target frequency and time performing the task. Behavioural performance decreased over time for the rare target (monitoring) condition, but not for a frequent target (active) condition. There was subtle evidence of this also in the neural decoding using Magnetoencephalography: for one time-window (of 80ms) coding of critical information declined more during monitoring versus active conditions. We developed new analyses that can predict behavioural errors from the neural data more than a second before they occurred. This facilitates pre-empting behavioural errors due to lapses in attention and provides new insight into the neural correlates of vigilance decrements.
Publisher: Cold Spring Harbor Laboratory
Date: 30-06-2020
DOI: 10.1101/2020.06.29.178970
Abstract: There are many monitoring environments, such as railway control, in which lapses of attention can have tragic consequences. Problematically, sustained monitoring for rare targets is difficult, with more misses and longer reaction times over time. What changes in the brain underpin these “vigilance decrements”? We designed a multiple-object monitoring (MOM) paradigm to examine how the neural representation of information varied with target frequency and time performing the task. Behavioural performance decreased over time for the rare target (monitoring) condition, but not for a frequent target (active) condition. This was mirrored in the neural results: there was weaker coding of critical information during monitoring versus active conditions. We developed new analyses that can predict behavioural errors from the neural data more than a second before they occurred. This paves the way for pre-empting behavioural errors due to lapses in attention and provides new insight into the neural correlates of vigilance decrements.
Publisher: Center for Open Science
Date: 12-06-2020
Abstract: Rewards exert a deep influence on our cognition and behaviour. Here, we used a paradigm in which reward information was provided at either encoding or retrieval of a brief, masked stimulus to show that reward can also rapidly modulate early neural processing of visual information, prior to consciousness. Experiment 1 showed enhanced response accuracy when a to-be-encoded grating signalled high reward relative to low reward, but only when the grating was presented very briefly and participants were not consciously aware of it. Experiment 2 showed no difference in response accuracy when reward information was instead provided at the stage of retrieval, ruling out an explanation of the reward-modulation effect in terms of differences in motivated retrieval. Taken together, our findings provide the first behavioural evidence for a rapid reward-modulation of visual perception, which does not seem to require consciousness.
Publisher: SPIE-Intl Soc Optical Eng
Date: 14-07-2021
Publisher: SAGE Publications
Date: 16-08-2010
Abstract: In this article, we report that in visual search, desaturated reddish targets are much easier to find than other desaturated targets, even when perceptual differences between targets and distractors are carefully equated. Observers searched for desaturated targets among mixtures of white and saturated distractors. Reaction times were hundreds of milliseconds faster for the most effective (reddish) targets than for the least effective (purplish) targets. The advantage for desaturated reds did not reflect an advantage for the lexical category “pink,” because reaction times did not follow named color categories. Many pink stimuli were not found quickly, and many quickly found stimuli were not labeled “pink.” Other possible explanations (e.g., linear-separability effects) also failed. Instead, we propose that guidance of visual search for desaturated colors is based on a combination of low-level color-opponent signals that is different from the combinations that produce perceived color. We speculate that this guidance might reflect a specialization for human skin.
Publisher: Elsevier BV
Date: 06-2013
DOI: 10.1016/J.CORTEX.2012.04.006
Abstract: Our brain constantly integrates signals across different senses. Auditory-visual synaesthesia is an unusual form of cross-modal integration in which sounds evoke involuntary visual experiences. Previous research primarily focuses on synaesthetic colour, but little is known about non-colour synaesthetic visual features. Here we studied a group of synaesthetes for whom sounds elicit consistent visual experiences of coloured 'geometric objects' located at specific spatial locations. Changes in auditory pitch alter the brightness, size, and spatial height of synaesthetic experiences in a systematic manner resembling the cross-modal correspondences of non-synaesthetes, implying that synaesthesia may recruit cognitive/neural mechanisms for 'normal' cross-modal processes. To objectively assess the impact of synaesthetic objects on behaviour, we devised a multi-feature cross-modal synaesthetic congruency paradigm and asked participants to perform speeded colour or shape discrimination. We found that irrelevant sounds influenced performance, as quantified by congruency effects, demonstrating that synaesthetes were not able to suppress their synaesthetic experiences even when these were irrelevant for the task. Furthermore, we found some evidence for task-specific effects consistent with feature-based attention acting on the constituent features of synaesthetic objects: synaesthetic colours appeared to have a stronger impact on performance than synaesthetic shapes when synaesthetes attended to colour, and vice versa when they attended to shape. We provide the first objective evidence that visual synaesthetic experience can involve multiple features forming object-like percepts and suggest that each feature can be selected by attention despite it being internally generated.
These findings suggest theories of the brain mechanisms of synaesthesia need to incorporate a broader neural network underpinning multiple visual features, perceptual knowledge, and feature integration, rather than solely focussing on colour-sensitive areas.
Publisher: Springer Science and Business Media LLC
Date: 03-2019
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 11-2008
DOI: 10.1167/8.15.15
Publisher: American Psychological Association (APA)
Date: 08-2017
DOI: 10.1037/XHP0000402
Abstract: For digit-color synaesthetes, digits elicit vivid experiences of color that are highly consistent for each individual. The conscious experience of synaesthesia is typically unidirectional: Digits evoke colors but not vice versa. There is an ongoing debate about whether synaesthetes have a memory advantage over non-synaesthetes. One key question in this debate is whether synaesthetes have a general superiority or whether any benefit is specific to a certain type of material. Here, we focus on immediate serial recall and ask digit-color synaesthetes and controls to memorize digit and color sequences. We developed a sensitive staircase method manipulating presentation duration to measure participants' serial recall of both overlearned and novel sequences. Our results show that synaesthetes can activate digit information to enhance serial memory for color sequences. When color sequences corresponded to ascending or descending digit sequences, synaesthetes encoded these sequences at a faster rate than their non-synaesthete counterparts and faster than non-structured color sequences. However, encoding color sequences was approximately 200 ms slower than encoding digit sequences directly, independent of group and condition, which shows that the translation process is time consuming. These results suggest memory advantages in synaesthesia require a modified dual-coding account, in which secondary (synaesthetically linked) information is useful only if it is more memorable than the primary information to be recalled. Our study further shows that duration thresholds are a sensitive method to measure subtle differences in serial recall performance.
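The "staircase method manipulating presentation duration" in this abstract refers to an adaptive threshold procedure. As an illustrative sketch only — the 1-up/2-down rule, step size, and reversal-based threshold estimate below are generic assumptions, not the authors' published parameters — such a duration staircase could be implemented like this:

```python
def run_staircase(respond, start_ms=500.0, step_ms=50.0,
                  n_reversals=8, floor_ms=50.0):
    """1-up/2-down staircase on presentation duration: the duration
    drops after two consecutive correct responses and rises after any
    error, converging near ~71% correct. `respond(duration)` runs one
    trial and returns True if the response was correct."""
    duration = start_ms
    correct_streak = 0
    last_direction = 0            # -1 = descending, +1 = ascending
    reversal_points = []
    while len(reversal_points) < n_reversals:
        if respond(duration):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> harder
                correct_streak = 0
                if last_direction == 1:        # direction flipped
                    reversal_points.append(duration)
                last_direction = -1
                duration = max(floor_ms, duration - step_ms)
        else:
            correct_streak = 0                 # any error -> easier
            if last_direction == -1:           # direction flipped
                reversal_points.append(duration)
            last_direction = 1
            duration += step_ms
    # threshold estimate: mean duration over the last six reversals
    return sum(reversal_points[-6:]) / len(reversal_points[-6:])
```

With a deterministic simulated observer who is correct whenever the duration is at least 300 ms, the staircase oscillates around that point and returns a threshold between the two bracketing step values.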
Publisher: Elsevier BV
Date: 10-2019
DOI: 10.1016/J.NEUROIMAGE.2019.06.062
Abstract: Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N = 18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.
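Time-resolved MVPA of the kind described here trains and cross-validates a classifier independently at each timepoint of the neural timeseries, so that decoding accuracy traces when condition information becomes available. The sketch below uses synthetic data and a generic classifier; the authors' actual pipeline (MEG preprocessing, classifier choice, cross-validation scheme) is not specified by this code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 20, 30
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = np.repeat([0, 1], n_trials // 2)                 # e.g. red vs. green trials
X[y == 1, :, 15:] += 0.8                             # signal emerges at t = 15

# Decode the condition separately at every timepoint
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```

Plotting `accuracy` against time yields the familiar decoding timecourse: chance-level before the simulated signal onset, above-chance after it — which is how onset latencies for "real" versus "implied" colour representations can be compared.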
Publisher: Elsevier BV
Date: 09-2018
DOI: 10.1016/J.CORTEX.2018.05.009
Abstract: When interacting with objects, we have to represent their location relative to our bodies. To facilitate bodily reactions, location may be encoded in the brain not just with respect to the retina (retinotopic reference frame), but also in relation to the head, trunk or arm (collectively spatiotopic reference frames). While spatiotopic reference frames for location encoding can be found in brain areas for action planning, such as parietal areas, there is debate about the existence of spatiotopic reference frames in higher-level occipitotemporal visual areas. In an extensive multi-voxel pattern analysis (MVPA) fMRI study using faces, headless bodies and scenes stimuli, Golomb and Kanwisher (2012) did not find evidence for spatiotopic reference frames in shape-selective occipitotemporal cortex. This finding is important for theories of how stimulus location is encoded in the brain. It is possible, however, that their failure to find spatiotopic reference frames is related to their stimuli: we typically do not manipulate faces, headless bodies or scenes. It is plausible that we only represent body-centred location when viewing objects that are typically manipulated. Here, we tested for object location encoding in shape-selective occipitotemporal cortex using manipulable object stimuli (balls and cups) in an MVPA fMRI study. We employed Bayesian analyses to determine sample size and evaluate the sensitivity of our data to test the hypothesis that location can be encoded in a spatiotopic reference frame in shape-selective occipitotemporal cortex over the null hypothesis of no spatiotopic location encoding. We found strong evidence for retinotopic location encoding consistent with previous findings that retinotopic reference frames are common neural representations of object location.
In contrast, when testing for spatiotopic encoding, we found evidence that object location information for small manipulable objects is not decodable in relation to the body in shape-selective occipitotemporal cortex. Post-hoc exploratory analyses suggested that spatiotopic aspects might modulate retinotopic location encoding. Overall, our findings provide evidence that there is no spatiotopic encoding that is independent of retinotopic location in shape-selective occipitotemporal cortex.
Publisher: Informa UK Limited
Date: 29-09-2014
DOI: 10.1080/13554794.2014.960429
Abstract: The temporal scale of neuroplasticity following acute alterations in brain structure due to neurosurgical intervention is still under debate. We conducted a longitudinal study with the objective of investigating the postoperative changes in a patient who underwent cerebrovascular surgery and who subsequently lost proprioception in the fingers of her right hand. The results show increased activation in contralesional somatosensory areas, additional recruitment of premotor and posterior parietal areas, and changes in functional connectivity with left postcentral gyrus. These findings demonstrate long-term modifications of cortical organization and as such have important implications for treatment strategies for patients with brain injury.
Publisher: Public Library of Science (PLoS)
Date: 16-12-2019
Publisher: Frontiers Media SA
Date: 22-08-2018
Publisher: Cold Spring Harbor Laboratory
Date: 16-07-2018
DOI: 10.1101/369926
Abstract: Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N=18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.
Publisher: Springer Science and Business Media LLC
Date: 19-06-2019
DOI: 10.3758/S13414-019-01782-9
Abstract: The human visual system is capable of processing an enormous amount of information in a short time. Although rapid target detection has been explored extensively, less is known about target localization. Here we used natural scenes and explored the relationship between being able to detect a target (present vs. absent) and being able to localize it. Across four presentation durations (~ 33-199 ms), participants viewed scenes taken from two superordinate categories (natural and manmade), each containing exemplars from four basic scene categories. In a two-interval forced choice task, observers were asked to detect a Gabor target inserted in one of the two scenes. This was followed by one of two different localization tasks. Participants were asked either to discriminate whether the target was on the left or the right side of the display or to click on the exact location where they had seen the target. Targets could be detected and localized at our shortest exposure duration (~ 33 ms), with a predictable improvement in performance with increasing exposure duration. At this shortest duration we saw some evidence of detection without localization, but further analyses demonstrated that these trials typically reflected coarse or imprecise localization information, rather than its complete absence. Experiment 2 replicated our main findings while exploring the effect of the level of "openness" in the scene. Our results are consistent with the notion that when we are able to extract what objects are present in a scene, we also have information about where each object is, which provides crucial guidance for our goal-directed actions.
Publisher: Cold Spring Harbor Laboratory
Date: 03-2021
DOI: 10.1101/2021.02.28.433294
Abstract: Attention and decision-making processes are fundamental to cognition. However, they are usually experimentally confounded, making it impossible to link neural observations to specific processes. Here we separated the effects of selective attention from the effects of decision-making in human observers using a two-stage task where the attended stimulus and decision were orthogonal and separated in time. Multivariate pattern analyses of multimodal neuroimaging data revealed the dynamics of perceptual and decision-related information coding through time (magnetoencephalography (MEG)), space (functional Magnetic Resonance Imaging (fMRI)), and their combination (MEG-fMRI fusion). Our MEG results showed an effect of attention before decision-making could begin, and fMRI results showed an attention effect in early visual and frontoparietal regions. Model-based MEG-fMRI fusion suggested that attention boosted stimulus information in frontoparietal and early visual regions before decision-making was possible. Together, our results suggest that attention affects neural stimulus representations in frontoparietal regions independent of decision-making.
Publisher: Frontiers Media SA
Date: 11-05-2016
Publisher: MIT Press
Date: 07-2018
DOI: 10.1162/JOCN_A_01257
Abstract: Numerical format describes the way magnitude is conveyed, for example, as a digit (“3”) or Roman numeral (“III”). In the field of numerical cognition, there is an ongoing debate about whether magnitude representation is independent of numerical format. Here, we examine the time course of magnitude processing when using different symbolic formats. We presented participants with a series of digits and dice patterns corresponding to the magnitudes of 1 to 6 while performing a 1-back task on magnitude. Magnetoencephalography offers an opportunity to record brain activity with high temporal resolution. Multivariate pattern analysis applied to magnetoencephalographic data allows us to draw conclusions about brain activation patterns underlying information processing over time. The results show that we can cross-decode magnitude when training the classifier on magnitude presented in one symbolic format and testing the classifier on the other symbolic format. This suggests a similar representation of these numerical symbols. In addition, results from a time generalization analysis show that digits were accessed slightly earlier than dice, demonstrating temporal asynchronies in their shared representation of magnitude. Together, our methods allow a distinction between format-specific signals and format-independent representations of magnitude showing evidence that there is a shared representation of magnitude accessed via different symbols.
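The cross-decoding logic described in this abstract — train a classifier on magnitude in one symbolic format, test it on the other — can be sketched on synthetic data. The classifier, feature construction, and signal model below are illustrative assumptions, not the authors' pipeline; the only point is that above-chance transfer implies a shared pattern across formats:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per, n_feat = 40, 50
magnitudes = [1, 2, 3, 4, 5, 6]
# Shared "magnitude code": each magnitude has one common pattern that
# both formats express, plus format-specific offset and trial noise.
shared = {m: rng.normal(size=n_feat) for m in magnitudes}

def make_trials(fmt_offset):
    X = np.vstack([shared[m] + fmt_offset
                   + rng.normal(size=(n_per, n_feat))
                   for m in magnitudes])
    y = np.repeat(magnitudes, n_per)
    return X, y

X_digit, y_digit = make_trials(fmt_offset=0.0)   # "digit" trials
X_dice, y_dice = make_trials(fmt_offset=0.5)     # "dice" trials

# Train on one format, test on the other: transfer above 1/6 chance
clf = LogisticRegression(max_iter=2000).fit(X_digit, y_digit)
cross_acc = clf.score(X_dice, y_dice)
```

Because the two simulated formats share the per-magnitude patterns, `cross_acc` lands well above the 1/6 chance level, mirroring the cross-format decoding result reported in the paper.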
Publisher: Cold Spring Harbor Laboratory
Date: 26-05-2021
DOI: 10.1101/2021.05.25.445701
Abstract: The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behaviour. Here, we used magnetoencephalography (MEG) to assess the temporal dynamics of information processing, and linked neural responses with goal-directed behaviour by analysing how they changed on behavioural error. Participants performed a difficult stimulus-response task using two stimulus-response mapping rules. We used time-resolved multivariate pattern analysis to characterise the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information prior to a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated towards a representation of the incorrect stimulus. This suggests that the patterns recorded at later timepoints reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behaviour.
Publisher: Informa UK Limited
Date: 03-2013
Publisher: Elsevier BV
Date: 03-2010
DOI: 10.1016/J.COGNITION.2009.10.003
Abstract: Mechanisms of selective attention exert a powerful influence on visual perception. We examined whether attentional selection is necessary for generation of the vivid colours experienced by individuals with grapheme-colour synaesthesia. Twelve synaesthetes and matched controls viewed rapid serial displays of nonsense characters within which were embedded an oriented grating (T1) and a letter-prime (T2), forming a modified attentional blink (AB) task. At the end of the stream a coloured probe appeared that was either congruent or incongruent with the synaesthetic colour elicited by the letter-prime. When the prime was attended, synaesthetes showed a reliable effect of prime-probe congruency. In contrast, when the prime appeared at 350 ms following T1 (during the AB), the congruency effect was eliminated. Our findings suggest that focused attention is crucial for inducing letters to elicit colours in synaesthesia.
Publisher: Oxford University Press (OUP)
Date: 04-12-2016
Abstract: The consequences of losing the ability to move a limb are traumatic. One approach that examines the impact of pathological limb nonuse on the brain involves temporary immobilization of a healthy limb. Here, we investigated immobilization-induced plasticity in the motor imagery (MI) circuitry during hand immobilization. We assessed these changes with a multimodal paradigm, using functional magnetic resonance imaging (fMRI) to measure neural activation, magnetoencephalography (MEG) to track neuronal oscillatory dynamics, and transcranial magnetic stimulation (TMS) to assess corticospinal excitability. fMRI results show a significant decrease in neural activation for MI of the constrained hand, localized to sensorimotor areas contralateral to the immobilized hand. MEG results show a significant decrease in beta desynchronization and faster resynchronization in sensorimotor areas contralateral to the immobilized hand. TMS results show a significant increase in resting motor threshold in motor cortex contralateral to the constrained hand, suggesting a decrease in corticospinal excitability in the projections to the constrained hand. These results demonstrate a direct and rapid effect of immobilization on MI processes of the constrained hand, suggesting that limb nonuse may not only affect motor execution, as evidenced by previous studies, but also MI. These findings have important implications for the effectiveness of therapeutic approaches that use MI as a rehabilitation tool to ameliorate the negative effects of limb nonuse.
Publisher: Elsevier BV
Date: 05-2013
DOI: 10.1016/J.NEUROIMAGE.2013.01.001
Abstract: Neuroimaging studies have shown that the neural mechanisms of motor imagery (MI) overlap substantially with the mechanisms of motor execution (ME). Surprisingly, however, the role of several regions of the motor circuitry in MI remains controversial, a variability that may be due to differences in neuroimaging techniques, MI training, instruction types, or tasks used to evoke MI. The objectives of this study were twofold: (i) to design a novel task that reliably invokes MI, provides a reliable behavioral measure of MI performance, and is transferable across imaging modalities and (ii) to measure the common and differential activations for MI and ME with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We present a task in which it is difficult to give accurate responses without the use of either motor execution or motor imagery. The behavioral results demonstrate that participants performed similarly on the task when they imagined vs. executed movements and this performance did not change over time. The fMRI results show a spatial overlap of MI and ME in a number of motor and premotor areas, sensory cortices, cerebellum, inferior frontal gyrus, and ventrolateral thalamus. MI uniquely engaged bilateral occipital areas, left parahippocampus, and other temporal and frontal areas, whereas ME yielded unique activity in motor and sensory areas, cerebellum, precuneus, and putamen. The MEG results show a robust event-related beta band desynchronization in the proximity of primary motor and premotor cortices during both ME and MI. Together, these results further elucidate the neural circuitry of MI and show that our task robustly and reliably invokes motor imagery, and thus may prove useful for interrogating the functional status of the motor circuitry in patients with motor disorders.
Publisher: Elsevier BV
Date: 11-2005
DOI: 10.1016/J.COGNITION.2004.11.003
Abstract: For individuals with synaesthesia, stimuli in one sensory modality elicit anomalous experiences in another modality. For example, the sound of a particular piano note may be 'seen' as a unique colour, or the taste of a familiar food may be 'felt' as a distinct bodily sensation. We report a study of 192 adult synaesthetes, in which we administered a structured questionnaire to determine the relative frequency and characteristics of different types of synaesthetic experience. Our data suggest the prevalence of synaesthesia in the adult population is approximately 1 in 1150 females and 1 in 7150 males. The incidence of left-handedness in our sample was within the normal range, contrary to previous claims. We did, however, find that synaesthetes are more likely to be involved in artistic pursuits, consistent with anecdotal reports. We also examined responses from a subset of 150 synaesthetes for whom letters, digits and words induce colour experiences ('lexical-colour' synaesthesia). There was a striking consistency in the colours induced by certain letters and digits in these individuals. For example, 'R' elicited red for 36% of the sample, 'Y' elicited yellow for 45%, and 'D' elicited brown for 47%. Similar trends were apparent for a group of non-synaesthetic controls who were asked to associate colours with letters and digits. Based on these findings, we suggest that the development of lexical-colour synaesthesia in many cases incorporates early learning experiences common to all individuals. Moreover, many of our synaesthetes experienced colours only for days of the week, letters or digits, suggesting that inducers that are part of a conventional sequence (e.g. Monday, Tuesday, Wednesday... A, B, C... 1, 2, 3...) may be particularly important in the development of synaesthetic inducer-colour pairs.
We speculate that the learning of such sequences during an early critical period determines the particular pattern of lexical-colour links, and that this pattern then generalises to other words.
Publisher: Informa UK Limited
Date: 20-04-2015
Publisher: Elsevier BV
Date: 10-2010
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2010.07.029
Abstract: Synaesthesia for pain is a phenomenon where a person experiences pain when observing or imagining another in pain. Anecdotal reports of this type of experience have most commonly occurred in individuals who have lost a limb. Distinct from phantom pain, synaesthesia for pain is triggered specifically in response to pain in another. Here, we provide the first preliminary investigation into synaesthesia for pain in amputees to determine the incidence and characteristics of this intriguing phenomenon. Self-referring amputees (n=74) answered questions on synaesthesia for pain within a broader survey of phantom pain. Of the participants, 16.2% reported that observing or imagining pain in another person triggers their phantom pain. Further understanding of synaesthesia for pain may provide greater insight into abnormal empathic function in clinical populations as well as therapeutic intervention for at-risk groups.
Publisher: Public Library of Science (PLoS)
Date: 15-02-2013
Publisher: Elsevier BV
Date: 2006
DOI: 10.1016/S0010-9452(08)70346-0
Abstract: One of the hallmarks of grapheme-colour synaesthesia is that colours induced by letters, digits and words tend to interfere with the identification of coloured targets when the two colours are different, i.e., when they are incongruent. In a previous investigation (Mattingley et al., 2001) we found that this synaesthetic congruency effect occurs when an achromatic-letter prime precedes a coloured target, but that the effect disappears when the letter is pattern masked to prevent conscious recognition of its identity. Here we investigated whether selective attention modulates the synaesthetic congruency effect in a letter-priming task. Fourteen grapheme-colour synaesthetes and 14 matched, non-synaesthetic controls participated. The amount of selective attention available to process the letter-prime was limited by having participants perform a secondary visual task that involved discriminating pairs of gaps in adjacent limbs of a diamond surrounding the prime. In separate blocks of trials the attentional load of the secondary task was systematically varied to yield 'low load' and 'high load' conditions. We found a significant congruency effect for synaesthetes, but not for controls, when they performed a secondary attention-demanding task during presentation of the letter prime. Crucially, however, the magnitude of this priming was significantly reduced under conditions of high-load relative to low-load, indicating that attention plays an important role in modulating synaesthesia. Our findings help to explain the observation that synaesthetic colour experiences are often weak or absent during attention-demanding tasks.
Publisher: Informa UK Limited
Date: 03-2010
Publisher: Elsevier BV
Date: 08-2018
Publisher: Elsevier BV
Date: 02-2021
Publisher: Elsevier BV
Date: 02-2011
DOI: 10.1016/J.NEUROIMAGE.2010.11.045
Abstract: Orientation discrimination is much better for patterns oriented along the horizontal or vertical (cardinal) axes than for patterns oriented obliquely, but the neural basis for this is not known. Previous animal neurophysiology and human neuroimaging studies have demonstrated only a moderate bias for cardinal versus oblique orientations, with fMRI showing a larger response to cardinals in primary visual cortex (V1) and EEG demonstrating both increased magnitudes and reduced latencies of transient evoked responses. Here, using MEG, we localised and characterised induced gamma and transient evoked responses to stationary circular grating patches of three orientations (0, 45, and 90° from vertical). Surprisingly, we found that the sustained gamma response was larger for oblique, compared to cardinal, stimuli. This "inverse oblique effect" was also observed in the earliest (80 ms) evoked response, whereas later responses (120 ms) showed a trend towards the reverse, "classic", oblique response. Source localisation demonstrated that the sustained gamma and early evoked responses were localised to medial visual cortex, whilst the later evoked responses came from both this early visual area and a source in a more inferolateral extrastriate region. These results suggest that (1) the early evoked and sustained gamma responses manifest the initial tuning of V1 neurons, with the stronger response to oblique stimuli possibly reflecting increased tuning widths for these orientations, and (2) the classic behavioural oblique effect is mediated by an extrastriate cortical area and may also implicate feedback from extrastriate to primary visual cortex.
Publisher: SPIE
Date: 04-04-2022
DOI: 10.1117/12.2607316
Publisher: Elsevier BV
Date: 04-2015
DOI: 10.1016/J.NEUROSCIENCE.2015.01.049
Abstract: Healthy aging is accompanied by neurobiological changes that affect the brain's functional organization and the individual's cognitive abilities. The aim of this study was to investigate the effect of global age-related differences in the cortical white and gray matter on neural activity in three key large-scale networks. We used functional-structural covariance network analysis to assess resting state activity in the default mode network (DMN), the fronto-parietal network (FPN), and the salience network (SN) of young and older adults. We further related this functional activity to measures of cortical thickness and volume derived from structural MRI, as well as to measures of white matter integrity (fractional anisotropy [FA], mean diffusivity [MD], and radial diffusivity [RD]) derived from diffusion-weighted imaging. First, our results show that, in the direct comparison of resting state activity, young but not older adults reliably engage the SN and FPN in addition to the DMN, suggesting that older adults recruit these networks less consistently. Second, our results demonstrate that age-related decline in white matter integrity and gray matter volume is associated with activity in prefrontal nodes of the SN and FPN, possibly reflecting compensatory mechanisms. We suggest that age-related differences in gray and white matter properties differentially affect the ability of the brain to engage and coordinate large-scale functional networks that are central to efficient cognitive functioning.
Publisher: SAGE Publications
Date: 2012
DOI: 10.1068/P7161
Abstract: The brain constantly integrates incoming signals across the senses to form a cohesive view of the world. Most studies on multisensory integration concern the roles of spatial and temporal parameters. However, recent findings suggest cross-modal correspondences (eg high-pitched sounds associated with bright, small objects located high up) also affect multisensory integration. Here, we focus on the association between auditory pitch and spatial location. Surprisingly little is known about the cognitive and perceptual roots of this phenomenon, despite its long use in ergonomic design. In a series of experiments, we explore how this cross-modal mapping affects the allocation of attention with an attentional cuing paradigm. Our results demonstrate that high and low tones induce attention shifts to upper or lower locations, depending on pitch height. Furthermore, this pitch-induced cuing effect is susceptible to contextual manipulations and volitional control. These findings suggest the cross-modal interaction between pitch and location originates from an attentional level rather than from response mapping alone. The flexible contextual mapping between pitch and location, as well as its susceptibility to top–down control, suggests the pitch-induced cuing effect is primarily mediated by cognitive processes after initial sensory encoding and occurs at a relatively late stage of voluntary attention orienting.
Publisher: Springer Science and Business Media LLC
Date: 03-2001
DOI: 10.1038/35069062
Publisher: MIT Press - Journals
Date: 05-2014
DOI: 10.1162/JOCN_A_00536
Abstract: Object recognition benefits greatly from our knowledge of typical color (e.g., a lemon is usually yellow). Most research on object color knowledge focuses on whether both knowledge and perception of object color recruit the well-established neural substrates of color vision (the V4 complex). Compared with the intensive investigation of the V4 complex, we know little about where and how neural mechanisms beyond V4 contribute to color knowledge. The anterior temporal lobe (ATL) is thought to act as a “hub” that supports semantic memory by integrating different modality-specific contents into a meaningful entity at a supramodal conceptual level, making it a good candidate zone for mediating the mappings between object attributes. Here, we explore whether the ATL is critical for integrating typical color with other object attributes (object shape and name), akin to its role in combining nonperceptual semantic representations. In separate experimental sessions, we applied TMS to disrupt neural processing in the left ATL and a control site (the occipital pole). Participants performed an object naming task that probes color knowledge and elicits a reliable color congruency effect as well as a control quantity naming task that also elicits a cognitive congruency effect but involves no conceptual integration. Critically, ATL stimulation eliminated the otherwise robust color congruency effect but had no impact on the numerical congruency effect, indicating a selective disruption of object color knowledge. Neither color nor numerical congruency effects were affected by stimulation at the control occipital site, ruling out nonspecific effects of cortical stimulation. Our findings suggest that the ATL is involved in the representation of object concepts that include their canonical colors.
Publisher: American Speech Language Hearing Association
Date: 17-07-2020
DOI: 10.1044/2020_JSLHR-19-00313
Abstract: We aimed to develop a noninvasive neural test of language comprehension to use with nonspeaking children for whom standard behavioral testing is unreliable (e.g., minimally verbal autism). Our aims were threefold. First, we sought to establish the sensitivity of two auditory paradigms to elicit neural responses in individual neurotypical children. Second, we aimed to validate the use of a portable and accessible electroencephalography (EEG) system, by comparing its recordings to those of a research-grade system. Third, in light of substantial interindividual variability in individuals' neural responses, we assessed whether multivariate decoding methods could improve sensitivity. We tested the sensitivity of two child-friendly covert N400 paradigms. Thirty-one typically developing children listened to identical spoken words that were either strongly predicted by the preceding context or violated lexical–semantic expectations. Context was given by a cue word (Experiment 1) or sentence frame (Experiment 2), and participants either made an overall judgment on word relatedness or counted lexical–semantic violations. We measured EEG concurrently from a research-grade system, Neuroscan's SynAmps2, and an adapted gaming system, Emotiv's EPOC+. We found substantial interindividual variability in the timing and topology of N400-like effects. For both paradigms and EEG systems, traditional N400 effects at the expected sensors and time points were statistically significant in around 50% of individuals. Using multivariate analyses, detection rate increased to 88% of individuals for the research-grade system in the sentences paradigm, illustrating the robustness of this method in the face of interindividual variations in topography. There was large interindividual variability in neural responses, suggesting interindividual variation in either the cognitive response to lexical–semantic violations and/or the neural substrate of that response.
Around half of our neurotypical participants showed the expected N400 effect at the expected location and time points. A low-cost, accessible EEG system provided comparable data for univariate analysis but was not well suited to multivariate decoding. However, multivariate analyses with a research-grade EEG system increased our detection rate to 88% of individuals. This approach provides a strong foundation to establish a neural index of language comprehension in children with limited communication. Supplemental material: https://doi.org/10.23641/asha.12606311
Publisher: MIT Press - Journals
Date: 10-2015
DOI: 10.1162/JOCN_A_00827
Abstract: How do our brains achieve the cognitive control that is required for flexible behavior? Several models of cognitive control propose a role for frontoparietal cortex in the structure and representation of task sets or rules. For behavior to be flexible, however, the system must also rapidly reorganize as mental focus changes. Here we used multivoxel pattern analysis of fMRI data to demonstrate adaptive reorganization of frontoparietal activity patterns following a change in the complexity of the task rules. When task rules were relatively simple, frontoparietal cortex did not hold detectable information about these rules. In contrast, when the rules were more complex, frontoparietal cortex showed clear and decodable rule discrimination. Our data demonstrate that frontoparietal activity adjusts to task complexity, with better discrimination of rules that are behaviorally more confusable. The change in coding was specific to the rule element of the task and was not mirrored in more specialized cortex (early visual cortex) where coding was independent of difficulty. In line with an adaptive view of frontoparietal function, the data suggest a system that rapidly reconfigures in accordance with the difficulty of a behavioral task. This system may provide a neural basis for the flexible control of human behavior.
Publisher: Oxford University Press (OUP)
Date: 12-02-2015
Abstract: Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2.
Publisher: Cold Spring Harbor Laboratory
Date: 26-06-2022
DOI: 10.1101/2022.06.21.497107
Abstract: Simulation theories propose that vicarious touch arises when seeing someone else being touched triggers corresponding representations of being touched. Prior electroencephalography (EEG) findings show that seeing touch modulates both early and late somatosensory responses (measured with or without direct tactile stimulation). Functional Magnetic Resonance Imaging (fMRI) studies have shown that seeing touch increases somatosensory cortical activation. These findings have been taken to suggest that when we see someone being touched, we simulate that touch in our sensory systems. The somatosensory overlap when seeing and feeling touch differs between individuals, potentially underpinning variation in vicarious touch experiences. Increases in amplitude (EEG) or cerebral blood flow response (fMRI), however, are limited in that they cannot test for the information contained in the neural signal: seeing touch may not activate the same information as feeling touch. Here, we use time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person’s fingers (visual trials). In both groups, EEG was sufficiently sensitive to allow decoding of touch location (little finger vs. thumb) on tactile trials. However, only in individuals who reported feeling touch when watching videos of touch could a classifier trained on tactile trials distinguish touch location on visual trials. This demonstrates that, for people who experience vicarious touch, there is overlap in the information about touch location held in the neural patterns when seeing and feeling touch. The timecourse of this overlap implies that seeing touch evokes similar representations to later stages of tactile processing.
Therefore, while simulation may underlie vicarious tactile sensations, our findings suggest this involves an abstracted representation of directly felt touch.
Publisher: Public Library of Science (PLoS)
Date: 30-01-2020
Publisher: Cold Spring Harbor Laboratory
Date: 25-05-2021
DOI: 10.1101/2021.05.24.445376
Abstract: Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, the effects of attention could be influenced by temporal expectation. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while 1) controlling for target-related confounds, and 2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs at fixation while detecting a “target” grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored, and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset predictability. These results provide insight into the effect of attention on the dynamic processing of competing visual information, presented at the same time and location.
Publisher: Macquarie Centre for Cognitive Science
Date: 2010
DOI: 10.5096/ASCS200955
Publisher: SAGE Publications
Date: 14-07-2010
Abstract: Decades of research suggest that selective attention is critical for binding the features of objects together for conscious perception. A fundamental question, however, remains unresolved: How do people perceive objects, albeit with binding errors (illusory conjunctions), when attentional resolution is poor? We used a novel technique to investigate how features are selected to create percepts of bound objects. We measured the correlation of errors (intrusions) in color and identity reports in spatial and temporal selection tasks under conditions of varying spatial or temporal uncertainty. Our findings suggest that attention selects each feature independently by randomly sampling from a probability distribution over space or time. Thus, veridical perception of bound object features arises only when attentional selection is sufficiently precise that the independently sampled features originate from a single object.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2015
DOI: 10.1167/15.12.1252
Publisher: Elsevier BV
Date: 09-2023
Publisher: Springer Science and Business Media LLC
Date: 2002
DOI: 10.1038/NRN702
Publisher: Frontiers Media SA
Date: 2014
Publisher: SAGE Publications
Date: 11-11-2021
DOI: 10.1177/09567976211021843
Abstract: Rewards exert a deep influence on our cognition and behavior. Here, we used a paradigm in which reward information was provided at either encoding or retrieval of a brief, masked stimulus to show that reward can also rapidly modulate perceptual encoding of visual information. Experiment 1 (n = 30 adults) showed that participants’ response accuracy was enhanced when a to-be-encoded grating signaled high reward relative to low reward, but only when the grating was presented very briefly and participants reported that they were not consciously aware of it. Experiment 2 (n = 29 adults) showed that there was no difference in participants’ response accuracy when reward information was instead provided at the stage of retrieval, ruling out an explanation of the reward-modulation effect in terms of differences in motivated retrieval. Taken together, our findings provide behavioral evidence consistent with a rapid reward modulation of visual perception, which may not require consciousness.
Publisher: Springer Science and Business Media LLC
Date: 18-04-2018
Publisher: Elsevier BV
Date: 09-2020
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 03-04-2018
DOI: 10.1167/18.4.2
Abstract: In real-world searches such as airport baggage screening and radiological examinations, miss errors can be life threatening. Misses increase for additional targets after detecting an initial target, termed "subsequent search misses" (SSMs), and also when targets are more often absent than present, termed the low-prevalence effect. Real-world search tasks often contain more than one target, but the prevalence of these multitarget occasions varies. For example, a cancerous tumor sometimes coexists with a benign tumor and sometimes exists alone. This study aims to investigate how the relative prevalence of multiple targets affects search accuracy. Naive observers searched for all Ts (zero, one, or two) among Ls. In Experiment 1, SSMs occurred in small but not large set sizes, which may be explained by classic capacity limit effects such as the attentional blink and repetition blindness. Experiment 2 showed an interaction between SSMs and the relative prevalence of dual-target trials: Low prevalence of dual-target trials increased SSMs relative to high prevalence dual-target trials. The prevalence of dual-target trials did not affect accuracy on single-target trials. These results may provide a novel avenue for reducing misses by increasing the prevalence of instances with multiple targets. Future efforts should take into account the relative prevalence of multiple targets to effectively reduce life-threatening miss errors.
Publisher: Springer Science and Business Media LLC
Date: 13-02-2013
Publisher: Proceedings of the National Academy of Sciences
Date: 02-2021
Abstract: Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia.
Publisher: MIT Press - Journals
Date: 02-2017
DOI: 10.1162/JOCN_A_01039
Abstract: Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the “adaptive coding hypothesis” [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820–829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
Publisher: Elsevier BV
Date: 08-2022
Publisher: Springer Science and Business Media LLC
Date: 28-04-2022
DOI: 10.1038/S41598-022-10687-X
Abstract: Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a “target” grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored (cued by colour), and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
Publisher: Springer Science and Business Media LLC
Date: 02-11-2021
Start Date: 12-2022
End Date: 12-2025
Amount: $405,924.00
Funder: Australian Research Council
View Funded Activity
Start Date: 07-2023
End Date: 06-2027
Amount: $1,124,583.00
Funder: Australian Research Council
View Funded Activity
Start Date: 08-2017
End Date: 04-2023
Amount: $280,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 06-2017
End Date: 12-2021
Amount: $291,500.00
Funder: Australian Research Council
View Funded Activity
Start Date: 2017
End Date: 12-2022
Amount: $397,500.00
Funder: Australian Research Council
View Funded Activity
Start Date: 2012
End Date: 12-2015
Amount: $246,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 06-2009
End Date: 12-2013
Amount: $353,000.00
Funder: Australian Research Council
View Funded Activity