ORCID Profile
0000-0002-7378-2803
Current Organisations
University of Sydney, University of Queensland
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Psychology
Publisher: Cold Spring Harbor Laboratory
Date: 26-05-2021
DOI: 10.1101/2021.05.25.445701
Abstract: The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behaviour. Here, we used magnetoencephalography (MEG) to assess the temporal dynamics of information processing, and linked neural responses with goal-directed behaviour by analysing how they changed on behavioural error. Participants performed a difficult stimulus-response task using two stimulus-response mapping rules. We used time-resolved multivariate pattern analysis to characterise the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information prior to a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated towards a representation of the incorrect stimulus. This suggests that the patterns recorded at later timepoints reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behaviour.
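The time-resolved multivariate pattern analysis described above can be illustrated with a minimal sketch: a classifier is trained and tested independently at every timepoint of the epoched MEG/EEG data, and the resulting accuracy time course shows when stimulus, rule, or response information becomes decodable. The arrays, classifier choice, and cross-validation scheme below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of time-resolved decoding (illustrative data and names only).
# Assumes X has shape (n_trials, n_channels, n_times) and y holds one condition
# label per trial.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder epochs
y = rng.integers(0, 2, n_trials)                           # two stimulus classes

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Train and test the classifier separately at each timepoint; above-chance
# accuracy at a timepoint indicates decodable information at that latency.
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(accuracy.round(2))
```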
Publisher: IEEE
Date: 2017
Publisher: Society for Neuroscience
Date: 23-07-2020
Publisher: The Neurons Behavior Data Analysis and Theory collective
Date: 17-02-2021
DOI: 10.51628/001C.21174
Abstract: The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at osf.io/7zhwp/.
Publisher: Springer International Publishing
Date: 2018
Publisher: Frontiers Media SA
Date: 2013
Publisher: Cold Spring Harbor Laboratory
Date: 12-04-2019
DOI: 10.1101/607499
Abstract: How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long, Yu, & Konkle, 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system’s capacity to use image features to resolve a recognisable object.
Publisher: Elsevier BV
Date: 10-2022
DOI: 10.1016/J.VISRES.2022.108079
Abstract: Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. We found that we could reliably decode AI generated faces using people's neural activity. However, while at a group level people performed near chance classifying real and realistic fakes, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key in determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at osf.io/n2z73/.
Publisher: Oxford University Press (OUP)
Date: 2023
DOI: 10.1093/NC/NIAD018
Abstract: Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
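The competitive hierarchical model described above lends itself to a compact illustration: a serially connected stack of levels with feedforward and feedback flow, where an externally driven signal and an internally generated (imagined) signal compete through divisive normalisation. The update rule, weights, and level count in this sketch are assumptions for illustration only, not the published model.

```python
# Minimal sketch of a serially connected hierarchy with bidirectional flow and
# divisive competition between an external and an imagined signal. All
# parameters are illustrative assumptions, not the authors' model.
import numpy as np

n_levels, n_steps = 5, 200
feedforward, feedback = 0.6, 0.4

# activity[level, channel]: channel 0 = external stimulus, channel 1 = imagined stimulus
activity = np.zeros((n_levels, 2))
sensory_input = np.array([1.0, 0.0])   # external drive enters at the lowest level
imagery_input = np.array([0.0, 1.0])   # imagery enters at the highest level

for _ in range(n_steps):
    new = activity.copy()
    for lvl in range(n_levels):
        bottom_up = sensory_input if lvl == 0 else activity[lvl - 1]
        top_down = imagery_input if lvl == n_levels - 1 else activity[lvl + 1]
        drive = feedforward * bottom_up + feedback * top_down
        # divisive normalisation: the two signals compete for representation
        new[lvl] = drive / (1.0 + drive.sum())
    activity = new

# With sensory input present, the external signal dominates lower levels while
# the imagined signal only dominates the highest levels of the hierarchy.
for lvl, (ext, img) in enumerate(activity):
    print(f"level {lvl}: external={ext:.2f}, imagined={img:.2f}")
```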
Publisher: Elsevier BV
Date: 11-2019
DOI: 10.1016/J.NEUROIMAGE.2019.116083
Abstract: How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
Publisher: Cold Spring Harbor Laboratory
Date: 04-08-2017
DOI: 10.1101/172353
Abstract: Standard human EEG systems based on spatial Nyquist estimates suggest that 20-30 mm electrode spacing suffices to capture neural signals on the scalp, but recent studies posit that increasing sensor density can provide higher resolution neural information. Here, we compared “super-Nyquist” density EEG (“SND”) with Nyquist density (“ND”) arrays for assessing the spatiotemporal aspects of early visual processing. EEG was measured from 128 electrodes arranged over occipitotemporal brain regions (14 mm spacing) while participants viewed flickering checkerboard stimuli. Analyses compared SND with ND-equivalent subsets of the same electrodes. Frequency-tagged stimuli were classified more accurately with SND than ND arrays in both the time and the frequency domains. Representational similarity analysis revealed that a computational model of V1 correlated more highly with the SND than the ND array. Overall, SND EEG captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience. Abbreviations: SND, super-Nyquist density EEG (electrode spacing below 20-30 mm); ND, Nyquist density EEG (20-30 mm electrode spacing).
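The frequency-tagging analysis mentioned above rests on extracting the response at the flicker frequency from the EEG spectrum for each electrode. The sampling rate, flicker frequency, and data in the sketch below are assumed values for illustration, not the study's recording parameters.

```python
# Minimal sketch of extracting a frequency-tagged (SSVEP) response: take the
# Fourier amplitude at the checkerboard flicker frequency for each electrode.
# Sampling rate, flicker frequency, and data are illustrative assumptions.
import numpy as np

fs, flicker_hz, duration = 1000, 7.5, 4.0      # Hz, Hz, seconds
n_channels = 128
t = np.arange(0, duration, 1 / fs)

rng = np.random.default_rng(3)
# Placeholder EEG: a small 7.5 Hz signal buried in noise on every channel
eeg = 0.5 * np.sin(2 * np.pi * flicker_hz * t) + rng.standard_normal((n_channels, t.size))

spectrum = np.abs(np.fft.rfft(eeg, axis=1)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

tag_bin = np.argmin(np.abs(freqs - flicker_hz))  # frequency bin closest to the tag
print(f"mean amplitude at {freqs[tag_bin]:.2f} Hz:", spectrum[:, tag_bin].mean().round(3))
```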
Publisher: MIT Press - Journals
Date: 07-2018
DOI: 10.1162/JOCN_A_01262
Abstract: An evolving view in cognitive neuroscience is that the dorsal visual pathway not only plays a key role in visuomotor behavior but that it also contributes functionally to the recognition of objects. To characterize the nature of the object representations derived by the dorsal pathway, we assessed perceptual performance in the context of the continuous flash suppression paradigm, which suppresses object processing in the ventral pathway while sparing computation in the dorsal pathway. In a series of experiments, prime stimuli, which were rendered imperceptible by the continuous flash suppression, still contributed to perceptual decisions related to the subsequent perceptible target stimuli. However, the contribution of the prime to perception was contingent on the prime's structural coherence, in that a perceptual advantage was observed only for targets primed by objects with legitimate 3-D structure. Finally, we obtained additional evidence to demonstrate that the processing of the suppressed objects was contingent on the magnocellular, rather than the parvocellular, system, further linking the processing of the suppressed stimuli to the dorsal pathway. Together, these results provide novel evidence that the dorsal pathway does not only support visuomotor control but, rather, that it also derives the structural description of 3-D objects and contributes to shape perception.
Publisher: Cold Spring Harbor Laboratory
Date: 30-01-2019
DOI: 10.1101/533513
Abstract: The ability to rapidly and accurately recognise complex objects is a crucial function of the human visual system. To recognise an object, we need to bind incoming visual features such as colour and form together into cohesive neural representations and integrate these with our pre-existing knowledge about the world. For some objects, typical colour is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (magnetoencephalography) data to examine how object-colour knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-colour combinations influences object representations, although not at the initial stages of object and colour processing. We find evidence that colour decoding peaks later for atypical object-colour combinations in comparison to typical object-colour combinations, illustrating the interplay between processing incoming object features and stored object-knowledge. Taken together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge. To recognise objects, we have to be able to bind object features such as colour and shape into one coherent representation and compare it to stored object knowledge. The magnetoencephalography data presented here provide novel insights about the integration of incoming visual information with our knowledge about the world. Using colour as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently coloured objects (e.g., a yellow banana) relative to incongruently coloured objects (e.g., a red banana). This effect of object-colour knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
Publisher: Cold Spring Harbor Laboratory
Date: 03-09-2022
DOI: 10.1101/2022.09.02.506121
Abstract: Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally-generated stimuli (i.e. sensory input) and internally-generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially-connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first-approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume low-level stimulus information (e.g. in early visual cortex) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally-generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
Publisher: Cold Spring Harbor Laboratory
Date: 26-06-2020
DOI: 10.1101/2020.06.25.172643
Abstract: The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at osf.io/7zhwp/.
Publisher: MIT Press - Journals
Date: 04-2015
DOI: 10.1162/JOCN_A_00732
Abstract: Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors and that they have a higher gray matter concentration in olfactory areas of the OFC. We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.
Publisher: Cold Spring Harbor Laboratory
Date: 18-05-2023
DOI: 10.1101/2023.05.15.540306
Abstract: Although mental imagery is often studied as a visual phenomenon, it can occur in any sensory modality. Given that mental images may recruit similar modality-specific neural systems to those which support veridical perception, the properties of mental images may be constrained by the modality in which they are experienced. Yet, little is known about how mental images are experienced at all, let alone how such experiences may vary depending on the modality in which they occur. Here we explored how mental images are experienced in different modalities using an extensive questionnaire. Mainly focusing on visual and auditory mental imagery, we surveyed participants on if and how they experienced their thought content in a sensory way when thinking about the appearance or sound of the letter “O”. Specifically, we investigated temporal properties of imagined content (e.g. onset latency, duration), as well as spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), dependence on body movements (e.g. eye movements), interactions between real and imagined content (e.g. inner speech during reading), the perceived normality of imagery experiences, and how participants labeled their own experiences. Participants also ranked their mental imagery experiences in the five traditional sensory modalities and reported on the involvement of each modality during their thoughts, imagination, and dreams. Confidence ratings were taken for every answer recorded. Overall, visual and auditory experiences tended to dominate mental events relative to other sensory modalities. However, most people reported that auditory mental imagery was superior to visual mental imagery on almost every metric tested, except with respect to spatial properties. Our findings suggest that mental images are restrained in a similar manner to other modality-specific sensory processes in the brain. Broadly, our work also provides a wealth of insights and observations into how mental images are experienced by individuals, acting as a useful resource for future investigations.
Publisher: Springer Science and Business Media LLC
Date: 15-06-2016
DOI: 10.3758/S13414-016-1157-9
Abstract: Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females than males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst nonodour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
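The signal detection (d') analysis mentioned above reduces to a simple calculation from hit and false-alarm rates. The counts in this sketch are made up for illustration; they are not data from the study.

```python
# Minimal example of the d' (sensitivity) calculation used in signal detection
# analyses of target detection; the trial counts below are hypothetical.
from scipy.stats import norm

hits, misses = 42, 18                         # target present: detected vs. missed
false_alarms, correct_rejections = 10, 50     # target absent: false alarm vs. correct rejection

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' is the difference between the z-transformed hit and false-alarm rates
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"hit rate={hit_rate:.2f}, false-alarm rate={fa_rate:.2f}, d'={d_prime:.2f}")
```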
Publisher: Cold Spring Harbor Laboratory
Date: 27-04-2023
DOI: 10.1101/2023.04.26.538486
Abstract: The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations from work in non-human animals, a key missing piece is how the coding of visual features relates to our perceptual experience. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N=16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every feature contributed to perceptual experience, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, are crucial for perceptual experience. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
Publisher: MIT Press
Date: 05-03-2022
DOI: 10.1162/JOCN_A_01818
Abstract: The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behavior. Here, we used magnetoencephalography to assess the temporal dynamics of information processing and linked neural responses with goal-directed behavior by analyzing how they changed on behavioral error. Participants performed a difficult stimulus–response task using two stimulus–response mapping rules. We used time-resolved multivariate pattern analysis to characterize the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information before a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated toward a representation of the “incorrect” stimulus. This suggests that the patterns recorded at later time points reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behavior.
Publisher: Cold Spring Harbor Laboratory
Date: 25-05-2021
DOI: 10.1101/2021.05.24.445376
Abstract: Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, the effects of attention could be influenced by temporal expectation. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while 1) controlling for target-related confounds, and 2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs at fixation while detecting a “target” grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored, and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset predictability. These results provide insight into the effect of attention on the dynamic processing of competing visual information, presented at the same time and location.
Publisher: Cold Spring Harbor Laboratory
Date: 04-06-2021
DOI: 10.1101/2021.06.03.447008
Abstract: The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Publisher: Frontiers Media SA
Date: 08-07-2021
DOI: 10.3389/FNHUM.2021.682661
Abstract: A large number of papers in Computational Cognitive Neuroscience are developing and testing novel analysis methods using one specific neuroimaging dataset and problematic experimental stimuli. Publication bias and confirmatory exploration will result in overfitting to the limited available data. We highlight the problems with this specific dataset and argue for the need to collect more good quality open neuroimaging data using a variety of experimental stimuli, in order to test the generalisability of current published results, and allow for more robust results in future work.
Publisher: Elsevier BV
Date: 03-2019
DOI: 10.1016/J.NEUROIMAGE.2018.12.046
Abstract: In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
Publisher: MDPI AG
Date: 18-12-2018
Abstract: Visual recognition deficits are the hallmark symptom of visual agnosia, a neuropsychological disorder typically associated with damage to the visual system. Most research into visual agnosia focuses on characterizing the deficits through detailed behavioral testing, and structural and functional brain scans are used to determine the spatial extent of any cortical damage. Although the hierarchical nature of the visual system leads to clear predictions about the temporal dynamics of cortical deficits, there has been little research on the use of neuroimaging methods with high temporal resolution to characterize the temporal profile of agnosia deficits. Here, we employed high-density electroencephalography (EEG) to investigate alterations in the temporal dynamics of the visual system in two individuals with visual agnosia. In the context of a steady state visual evoked potential paradigm (SSVEP), individuals viewed pattern-reversing checkerboards of differing spatial frequency, and we assessed the responses of the visual system in the frequency and temporal domain. JW, a patient with early visual cortex damage, showed impaired SSVEP response relative to a control group and to the second patient (SM) who had right temporal lobe damage. JW also showed lower decoding accuracy for early visual responses (around 100 ms). SM, whose lesion is more anterior in the visual system, showed good decoding accuracy initially but low decoding after 500 ms. Overall, EEG and multivariate decoding methods can yield important insights into the temporal dynamics of visual responses in individuals with visual agnosia.
Publisher: Elsevier BV
Date: 08-2019
DOI: 10.1016/J.NEUROIMAGE.2019.04.050
Abstract: Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
Publisher: American Psychological Association (APA)
Date: 07-2017
DOI: 10.1037/XGE0000302
Abstract: Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere.
Publisher: Cold Spring Harbor Laboratory
Date: 09-01-2019
DOI: 10.1101/515619
Abstract: Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
Publisher: Elsevier BV
Date: 11-2018
Publisher: Cold Spring Harbor Laboratory
Date: 09-04-2020
DOI: 10.1101/2020.04.08.032888
Abstract: Classic models of predictive coding propose that sensory systems use information retained from prior experience to predict current sensory input. Any mismatch between predicted and current input (prediction error) is then fed forward up the hierarchy, leading to a revision of the prediction. We tested this hypothesis in the domain of object vision using a combination of multivariate pattern analysis and time-resolved electroencephalography. We presented participants with sequences of images that stepped around fixation in a predictable order. On the majority of presentations, the images conformed to a consistent pattern of position order and object category order; however, on a subset of presentations the last image in the sequence violated the established pattern by either violating the predicted category or position of the object. Contrary to classic predictive coding, when decoding position and category we found no differences in decoding accuracy between predictable and violation conditions. However, consistent with recent extensions of predictive coding, exploratory analyses showed that a greater proportion of predictions was made to the forthcoming position in the sequence than to either the previous position or the position behind the previous position, suggesting that the visual system actively anticipates future input as opposed to just inferring current input.
Publisher: Cold Spring Harbor Laboratory
Date: 17-05-2019
DOI: 10.1101/637603
Abstract: Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Our results indicate that the dynamics of imagery processes are more variable across, and within, participants compared to perception of physical stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results for our understanding of the neural processes underlying mental imagery.
Publisher: Springer Science and Business Media LLC
Date: 10-01-2022
DOI: 10.1038/S41597-021-01102-7
Abstract: The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Publisher: Springer Science and Business Media LLC
Date: 24-11-2017
DOI: 10.1038/S41598-017-16377-3
Abstract: Standard human EEG systems based on spatial Nyquist estimates suggest that 20–30 mm electrode spacing suffices to capture neural signals on the scalp, but recent studies posit that increasing sensor density can provide higher resolution neural information. Here, we compared “super-Nyquist” density EEG (“SND”) with Nyquist density (“ND”) arrays for assessing the spatiotemporal aspects of early visual processing. EEG was measured from 128 electrodes arranged over occipitotemporal brain regions (14 mm spacing) while participants viewed flickering checkerboard stimuli. Analyses compared SND with ND-equivalent subsets of the same electrodes. Frequency-tagged stimuli were classified more accurately with SND than ND arrays in both the time and the frequency domains. Representational similarity analysis revealed that a computational model of V1 correlated more highly with the SND than the ND array. Overall, SND EEG captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience.
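The representational similarity analysis described above can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from condition-wise response patterns and correlate it with a model RDM. The random placeholder data and dimensions below are assumptions for illustration; in the study the model RDM came from a computational model of V1.

```python
# Minimal sketch of representational similarity analysis: correlate a neural
# RDM with a model RDM. Data are random placeholders, not study data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_channels = 8, 128

neural_patterns = rng.standard_normal((n_conditions, n_channels))  # one pattern per stimulus
model_features = rng.standard_normal((n_conditions, 50))            # e.g. model outputs

# Pairwise dissimilarities between conditions (vectorised upper triangle of the RDM)
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Rank correlation between the two RDMs quantifies model-brain correspondence
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```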
Publisher: Springer Science and Business Media LLC
Date: 28-04-2022
DOI: 10.1038/S41598-022-10687-X
Abstract: Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a “target” grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored (cued by colour), and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
Publisher: The Neurons Behavior Data Analysis and Theory collective
Date: 04-02-2021
DOI: 10.51628/001C.19129
Abstract: Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain ‘fills-in’ information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus.
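The encoding-model analysis mentioned above is commonly implemented as a forward (inverted) encoding model: hypothetical position-tuned channels are regressed onto the sensor data, and the fit is inverted to reconstruct channel responses on held-out trials. The basis functions, dimensions, and simulated data below are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a forward (inverted) encoding model for stimulus position.
# Tuning-curve shape, dimensions, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_sensors, n_positions = 300, 64, 6

positions = rng.integers(0, n_positions, n_trials)      # stimulus location per trial

# Basis set: one broad tuning curve centred on each of the six locations
angles = np.arange(n_positions) * 2 * np.pi / n_positions
def design(pos):
    return np.cos((angles - angles[pos]) / 2) ** 6       # predicted channel responses

C = np.stack([design(p) for p in positions])             # n_trials x n_positions

# Simulated sensor data: random weights map channels to sensors, plus noise
W_true = rng.standard_normal((n_positions, n_sensors))
B = C @ W_true + rng.standard_normal((n_trials, n_sensors))

train, test = np.arange(0, 200), np.arange(200, n_trials)

# Fit sensor weights from training trials, then invert on held-out trials
W_hat = np.linalg.lstsq(C[train], B[train], rcond=None)[0]
C_hat = B[test] @ np.linalg.pinv(W_hat)                   # reconstructed channel responses

# Stronger reconstruction at the presented location indicates spatial information
print(C_hat[:3].round(2))
print(positions[test][:3])
```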
Publisher: Cold Spring Harbor Laboratory
Date: 17-08-2018
DOI: 10.1101/394148
Abstract: In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
Publisher: Cold Spring Harbor Laboratory
Date: 03-03-2020
DOI: 10.1101/2020.03.02.974162
Abstract: Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain “fills-in” information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics (Kwon et al., 2015). In the present study, we used electroencephalography (EEG) and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at osf.io/8v47t/.
Start Date: 2020
End Date: 12-2024
Amount: $420,556.00
Funder: Australian Research Council