ORCID Profile
0000-0002-3953-4195
Current Organisations
University of Warwick, University of Sydney
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Neurocognitive Patterns and Neural Networks | Sensory Processes, Perception and Performance | Cognitive Science | Decision Making | Psychology | Sensory Systems | Knowledge Representation and Machine Learning | Computer Perception, Memory and Attention
Publisher: Society for Neuroscience
Date: 23-07-2020
Publisher: Elsevier BV
Date: 03-2008
DOI: 10.1016/J.VISRES.2007.12.019
Abstract: An object moving in discrete steps can appear to move continuously even along sections of the path in which no stimulus is presented. We investigated whether the internal representation of such an object is constructed by extrapolation, along the expected trajectory of the object, or by interpolation, after the subsequent reappearance of the object. Observers viewed two discs moving in an unambiguous apparent motion display, which either occasionally reversed direction or continued moving along the predicted path. Observers carried out a speeded 2AFC task on probes presented between the possible disc locations. In the continuous condition, observers' reaction times to detect and identify a probe were longer when it occurred ahead of the disc than when it occurred elsewhere on the motion path. Conversely, when the disc reversed direction, significantly less interference was observed ahead of the disc (along the predicted motion path), and significantly more interference was observed behind the disc (along the updated motion path). We conclude that the representation of a moving object in an apparent motion display is constructed by interpolation as well as extrapolation. We demonstrate that this representation is maintained and updated even outside the locus of focused attention, and that it is possible to dissociate the contributions of interpolation and extrapolation mechanisms to an object's representation.
Publisher: Proceedings of the National Academy of Sciences
Date: 14-08-2007
Abstract: Our conscious experience is of a seamless visual world, but many of the cortical areas that underlie our capacity for vision have a fragmented or asymmetrical representation of visual space. In fact, the representation of the visual field is fragmented into quadrants at the level of V2, V3, and possibly V4. In theory, this division could have no functional consequences and therefore no impact on behavior. Contrary to this expectation, we find robust quadrant-level interference effects when attentively tracking two moving targets. Performance improves when target objects appear in separate quadrants (straddling either the horizontal or vertical meridian) compared with when they appear the same distance apart but within a single quadrant. These quadrant-level interference effects would not be predicted by cognitive theories of attention and tracking that do not take anatomical constraints into account. Quadrant-level interference strongly suggests that cortical areas containing a noncontiguous representation of the four quadrants of the visual field (i.e., V2, V3, and V4) impose an important constraint on attentional selection and attentive tracking.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROIMAGE.2017.08.019
Abstract: The application of machine learning methods to neuroimaging data has fundamentally altered the field of cognitive neuroscience. Future progress in understanding brain function using these methods will require addressing a number of key methodological and interpretive challenges. Because these challenges often remain unseen and metaphorically "haunt" our efforts to use these methods to understand the brain, we refer to them as "ghosts". In this paper, we describe three such ghosts, situate them within a more general framework from philosophy of science, and then describe steps to address them. The first ghost arises from difficulties in determining what information machine learning classifiers use for decoding. The second ghost arises from the interplay of experimental design and the structure of information in the brain - that is, our methods embody implicit assumptions about information processing in the brain, and it is often difficult to determine if those assumptions are satisfied. The third ghost emerges from our limited ability to distinguish information that is merely decodable from the brain from information that is represented and used by the brain. Each of the three ghosts place limits on the interpretability of decoding research in cognitive neuroscience. There are no easy solutions, but facing these issues squarely will provide a clearer path to understanding the nature of representation and computation in the human brain.
Publisher: Elsevier BV
Date: 04-2010
Publisher: Cold Spring Harbor Laboratory
Date: 10-10-2017
DOI: 10.1101/200873
Abstract: How is emotion represented in the brain: is it categorical or along dimensions? In the present study, we applied multivariate pattern analysis (MVPA) to magnetoencephalography (MEG) to study the brain’s temporally unfolding representations of different emotion constructs. First, participants rated 525 images on the dimensions of valence and arousal and by intensity of discrete emotion categories (happiness, sadness, fear, disgust, and anger). Thirteen new participants then viewed subsets of these images within an MEG scanner. We used Representational Similarity Analysis (RSA) to compare behavioral ratings to the unfolding neural representation of the stimuli in the brain. Ratings of valence and arousal explained significant proportions of the MEG data, even after corrections for low-level image properties. Additionally, behavioral ratings of the discrete emotions fear, disgust, and happiness significantly predicted early neural representations, whereas rating models of anger and sadness did not. Different emotion constructs also showed unique temporal signatures. Fear and disgust – both highly arousing and negative – were rapidly discriminated by the brain, but disgust was represented for an extended period of time relative to fear. Overall, our findings suggest that 1) dimensions of valence and arousal are quickly represented by the brain, as are some discrete emotions, and 2) different emotion constructs exhibit unique temporal dynamics. We discuss implications of these findings for theoretical understanding of emotion and for the interplay of discrete and dimensional aspects of emotional experience.
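As a concrete illustration of the representational similarity logic described in this abstract, the following is a minimal sketch assuming simulated placeholder data (array names, shapes, and the rating variable are hypothetical, not the study's materials): a time-resolved neural dissimilarity matrix is correlated with a behavioural model dissimilarity matrix at each time point.

```python
# Minimal RSA sketch (illustrative only): correlate a time-resolved neural RDM
# with a behavioural model RDM. Data below are simulated placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_sensors, n_times = 20, 160, 100

# meg[i, :, t]: sensor pattern evoked by stimulus i at time point t (simulated here)
meg = rng.standard_normal((n_stimuli, n_sensors, n_times))
# valence_ratings[i]: hypothetical behavioural rating for stimulus i
valence_ratings = rng.uniform(-1, 1, n_stimuli)

# Model RDM: pairwise differences in the behavioural ratings
model_rdm = pdist(valence_ratings[:, None], metric="euclidean")

# Neural RDM at each time point, correlated with the model RDM
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t], metric="correlation")
    rsa_timecourse[t], _ = spearmanr(neural_rdm, model_rdm)

print(rsa_timecourse.round(2))
```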
Publisher: Elsevier BV
Date: 09-2004
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-2013
DOI: 10.1167/13.10.1
Abstract: Human object recognition is remarkably efficient. In recent years, significant advancements have been made in our understanding of how the brain represents visual objects and organizes them into categories. Recent studies using pattern analyses methods have characterized a representational space of objects in human and primate inferior temporal cortex in which object exemplars are discriminable and cluster according to category (e.g., faces and bodies). In the present study we examined how category structure in object representations emerges in the first 1000 ms of visual processing. In the study, participants viewed 24 object exemplars with a planned categorical structure comprised of four levels ranging from highly specific (individual exemplars) to highly abstract (animate vs. inanimate), while their brain activity was recorded with magnetoencephalography (MEG). We used a sliding time window decoding approach to decode the exemplar and the exemplar's category that participants were viewing on a moment-to-moment basis. We found exemplar and category membership could be decoded from the neuromagnetic recordings shortly after stimulus onset (<100 ms) with peak decodability following thereafter. Latencies for peak decodability varied systematically with the level of category abstraction with more abstract categories emerging later, indicating that the brain hierarchically constructs category representations. In addition, we examined the stationarity of patterns of activity in the brain that encode object category information and show these patterns vary over time, suggesting the brain might use flexible time varying codes to represent visual object categories.
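A minimal sketch of the sliding time window decoding idea described above, assuming simulated placeholder data and a generic linear classifier rather than the study's actual pipeline:

```python
# Sliding-window decoding sketch: classify object category from sensor patterns
# averaged within a short moving window. All data are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times, win = 200, 64, 120, 5

X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 2, n_trials)                          # e.g. animate vs inanimate

accuracy = np.empty(n_times - win)
for t in range(n_times - win):
    # Features: sensor patterns averaged within the current sliding window
    Xt = X[:, :, t:t + win].mean(axis=2)
    accuracy[t] = cross_val_score(LinearDiscriminantAnalysis(), Xt, y, cv=5).mean()

peak = accuracy.argmax()
print(f"peak decoding accuracy {accuracy[peak]:.2f} at window {peak}")
```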
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 03-11-2011
DOI: 10.1167/11.13.4
Abstract: The "neural correlate" of perceptual awareness is much sought-after. Here, we present an novel approach to the identification of possible neural correlates, in which we exploit the temporal connection that inevitably links the selection process that determines what we become aware of, and the development of awareness itself. Because the speed of selection determines when downstream processes can first become involved in generating awareness, the latency of neural processes provides a way to isolate the neural correlates of awareness. We recorded event-related potentials (ERPs) while observers carried out a visual behavioral task designed to estimate attentional selection latency. We show that within-task trial-by-trial behavioral variability in attentional selection latency correlates to trial-by-trial variability in ERP latency. This was true in a posterior contralateral region, and in central and frontal areas, thereby implicating these as waypoints along which visual information flows on the way to visual awareness.
Publisher: Cold Spring Harbor Laboratory
Date: 12-04-2019
DOI: 10.1101/607499
Abstract: How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long, Yu, & Konkle, 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system’s capacity to use image features to resolve a recognisable object.
Publisher: Oxford University Press (OUP)
Date: 2023
DOI: 10.1093/NC/NIAD018
Abstract: Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
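The following toy simulation is a loose first approximation of the kind of serially connected, bidirectional hierarchy with competition described above; the update rule, parameters, and variable names are illustrative assumptions, not the authors' model. With competition enabled, the externally driven signal dominates the lowest levels while the internally generated signal dominates the highest levels.

```python
# Toy sketch: sensory input enters at the bottom of a hierarchy, imagery at the top,
# and the two propagate in opposite directions while competing at each level.
import numpy as np

n_levels, n_iters = 5, 50
sensory_drive = 1.0      # strength of external input entering at level 0
imagery_drive = 1.0      # strength of internally generated input entering at the top
competition = True       # divisive competition between the two signal sources

sensory = np.zeros(n_levels)
imagery = np.zeros(n_levels)

for _ in range(n_iters):
    new_s, new_i = sensory.copy(), imagery.copy()
    for lvl in range(n_levels):
        # bottom-up propagation of the sensory signal
        s_in = sensory_drive if lvl == 0 else sensory[lvl - 1]
        # top-down propagation of the imagined signal
        i_in = imagery_drive if lvl == n_levels - 1 else imagery[lvl + 1]
        if competition:
            total = s_in + i_in + 1e-9
            new_s[lvl], new_i[lvl] = s_in**2 / total, i_in**2 / total
        else:
            new_s[lvl], new_i[lvl] = s_in, i_in
    sensory, imagery = new_s, new_i

for lvl in range(n_levels):
    print(f"level {lvl}: sensory {sensory[lvl]:.2f}  imagery {imagery[lvl]:.2f}")
```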
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 07-2009
DOI: 10.1167/9.7.5
Publisher: Elsevier BV
Date: 11-2019
DOI: 10.1016/J.NEUROIMAGE.2019.116083
Abstract: How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
Publisher: American Physiological Society
Date: 07-2017
Abstract: The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of marmoset monkey. Using representational similarity analysis we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in area MT population response that may explain the stimulus-dependent dynamics of motion vision. NEW & NOTEWORTHY Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in area MT population response.
Publisher: SAGE Publications
Date: 04-2007
DOI: 10.1111/J.1467-9280.2007.01892.X
Abstract: Recent research has shown that four small dots presented in the vicinity of, but not adjacent to, a target stimulus can banish that stimulus from conscious awareness. It is thought that the mental representation of the masked stimulus is “erased” by the trailing quartet of dots. Using functional magnetic resonance adaptation, we show that there is no persisting neural representation of the successfully masked stimulus in lateral occipital cortex, a region that has been implicated in the processing of object structure. This finding rules out the alternative interpretation that a lingering neural representation is merely rendered inaccessible to consciousness, as is the fate, for example, of monocular information under conditions of binocular rivalry.
Publisher: SAGE Publications
Date: 18-05-2010
Abstract: When a warrior picks up a sword for battle, do sword and soldier become one? The notion of an extended sense of the body has been the topic of philosophical discussion for more than a century and more recently has been subjected to empirical tests by psychologists and neuroscientists. We used a unique afterimage paradigm to test if, and under what conditions, objects are integrated into an extended body sense. Our experiments provide empirical support for the notion that objects can be integrated into an extended sense of the body. Our findings further indicate that this extended body sense is highly plastic, quickly assimilating objects that are in physical contact with the observer. Finally, we show that this extended body sense is limited to first-order extensions, thus constraining how far one can extend oneself into the environment.
Publisher: Cold Spring Harbor Laboratory
Date: 26-06-2020
DOI: 10.1101/2020.06.25.172643
Abstract: The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at osf.io/7zhwp/.
Publisher: Public Library of Science (PLoS)
Date: 24-06-2015
Publisher: Cold Spring Harbor Laboratory
Date: 18-05-2023
DOI: 10.1101/2023.05.15.540306
Abstract: Although mental imagery is often studied as a visual phenomenon, it can occur in any sensory modality. Given that mental images may recruit similar modality-specific neural systems to those which support veridical perception, the properties of mental images may be constrained by the modality in which they are experienced. Yet, little is known about how mental images are experienced at all, let alone how such experiences may vary depending on the modality in which they occur. Here we explored how mental images are experienced in different modalities using an extensive questionnaire. Mainly focusing on visual and auditory mental imagery, we surveyed participants on if and how they experienced their thought content in a sensory way when thinking about the appearance or sound of the letter “O”. Specifically, we investigated temporal properties of imagined content (e.g. onset latency, duration), as well as spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), dependence on body movements (e.g. eye movements), interactions between real and imagined content (e.g. inner speech during reading), the perceived normality of imagery experiences, and how participants labeled their own experiences. Participants also ranked their mental imagery experiences in the five traditional sensory modalities and reported on the involvement of each modality during their thoughts, imagination, and dreams. Confidence ratings were taken for every answer recorded. Overall, visual and auditory experiences tended to dominate mental events relative to other sensory modalities. However, most people reported that auditory mental imagery was superior to visual mental imagery on almost every metric tested, except with respect to spatial properties. Our findings suggest that mental images are restrained in a similar manner to other modality-specific sensory processes in the brain. Broadly, our work also provides a wealth of insights and observations into how mental images are experienced by individuals, acting as a useful resource for future investigations.
Publisher: National Institute for Health and Care Research
Date: 08-2023
DOI: 10.3310/TKJY2101
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 13-02-2006
DOI: 10.1167/6.2.4
Publisher: Cold Spring Harbor Laboratory
Date: 16-01-2018
DOI: 10.1101/248583
Abstract: Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that information decoded as such by the experimenter is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a spatially-unbiased multivariate decoding analysis. We then related brain activation patterns to behaviour using a machine-learning based extension of signal detection theory. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations predicted categorisation behaviour, located mainly in anterior ventral temporal cortex. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.
Publisher: Elsevier BV
Date: 09-2000
DOI: 10.1016/S0960-9822(00)00672-2
Abstract: When two qualitatively different stimuli are presented at the same time, one to each eye, the stimuli can either integrate or compete with each other. When they compete, one of the two stimuli is alternately suppressed, a phenomenon called binocular rivalry [1,2]. When they integrate, observers see some form of the combined stimuli. Many different properties (for example, shape or color) of the two stimuli can induce binocular rivalry. Not all differences result in rivalry, however. Visual 'beats', for example, are the result of integration of high-frequency flicker between the two eyes [3,4], and are thus a binocular fusion phenomenon. It remains in dispute whether binocular fusion and rivalry can co-exist with one another [5-7]. Here, we report that rivalry and beats, two apparently opposing phenomena, can be perceived at the same time within the same spatial location. We hypothesized that interocular differences in visual attributes that are predominantly processed in the Parvocellular pathway lead to rivalry, whereas differences in visual attributes that are predominantly processed in the Magnocellular pathway tend to integrate. Further predictions based on this hypothesis were tested and confirmed.
Publisher: MIT Press - Journals
Date: 2014
DOI: 10.1162/JOCN_A_00476
Abstract: How does the brain translate an internal representation of an object into a decision about the object's category? Recent studies have uncovered the structure of object representations in inferior temporal cortex (IT) using multivariate pattern analysis methods. These studies have shown that representations of individual object exemplars in IT occupy distinct locations in a high-dimensional activation space, with object exemplar representations clustering into distinguishable regions based on category (e.g., animate vs. inanimate objects). In this study, we hypothesized that a representational boundary between category representations in this activation space also constitutes a decision boundary for categorization. We show that behavioral RTs for categorizing objects are well described by our activation space hypothesis. Interpreted in terms of classical and contemporary models of decision-making, our results suggest that the process of settling on an internal representation of a stimulus is itself partially constitutive of decision-making for object categorization.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-12-2006
DOI: 10.1167/6.12.6
Publisher: Society for Neuroscience
Date: 21-12-2016
DOI: 10.1523/JNEUROSCI.2690-16.2016
Abstract: Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently, Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether “edge-related activity” underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. SIGNIFICANCE STATEMENT A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.
Publisher: Elsevier BV
Date: 11-2009
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2009.05.014
Abstract: Conflict between sensory modalities can be resolved by one modality overwriting another. For example, movement of a limb that is visible in a stationary visual afterimage results in selective fading of that limb in the afterimage. We investigated the interaction of these two sensory modalities by inducing a mismatch between visual and proprioceptive hand location. Whereas this discrepancy did not affect the initial appearance of the hand in the afterimage, it did prevent subsequent motion with the hand from affecting the hand's appearance. Location mismatch disconnected the visual and proprioceptive experiences of the hand, "protecting" the visual afterimage from interaction with proprioception. Investigation of subjective higher order bodily experiences showed a strong negative correlation between afterimage disruption and the subjective feeling of ownership, suggesting that the brain can resolve multimodal location mismatch by 'disowning' a visible limb, and that the interaction between proprioception and vision is mediated by higher order bodily experiences.
Publisher: Society for Neuroscience
Date: 30-01-2020
DOI: 10.1523/JNEUROSCI.1399-19.2020
Abstract: In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low-to-high level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone. SIGNIFICANCE STATEMENT Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones. Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the “tonal hierarchy” of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
Publisher: Elsevier BV
Date: 10-2019
DOI: 10.1016/J.NEUROIMAGE.2019.06.062
Abstract: Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N = 18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.
Publisher: Frontiers Media SA
Date: 17-04-2014
Publisher: Center for Open Science
Date: 17-08-2022
Abstract: Humans have little difficulty recognising visual objects in many circumstances, despite the very different retinal images that result from different viewpoints. One source of variability is 2-D rotation, where an object seen from different perspectives results in different orientations. Here, we studied how the brain transforms rotated object images into object representations that are tolerant to rotation. We measured time-varying electroencephalography responses to objects in eight orientations, presented at either 5 Hz or 20 Hz. We used multivariate classification to assess at what point in time rotation-tolerant object information emerged, and whether we could disrupt the rotation-tolerant object processing by presenting stimuli rapidly (20 Hz) to limit the depth of processing. We compared this to fixed-rotation measures of object decoding, where the classifier is trained and tested on the same orientation. Our results showed that both fixed-rotation and rotation-tolerant object decoding emerged at an early stage of processing, less than 100 ms after stimulus onset. However, rotation-tolerant information peaked later than fixed-rotation information, suggesting rotation-tolerant object representations are most robust during a late stage of processing, around 200 ms after stimulus onset. Both fixed-rotation and rotation-tolerant object information was lower for the 20 Hz compared to 5 Hz presentation rate, which suggests that object information processing is disrupted, but not eliminated, for fast presentation rates. Our results show that object information arises at similar times in the brain regardless of whether it is investigated with the fixed-rotation or rotation-tolerant object decoding method, but it is the later stage of processing that reconciles different viewpoints into a single rotation-tolerant representation.
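The rotation-tolerant decoding logic described above can be sketched as cross-condition generalisation: train a classifier on trials from one image orientation and test it on trials from another, at each time point. The data and classifier below are illustrative placeholders, not the study's pipeline.

```python
# Cross-orientation (rotation-tolerant) decoding sketch with simulated data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 100, 64, 80

# EEG for the same objects shown at two orientations; labels are object identity
X_rot0 = rng.standard_normal((n_trials, n_channels, n_times))
X_rot45 = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

tolerant_acc = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_rot0[:, :, t], y)
    tolerant_acc[t] = clf.score(X_rot45[:, :, t], y)  # generalisation across rotation

print("peak rotation-tolerant accuracy:", tolerant_acc.max().round(2))
```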
Publisher: Cold Spring Harbor Laboratory
Date: 09-01-2019
DOI: 10.1101/515619
Abstract: Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
Publisher: MIT Press
Date: 12-2017
DOI: 10.1162/JOCN_A_01177
Abstract: Animacy is a robust organizing principle among object category representations in the human brain. Using multivariate pattern analysis methods, it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer RTs for the same animacy categorization task [Ritchie, J. B., Tovar, D. A., & Carlson, T. A. Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316, 2015; Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132–142, 2014]. Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary and increase RTs. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the linear ballistic accumulator [Brown, S. D., & Heathcote, A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178, 2008]. We found that distance to the classifier boundary correlated with RT, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted toward the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
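A minimal sketch of the distance-to-boundary analysis summarised above, with simulated placeholder data (trial patterns, labels, and reaction times are all hypothetical): the distance of each trial's activation pattern from a linear classifier's decision boundary is correlated with behavioural reaction time.

```python
# Distance-to-boundary vs reaction time sketch with simulated data.
import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_trials, n_features = 300, 100

X = rng.standard_normal((n_trials, n_features))   # neural patterns at one time point
y = rng.integers(0, 2, n_trials)                   # animate vs inanimate labels
rts = rng.uniform(0.4, 1.2, n_trials)              # reaction times in seconds

clf = LinearSVC(dual=False, max_iter=5000).fit(X, y)
# Unsigned distance to the decision boundary; larger = further from the boundary
distance = np.abs(clf.decision_function(X))

rho, p = spearmanr(distance, rts)
print(f"distance-to-boundary vs RT: rho = {rho:.2f}, p = {p:.3f}")
```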
Publisher: Pion Ltd
Date: 04-2015
DOI: 10.1068/I0723SAS
Publisher: Springer Science and Business Media LLC
Date: 16-01-2018
DOI: 10.1038/S41598-018-19222-3
Abstract: In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using Magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to “decode” the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain’s representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
Publisher: Oxford University Press (OUP)
Date: 23-06-2014
DOI: 10.1093/SCAN/NSU089
Publisher: Cold Spring Harbor Laboratory
Date: 09-04-2020
DOI: 10.1101/2020.04.08.032888
Abstract: Classic models of predictive coding propose that sensory systems use information retained from prior experience to predict current sensory input. Any mismatch between predicted and current input (prediction error) is then fed forward up the hierarchy leading to a revision of the prediction. We tested this hypothesis in the domain of object vision using a combination of multivariate pattern analysis and time-resolved electroencephalography. We presented participants with sequences of images that stepped around fixation in a predictable order. On the majority of presentations, the images conformed to a consistent pattern of position order and object category order; however, on a subset of presentations the last image in the sequence violated the established pattern in either the predicted category or the predicted position of the object. Contrary to classic predictive coding, when decoding position and category we found no differences in decoding accuracy between predictable and violation conditions. However, consistent with recent extensions of predictive coding, exploratory analyses showed that a greater proportion of predictions was made to the forthcoming position in the sequence than to either the previous position or the position behind the previous position, suggesting that the visual system actively anticipates future input as opposed to just inferring current input.
Publisher: Cold Spring Harbor Laboratory
Date: 16-07-2018
DOI: 10.1101/369926
Abstract: Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N=18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.
Publisher: MIT Press - Journals
Date: 10-2014
DOI: 10.1162/JOCN_A_00644
Abstract: Objects occupy space. How does the brain represent the spatial location of objects? Retinotopic early visual cortex has precise location information but can only segment simple objects. On the other hand, higher visual areas can resolve complex objects but only have coarse location information. Thus coarse location of complex objects might be represented by either (a) feedback from higher areas to early retinotopic areas or (b) coarse position encoding in higher areas. We tested these alternatives by presenting various kinds of first- (edge-defined) and second-order (texture) objects. We applied multivariate classifiers to the pattern of EEG amplitudes across the scalp at a range of time points to trace the temporal dynamics of coarse location representation. For edge-defined objects, peak classification performance was high and early and thus attributable to the retinotopic layout of early visual cortex. For texture objects, it was low and late. Crucially, despite these differences in peak performance and timing, training a classifier on one object and testing it on others revealed that the topography at peak performance was the same for both first- and second-order objects. That is, the same location information, encoded by early visual areas, was available for both edge-defined and texture objects at different time points. These results indicate that locations of complex objects such as textures, although not represented in the bottom–up sweep, are encoded later by neural patterns resembling the bottom–up ones. We conclude that feedback mechanisms play an important role in coarse location representation of complex objects.
Publisher: Cold Spring Harbor Laboratory
Date: 17-05-2019
DOI: 10.1101/637603
Abstract: Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Our results indicate that the dynamics of imagery processes are more variable across, and within, participants compared to perception of physical stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results for our understanding of the neural processes underlying mental imagery.
Publisher: Springer Science and Business Media LLC
Date: 10-01-2022
DOI: 10.1038/S41597-021-01102-7
Abstract: The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Publisher: Elsevier BV
Date: 05-2016
DOI: 10.1016/J.NEUROIMAGE.2016.02.019
Abstract: Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity.
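For reference, a lower-bound noise-ceiling estimate of the kind mentioned above is commonly computed by correlating each subject's dissimilarity matrix with the average of the remaining subjects'. The sketch below uses simulated placeholder RDMs; it illustrates the general procedure rather than this study's exact computation.

```python
# Lower-bound noise-ceiling sketch: leave-one-subject-out RDM correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_subjects, n_pairs = 20, 190          # 190 pairs corresponds to 20 stimuli

# One vectorised RDM per subject (simulated placeholders)
rdms = rng.standard_normal((n_subjects, n_pairs))

lower_bound = np.mean([
    spearmanr(rdms[s], rdms[np.arange(n_subjects) != s].mean(axis=0))[0]
    for s in range(n_subjects)
])
print(f"noise ceiling (lower bound): {lower_bound:.2f}")
```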
Publisher: Cold Spring Harbor Laboratory
Date: 17-08-2018
DOI: 10.1101/394148
Abstract: In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
Publisher: MIT Press
Date: 07-2018
DOI: 10.1162/JOCN_A_01257
Abstract: Numerical format describes the way magnitude is conveyed, for example, as a digit (“3”) or Roman numeral (“III”). In the field of numerical cognition, there is an ongoing debate of whether magnitude representation is independent of numerical format. Here, we examine the time course of magnitude processing when using different symbolic formats. We presented participants with a series of digits and dice patterns corresponding to the magnitudes of 1 to 6 while performing a 1-back task on magnitude. Magnetoencephalography offers an opportunity to record brain activity with high temporal resolution. Multivariate pattern analysis applied to magnetoencephalographic data allows us to draw conclusions about brain activation patterns underlying information processing over time. The results show that we can cross-decode magnitude when training the classifier on magnitude presented in one symbolic format and testing the classifier on the other symbolic format. This suggests a similar representation of these numerical symbols. In addition, results from a time generalization analysis show that digits were accessed slightly earlier than dice, demonstrating temporal asynchronies in their shared representation of magnitude. Together, our methods allow a distinction between format-specific signals and format-independent representations of magnitude showing evidence that there is a shared representation of magnitude accessed via different symbols.
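A minimal sketch of the cross-format decoding described above, assuming simulated placeholder MEG data: a classifier is trained to decode magnitude from digit trials and tested on dice trials at each time point, so above-chance accuracy would indicate a format-independent magnitude code.

```python
# Cross-format magnitude decoding sketch (train on digits, test on dice).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times = 180, 64, 90

X_digits = rng.standard_normal((n_trials, n_sensors, n_times))
X_dice = rng.standard_normal((n_trials, n_sensors, n_times))
magnitude = rng.integers(1, 7, n_trials)   # magnitudes 1 to 6

cross_acc = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_digits[:, :, t], magnitude)
    cross_acc[t] = clf.score(X_dice[:, :, t], magnitude)

print("mean cross-format accuracy:", cross_acc.mean().round(3))
```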
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.NEUROIMAGE.2016.01.006
Abstract: Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas, along with feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception.
Publisher: Proceedings of the National Academy of Sciences
Date: 04-12-2007
Abstract: Increasing evidence suggests that attention can concurrently select multiple locations yet it is not clear whether this ability relies on continuous allocation of attention to the different targets (a “parallel” strategy) or whether attention switches rapidly between the targets (a periodic “sampling” strategy). Here, we propose a method to distinguish between these two alternatives. The human psychometric function for detection of a single target as a function of its duration can be used to predict the corresponding function for two or more attended targets. Importantly, the predicted curves differ, depending on whether a parallel or sampling strategy is assumed. For a challenging detection task, we found that human performance was best reflected by a sampling model, indicating that multiple items of interest were processed in series at a rate of approximately seven items per second. Surprisingly, the data suggested that attention operated in this periodic regime, even when it was focused on a single target. That is, attention might rely on an intrinsically periodic process.
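The contrast between the two strategies can be illustrated with a toy psychometric function; the functional form and parameters below are assumptions for illustration, not the authors' model. Under a parallel strategy each target effectively receives the full stimulus duration, whereas under a sampling strategy attention alternates between the two targets so each is effectively sampled for only half of it.

```python
# Toy prediction of two-target detection from a single-target psychometric function
# under "parallel" versus "sampling" attention strategies (illustrative assumptions).
import numpy as np

def p_single(duration_ms, threshold=100.0, slope=2.0):
    """Toy single-target psychometric function (Weibull-like)."""
    return 1 - np.exp(-(duration_ms / threshold) ** slope)

durations = np.array([50, 100, 200, 400], dtype=float)

# Parallel strategy: both targets are monitored continuously,
# so each target effectively receives the full stimulus duration.
p_parallel = p_single(durations)

# Sampling strategy: attention alternates between the two targets,
# so each target is effectively sampled for only half of the duration.
p_sampling = p_single(durations / 2)

for d, pp, ps in zip(durations, p_parallel, p_sampling):
    print(f"{d:5.0f} ms  parallel {pp:.2f}  sampling {ps:.2f}")
```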
Publisher: The Neurons Behavior Data Analysis and Theory collective
Date: 17-02-2021
DOI: 10.51628/001C.21174
Abstract: The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at osf.io/7zhwp/.
Publisher: Elsevier BV
Date: 10-2017
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2017.02.013
Abstract: Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
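A minimal time-resolved decoding pipeline of the kind reviewed here, using MNE-Python and scikit-learn on simulated epochs; the simulated data, classifier choice, and cross-validation settings are illustrative assumptions rather than a prescription.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))  # simulated M/EEG epochs
y = rng.integers(0, 2, n_trials)                           # two object categories

# Fit one classifier per time point and score it with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5).mean(axis=0)
print(scores.shape)  # one cross-validated accuracy per time point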
Publisher: Springer Science and Business Media LLC
Date: 11-2010
DOI: 10.3758/BF03196683
Abstract: Visual attention can be divided over multiple objects or locations. However, there is no single theoretical framework within which the effects of dividing attention can be interpreted. In order to develop such a model, here we manipulated the stage of visual processing at which attention was divided, while simultaneously probing the costs of dividing attention on two dimensions. We show that dividing attention incurs dissociable time and precision costs, which depend on whether attention is divided during monitoring or during access. Dividing attention during monitoring resulted in progressively delayed access to attended locations as additional locations were monitored, as well as a one-off precision cost. When dividing attention during access, time costs were systematically lower at one of the accessed locations than at the other, indicating that divided attention during access, in fact, involves rapid sequential allocation of undivided attention. We propose a model in which divided attention is understood as the simultaneous parallel preparation and subsequent sequential execution of multiple shifts of undivided attention. This interpretation has the potential to bring together diverse findings from both the divided-attention and saccade preparation literature and provides a framework within which to integrate the broad spectrum of divided-attention methodologies.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-09-2011
DOI: 10.1167/11.10.9
Abstract: We effortlessly and seemingly instantaneously recognize thousands of objects, although we rarely--if ever--see the same image of an object twice. The retinal image of an object can vary by context, size, viewpoint, illumination, and location. The present study examined how the visual system abstracts object category across variations in retinal location. In three experiments, participants viewed images of objects presented to different retinal locations while brain activity was recorded using magnetoencephalography (MEG). A pattern classifier was trained to recover the stimulus position (Experiments 1, 2, and 3) and category (Experiment 3) from the recordings. Using this decoding approach, we show that an object's location in the visual field can be recovered in high temporal resolution (5 ms) and with sufficient fidelity to capture topographic organization in visual areas. Experiment 3 showed that an object's category could be recovered from the recordings as early as 135 ms after the onset of the stimulus and that category decoding generalized across retinal location (i.e., position invariance). Our experiments thus show that the visual system rapidly constructs a category representation for objects that is invariant to position.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 21-11-2007
DOI: 10.1167/7.14.2
Publisher: Elsevier BV
Date: 2011
DOI: 10.1016/J.CORTEX.2009.08.015
Abstract: The present study examined the coding of spatial position in object selective cortex. Using functional magnetic resonance imaging (fMRI) and pattern classification analysis, we find that three areas in object selective cortex, the lateral occipital cortex area (LO), the fusiform face area (FFA), and the parahippocampal place area (PPA), robustly code the spatial position of objects. The analysis further revealed several anisotropies (e.g., horizontal/vertical asymmetry) in the representation of visual space in these areas. Finally, we show that the representation of information in these areas permits object category information to be extracted across varying locations in the visual field, a finding that suggests a potential neural solution to accomplishing translation invariance.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROIMAGE.2018.06.022
Abstract: Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that decodable information is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a multivariate decoding analysis and a spatially-unbiased searchlight approach. We then related brain activation patterns to behaviour by testing whether the classifier used for decoding can be used to predict behaviour. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations predicted categorisation behaviour, which were strongest in anterior ventral temporal cortex. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.
Publisher: Elsevier BV
Date: 06-2006
DOI: 10.1016/J.MRI.2005.12.005
Abstract: The connectivity between functionally distinct areas in the human brain is unknown because of the limitations posed by current postmortem anatomical labeling techniques. Diffusion tensor imaging (DTI) has previously been used to define large white matter tracts based on well-known anatomical landmarks in the living human brain. In the present study, we used DTI coupled with functional magnetic resonance imaging (fMRI) to assess neuronal connections between human striate and functionally defined extrastriate ventral cortical areas. Functional areas were identified with conventional fMRI mapping procedures and then used as seeding points in a DTI analysis to ascertain connectivity patterns between cortical areas, thus yielding the pattern of connections between human occipitoventral visual areas in vivo.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-08-2011
DOI: 10.1167/11.9.2
Publisher: Cold Spring Harbor Laboratory
Date: 07-06-2023
DOI: 10.1101/2023.06.06.543985
Abstract: Humans make decisions about food every day. The visual system provides important information that forms a basis for these food decisions. Although previous research has focussed on visual object and category representations in the brain, it is still unclear how visually presented food is encoded by the brain. Here, we investigate the time-course of food representations in the brain. We used time-resolved multivariate analyses of electroencephalography (EEG) data, obtained from human participants (both sexes), to determine which food features are represented in the brain, and whether focused attention is needed for this. We recorded EEG while participants engaged in one of two tasks. In one task the stimuli were task relevant, whereas in the other task the stimuli were not task relevant. Our findings indicate that the brain can differentiate between food and non-food items from approximately 84 milliseconds after stimulus onset. The neural signal also contained information about food naturalness, the level of transformation, as well as the perceived caloric content. This information was present regardless of whether the food items were task relevant or not. Information about the perceived immediate edibility of the food, however, was only present when the food was task relevant. Furthermore, the recorded brain activity correlated with the behavioural responses in an odd-item-out task. Together, our results contribute to our understanding of how the human brain processes visually presented food.
Publisher: Cold Spring Harbor Laboratory
Date: 03-09-2022
DOI: 10.1101/2022.09.02.506121
Abstract: Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally-generated stimuli (i.e. sensory input) and internally-generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially-connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first-approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume low-level stimulus information (e.g. in early visual cortex) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally-generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
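The competition idea can be caricatured in a few lines of Python: bottom-up and top-down signals enter a chain of processing levels at opposite ends, decay as they propagate, and compete divisively at each level. The number of levels, gains, and the divisive rule are assumptions made purely for illustration and are not the paper's model.

import numpy as np

n_levels = 5
sensory = 1.0      # strength of external input entering at the lowest level
imagery = 0.8      # strength of internally generated input entering at the top level
gain = 0.7         # assumed attenuation per level as signals propagate up or down

def propagate(source, entry_top):
    # Signal strength of one input at every level, decaying with distance from its entry point.
    levels = np.arange(n_levels)
    distance = (n_levels - 1 - levels) if entry_top else levels
    return source * gain ** distance

bottom_up = propagate(sensory, entry_top=False)  # strongest at low levels
top_down = propagate(imagery, entry_top=True)    # strongest at high levels

# Divisive competition: each signal's share of the total drive at each level.
imagery_share = top_down / (top_down + bottom_up)
for lvl, share in enumerate(imagery_share):
    print(f"level {lvl}: imagery accounts for {share:.2f} of the drive")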
Publisher: Center for Open Science
Date: 14-12-2020
Abstract: Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. While at a group level people performed near chance classifying real and realistic fakes, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key in determining the 'real' in our new reality.
Publisher: Cold Spring Harbor Laboratory
Date: 21-07-2017
DOI: 10.1101/166660
Abstract: Animacy is a robust organizing principle amongst object category representations in the human brain. Using multivariate pattern analysis methods (MVPA), it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer reaction times for the same animacy categorization task (Carlson, Ritchie, Kriegeskorte, Durvasula, & Ma, 2014; Ritchie, Tovar, & Carlson, 2015). Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary, and increase reaction times. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the Linear Ballistic Accumulator (Brown & Heathcote, 2008). We found that distance to the classifier boundary correlated with reaction time, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained for longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted towards the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
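A hedged sketch of the distance-to-boundary analysis described: train a linear classifier on animacy, take each exemplar's signed distance from the decision boundary, and correlate its magnitude with reaction times. The data and variable names are simulated placeholders, not the study's recordings or behaviour.

import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_exemplars, n_features = 48, 200
X = rng.standard_normal((n_exemplars, n_features))  # neural pattern per exemplar
y = np.repeat([0, 1], n_exemplars // 2)              # 0 = inanimate, 1 = animate
rt = rng.uniform(0.4, 0.9, n_exemplars)              # mean categorisation RT per exemplar

clf = LinearSVC(dual=True, max_iter=10000).fit(X, y)
distance = clf.decision_function(X)                  # signed distance to the boundary

# The paper's prediction: exemplars further from the boundary are categorised faster,
# i.e. a negative correlation between |distance| and RT.
rho, p = spearmanr(np.abs(distance), rt)
print(rho, p)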
Publisher: Elsevier BV
Date: 04-2015
DOI: 10.1016/J.NEUROIMAGE.2015.02.009
Abstract: Multivariate pattern analysis (MVPA) has become an increasingly popular approach to fMRI research because these methods offer the attractive possibility of "decoding" the content of brain representations. One weakness of MVPA is that the source of decodable information is not always apparent, as evidenced by the ongoing debate about orientation decoding in human visual cortex. In a recent study (Carlson, 2014), we used an unbiased model of visual cortex to reveal a new source of decodable information that may account for orientation decoding. Clifford and Mannion (2015) take issue with the model's capacity to decode spiral sense. Here, we discuss their findings in the context of the ongoing debate on orientation decoding and further highlight the limitations of using MVPA to infer the content of brain representations.
Publisher: MIT Press - Journals
Date: 2014
DOI: 10.1162/JOCN_A_00458
Abstract: In the ventral visual pathway, early visual areas encode light patterns on the retina in terms of image properties, for example, edges and color, whereas higher areas encode visual information in terms of objects and categories. At what point does semantic knowledge, as instantiated in human language, emerge? We examined this question by studying whether semantic similarity in language relates to the brain's organization of object representations in inferior temporal cortex (ITC), an area of the brain at the crux of several proposals describing how the brain might represent conceptual knowledge. Semantic relationships among words can be viewed as a geometrical structure with some pairs of words close in their meaning (e.g., man and boy) and other pairs more distant (e.g., man and tomato). ITC's representation of objects similarly can be viewed as a complex structure with some pairs of stimuli evoking similar patterns of activation (e.g., man and boy) and other pairs evoking very different patterns (e.g., man and tomato). In this study, we examined whether the geometry of visual object representations in ITC bears a correspondence to the geometry of semantic relationships between word labels used to describe the objects. We compared ITC's representation to semantic structure, evaluated by explicit ratings of semantic similarity and by five computational measures of semantic similarity. We show that the representational geometry of ITC—but not of earlier visual areas (V1)—is reflected both in explicit behavioral ratings of semantic similarity and also in measures of semantic similarity derived from word usage patterns in natural language. Our findings show that patterns of brain activity in ITC not only reflect the organization of visual information into objects but also represent objects in a format compatible with conceptual thought and language.
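The comparison of representational geometries can be illustrated with a minimal representational similarity analysis sketch: build a dissimilarity matrix from neural patterns and another from a semantic model, then correlate the two. All inputs below are simulated stand-ins for the study's data.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_objects, n_voxels, n_semantic_dims = 20, 300, 50
neural_patterns = rng.standard_normal((n_objects, n_voxels))          # e.g. ITC response patterns
semantic_vectors = rng.standard_normal((n_objects, n_semantic_dims))  # e.g. word-usage features

# Representational dissimilarity matrices (condensed form), one per representation.
neural_rdm = pdist(neural_patterns, metric="correlation")
semantic_rdm = pdist(semantic_vectors, metric="correlation")

# Second-order correlation: does the neural geometry mirror the semantic geometry?
rho, p = spearmanr(neural_rdm, semantic_rdm)
print(rho, p)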
Publisher: MIT Press - Journals
Date: 04-2017
DOI: 10.1162/JOCN_A_01068
Abstract: Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain–computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to “decode” different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
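One preprocessing choice discussed in this tutorial, trial averaging, can be sketched as follows: single trials of the same class are randomly grouped and averaged into pseudo-trials before decoding, trading training-set size for signal-to-noise. The grouping scheme, shapes, and function name below are assumptions for the example.

import numpy as np

def make_pseudotrials(X, y, n_per_average=4, rng=None):
    # Randomly group same-class trials and average them channel- and time-wise.
    if rng is None:
        rng = np.random.default_rng()
    pseudo_X, pseudo_y = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.where(y == label)[0])
        for start in range(0, len(idx) - n_per_average + 1, n_per_average):
            pseudo_X.append(X[idx[start:start + n_per_average]].mean(axis=0))
            pseudo_y.append(label)
    return np.array(pseudo_X), np.array(pseudo_y)

rng = np.random.default_rng(5)
X = rng.standard_normal((160, 64, 100))  # trials x channels x time points
y = rng.integers(0, 2, 160)
Xp, yp = make_pseudotrials(X, y, n_per_average=4, rng=rng)
print(X.shape, "->", Xp.shape)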
Publisher: Cold Spring Harbor Laboratory
Date: 27-04-2023
DOI: 10.1101/2023.04.26.538486
Abstract: The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations from work in non-human animals, a key missing piece is how the coding of visual features relates to our perceptual experience. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N=16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every feature contributed to perceptual experience, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, are crucial for perceptual experience. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
Publisher: Cold Spring Harbor Laboratory
Date: 25-05-2021
DOI: 10.1101/2021.05.24.445376
Abstract: Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, the effects of attention could be influenced by temporal expectation. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while 1) controlling for target-related confounds, and 2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs at fixation while detecting a “target” grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored) and temporal expectation (stimulus onset timing was either predictable or not). We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset predictability. These results provide insight into the effect of attention on the dynamic processing of competing visual information, presented at the same time and location.
Publisher: Cold Spring Harbor Laboratory
Date: 04-06-2021
DOI: 10.1101/2021.06.03.447008
Abstract: The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Publisher: Cold Spring Harbor Laboratory
Date: 24-05-2019
DOI: 10.1101/648998
Abstract: Neuroimaging studies investigating human object recognition have largely focused on a relatively small number of object categories, in particular, faces, bodies, scenes, and vehicles. More recent studies have taken a broader focus, investigating hypothesised dichotomies, for example animate versus inanimate, and continuous feature dimensions, such as biological similarity. These studies typically have used stimuli that are clearly identified as animate or inanimate, neglecting objects that may not fit into this dichotomy. We generated a novel stimulus set including standard objects and objects that blur the animate-inanimate dichotomy, for example robots and toy animals. We used MEG time-series decoding to study the brain’s emerging representation of these objects. Our analysis examined contemporary models of object coding such as dichotomous animacy, as well as several new higher order models that take into account an object’s capacity for agency (i.e. its ability to move voluntarily) and capacity to experience the world. We show that early brain responses are best accounted for by low-level visual similarity of the objects and, shortly thereafter, higher order models of agency/experience best explained the brain’s representation of the stimuli. Strikingly, a model of human-similarity provided the best account for the brain’s representation after an initial perceptual processing phase. Our findings provide evidence for a new dimension of object coding in the human brain – one that has a “human-centric” focus.
Publisher: Elsevier BV
Date: 03-2019
DOI: 10.1016/J.NEUROIMAGE.2018.12.046
Abstract: In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
Publisher: SAGE Publications
Date: 04-04-2013
Publisher: Elsevier BV
Date: 08-2019
DOI: 10.1016/J.NEUROIMAGE.2019.04.050
Abstract: Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
Publisher: Cold Spring Harbor Laboratory
Date: 08-04-2019
DOI: 10.1101/603043
Abstract: The mere presence of information in the brain does not always mean that this information is available to consciousness (de-Wit, Alexander, Ekroll, & Wagemans, 2016). Experiments using paradigms such as binocular rivalry, visual masking, and the attentional blink have shown that visual information can be processed and represented by the visual system without reaching consciousness. Using multivariate pattern analysis (MVPA) and magneto-encephalography (MEG), we investigated the temporal dynamics of information processing for unconscious and conscious stimuli. We decoded stimulus information from the brain recordings while manipulating visual consciousness by presenting stimuli at threshold contrast in a backward masking paradigm. Participants’ consciousness was measured using both a forced-choice categorisation task and self-report. We show that brain activity during both conscious and non-conscious trials contained stimulus information, and that this information was enhanced in conscious trials. Overall, our results indicate that visual consciousness is characterised by enhanced neural activity representing the visual stimulus, and that this effect arises as early as 180 ms post-stimulus onset.
Publisher: MIT Press - Journals
Date: 2020
DOI: 10.1162/JOCN_A_01472
Abstract: Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 14-04-2011
DOI: 10.1167/11.4.8
Abstract: Object-based attention facilitates the processing of features that form the object. Two hypotheses are conceivable for how object-based attention is deployed to an object's features: first, the object is attended by selecting its features; alternatively, a configuration of features as such is attended by selecting the object representation they form. Only for the latter alternative, the perception of a feature configuration as an entity ("objecthood") is a necessary condition for object-based attention. Disentangling the two alternatives requires the comparison of identical feature configurations that induce the perception of an object in one condition ("bound") and do not do so in another condition ("unbound"). We used an ambiguous stimulus, whose percept spontaneously switches between bound and unbound, while the stimulus itself remains unchanged. We tested discrimination on the boundary of the diamond as well as detection of probes inside and outside the diamond. We found discrimination performance to be increased if features were perceptually bound into an object. Furthermore, detection performance was higher within and lower outside the bound object as compared to the unbound configuration. Consequently, the facilitation of processing by object-based attention requires objecthood, that is, a unified internal representation of an "object", not a mere collection of features.
Publisher: MIT Press - Journals
Date: 10-2009
Abstract: Interhemispheric competition between homologous areas in the human brain is believed to be involved in a wide variety of human behaviors from motor activity to visual perception and particularly attention. For example, patients with lesions in the posterior parietal cortex are unable to selectively track objects in the contralesional side of visual space when targets are simultaneously present in the ipsilesional visual field, a form of visual extinction. Visual extinction may arise due to an imbalance in the normal interhemispheric competition. To directly assess the issue of reciprocal inhibition, we used fMRI to localize those brain regions active during attention-based visual tracking and then applied low-frequency repetitive transcranial magnetic stimulation over identified areas in the left and right intraparietal sulcus to assess the behavioral effects on visual tracking. We induced a severe impairment in visual tracking that was selective for conditions of simultaneous tracking in both visual fields. Our data show that the parietal lobe is essential for visual tracking and that the two hemispheres compete for attentional resources during tracking. Our results provide a neuronal basis for visual extinction in patients with parietal lobe damage.
Publisher: Society for Neuroscience
Date: 11-06-2014
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 13-08-2010
DOI: 10.1167/10.7.1001
Publisher: Frontiers Media SA
Date: 28-04-2016
Publisher: Center for Open Science
Date: 12-11-2021
Abstract: The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.
Publisher: Cold Spring Harbor Laboratory
Date: 03-09-2020
DOI: 10.1101/2020.09.02.279042
Abstract: How does the human brain encode visual object categories? Our understanding of this has advanced substantially with the development of multivariate decoding analyses. However, conventional electroencephalography (EEG) decoding predominantly uses the “mean” neural activation within the analysis window to extract category information. Such temporal averaging overlooks the within-trial neural variability, which is suggested to provide an additional channel for the encoding of information about the complexity and uncertainty of the sensory input. The richness of temporal variabilities, however, has not been systematically compared with the conventional “mean” activity. Here we compare the information content of 31 variability-sensitive features against the “mean” of activity, using three independent highly-varied datasets. In whole-trial decoding, the classical event-related potential (ERP) components of “P2a” and “P2b” provided information comparable to those provided by “Original Magnitude Data (OMD)” and “Wavelet Coefficients (WC)”, the two most informative variability-sensitive features. In time-resolved decoding, the “OMD” and “WC” outperformed all the other features (including “mean”), which were sensitive to limited and specific aspects of temporal variabilities, such as their phase or frequency. The information was more pronounced in the Theta frequency band, previously suggested to support feed-forward visual processing. We concluded that the brain might encode the information in multiple aspects of neural variabilities simultaneously, e.g. phase, amplitude, and frequency, rather than the “mean” per se. In our active categorization dataset, we found that more effective decoding of the neural codes corresponds to better prediction of behavioral performance. Therefore, the incorporation of temporal variabilities in time-resolved decoding can provide additional category information and improved prediction of behavior.
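The contrast between the conventional "mean" feature and variability-sensitive features can be sketched by extracting a few simple per-trial features and decoding each; the variance and coarse spectral features below are simplified stand-ins for, not implementations of, the 31 features evaluated in the paper.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 200, 32, 128
X = rng.standard_normal((n_trials, n_channels, n_times))  # simulated EEG epochs
y = rng.integers(0, 2, n_trials)                           # object category labels

features = {
    "mean": X.mean(axis=2),                                    # conventional within-window mean
    "variance": X.var(axis=2),                                 # within-trial temporal variability
    "spectrum": np.abs(np.fft.rfft(X, axis=2)).mean(axis=2),   # coarse frequency-domain feature
}

for name, feat in features.items():
    acc = cross_val_score(LinearSVC(dual=True, max_iter=10000), feat, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")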
Publisher: Proceedings of the National Academy of Sciences
Date: 02-2021
Abstract: Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia.
Publisher: Elsevier BV
Date: 08-2006
DOI: 10.1016/J.NEUROIMAGE.2006.03.059
Abstract: In our daily lives, recognizing a familiar object is an effortless and seemingly instantaneous process. Our knowledge of how the brain accomplishes this formidable task, however, is quite limited. The present study takes a holistic approach to examining the neural processes that underlie recognition memory. A unique paradigm, in which visual information about the identity of a person or word is slowly titrated to human observers during a functional imaging session, is employed to uncover the dynamics of visual recognition in the brain. The results of the study reveal multiple unique stages in visual recognition that can be dissociated from one another based on temporal asynchronies and hemodynamic response characteristics.
Publisher: Elsevier BV
Date: 06-2019
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2019.04.015
Abstract: The mere presence of information in the brain does not always mean that this information is available to consciousness. Experiments using paradigms such as binocular rivalry, visual masking, and the attentional blink have shown that visual information can be processed and represented by the visual system without reaching consciousness. Using multivariate pattern analysis (MVPA) and magneto-encephalography (MEG), we investigated the temporal dynamics of information processing for unconscious and conscious stimuli. We decoded stimulus information from the brain recordings while manipulating visual consciousness by presenting stimuli at threshold contrast in a backward masking paradigm. Participants' consciousness was measured using both a forced-choice categorisation task and self-report. We show that brain activity during both conscious and non-conscious trials contained stimulus information and that this information was enhanced in conscious trials. Overall, our results indicate that visual consciousness is characterised by enhanced neural activity representing the visual stimulus and that this effect arises as early as 180 ms post-stimulus onset.
Publisher: Cold Spring Harbor Laboratory
Date: 03-03-2020
DOI: 10.1101/2020.03.02.974162
Abstract: Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain “fills-in” information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics (Kwon et al., 2015). In the present study, we used electroencephalography (EEG) and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and to move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at osf.io/8v47t/.
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 06-2013
End Date: 05-2017
Amount: $707,218.00
Funder: Australian Research Council
Start Date: 2016
End Date: 12-2019
Amount: $535,117.00
Funder: Australian Research Council
Start Date: 03-2020
End Date: 12-2024
Amount: $476,198.00
Funder: Australian Research Council