ORCID Profile
0000-0001-6601-6855
Current Organisation
UNSW Sydney
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Psychology | Pattern Recognition and Data Mining | Knowledge Representation and Machine Learning
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.NEUROIMAGE.2016.01.006
Abstract: Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception.
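The time-resolved decoding described in this abstract can be sketched in a few lines: train a classifier on the sensor pattern at each time point and plot cross-validated accuracy over time. The data below are synthetic, and the sensor counts, trial counts, and onset index are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of time-resolved MVPA decoding of object identity from
# MEG-like data. All data are simulated; dimensions are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 30, 50     # assumed dimensions
labels = rng.integers(0, 2, n_trials)         # two object identities

# Synthetic data: identity information appears only after "stimulus onset"
# (time index 20), mimicking decodable structure emerging post-onset.
data = rng.normal(size=(n_trials, n_sensors, n_times))
data[:, :5, 20:] += labels[:, None, None] * 1.5

def decode_timecourse(data, labels):
    """Cross-validated decoding accuracy at each time point."""
    return np.array([
        cross_val_score(LinearDiscriminantAnalysis(),
                        data[:, :, t], labels, cv=5).mean()
        for t in range(data.shape[2])
    ])

acc = decode_timecourse(data, labels)
# Accuracy hovers near chance (0.5) before onset and rises after it;
# the onset of above-chance decoding is the latency statistic of interest.
```

In the real analysis the (80, 30) slice per time point would be source-localized MEG activity from peri-occipital or peri-frontal regions, and onset latencies would be estimated with permutation statistics rather than read off by eye.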
Publisher: Society for Neuroscience
Date: 22-06-2022
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 20-10-2020
Publisher: Elsevier BV
Date: 05-2005
DOI: 10.1016/J.VISRES.2004.12.016
Abstract: Our visual systems constantly adapt their representation of the environment to match the prevailing input. Adaptation phenomena provide striking examples of perceptual plasticity and offer valuable insight into the mechanisms of sensory coding. Here, we describe an aftereffect of adaptation to a spatially structured image whereby an unstructured test stimulus takes on illusory structure locally perpendicular to that of the adaptor. Objective measurement of the strength of the aftereffect for different patterns suggests a neural locus of adaptation prior to the extraction of complex form in the visual processing hierarchy, probably at the level of primary visual cortex. This view is supported by further experiments showing that the aftereffect exhibits partial interocular transfer but complete transfer across opposite contrast polarities. However, the aftereffect does show weak position invariance, suggesting that adaptation at higher levels of the visual system may also contribute to the effect.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROIMAGE.2017.08.019
Abstract: The application of machine learning methods to neuroimaging data has fundamentally altered the field of cognitive neuroscience. Future progress in understanding brain function using these methods will require addressing a number of key methodological and interpretive challenges. Because these challenges often remain unseen and metaphorically "haunt" our efforts to use these methods to understand the brain, we refer to them as "ghosts". In this paper, we describe three such ghosts, situate them within a more general framework from philosophy of science, and then describe steps to address them. The first ghost arises from difficulties in determining what information machine learning classifiers use for decoding. The second ghost arises from the interplay of experimental design and the structure of information in the brain - that is, our methods embody implicit assumptions about information processing in the brain, and it is often difficult to determine if those assumptions are satisfied. The third ghost emerges from our limited ability to distinguish information that is merely decodable from the brain from information that is represented and used by the brain. Each of the three ghosts place limits on the interpretability of decoding research in cognitive neuroscience. There are no easy solutions, but facing these issues squarely will provide a clearer path to understanding the nature of representation and computation in the human brain.
Publisher: Elsevier BV
Date: 08-2022
DOI: 10.1016/J.COGNITION.2022.105172
Abstract: Face detection in human vision relies on a stereotypical pattern of visual features common to different faces. How are these visual features generated in the environment? Here we investigate how characteristic patterns of shading and shadows that occur across the face act as a cue for face detection. We use 3D rendering to isolate facial shading under simulated lighting conditions, comparing the broad patterns of contrast that occur across the face when light arrives from different angles. We find that human performance in discriminating faces from non-face objects using these contrast patterns depends strongly on the lighting direction. In particular, light arriving from above the brow tends to facilitate face detection - consistent with the statistics of real-world lighting environments, in which light commonly arrives more strongly from above. Indeed, in a further experiment, we find that asymmetries in lighting that occur in complex and naturalistic lighting environments produce contrast patterns across the face that facilitate face detection. These effects occurred independent of the lighting direction relative to the viewer, suggesting that cues to face detection emerge from the interaction between face morphology and vertical asymmetries in lighting direction, independent of the viewer's knowledge or expectations about lighting direction. Comparison with the performance of an image classifier suggests that the effects of lighting direction partly reflect differences in image information that result from the interaction between shape and illumination, as well as face detection in human observers being better-tuned to the pattern of shading and shadows that occurs across an upright face that is lit from overhead.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 05-2010
DOI: 10.1167/10.5.25
Abstract: Mechanisms of color vision in cortex have not been as well characterized as those in sub-cortical areas, particularly in humans. We used fMRI in conjunction with univariate and multivariate (pattern) analysis to test for the initial transformation of sub-cortical inputs by human visual cortex. Subjects viewed each of two patterns modulating in color between orange-cyan or lime-magenta. We tested for higher order cortical representations of color capable of discriminating these stimuli, which were designed so that they could not be distinguished by the postulated L-M and S-(L + M) sub-cortical opponent channels. We found differences both in the average response and in the pattern of activity evoked by these two types of stimuli, across a range of early visual areas. This result implies that sub-cortical chromatic channels are recombined early in cortical processing to form novel representations of color. Our results also suggest a cortical bias for lime-magenta over orange-cyan stimuli, when they are matched for cone contrast and the response they would elicit in the L-M and S-(L + M) opponent channels.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROIMAGE.2017.06.068
Abstract: Recent progress in understanding the structure of neural representations in the cerebral cortex has centred around the application of multivariate classification analyses to measurements of brain activity. These analyses have proved a sensitive test of whether given brain regions provide information about specific perceptual or cognitive processes. An exciting extension of this approach is to infer the structure of this information, thereby drawing conclusions about the underlying neural representational space. These approaches rely on exploratory data-driven dimensionality reduction to extract the natural dimensions of neural spaces, including natural visual object and scene representations, semantic and conceptual knowledge, and working memory. However, the efficacy of these exploratory methods is unknown, because they have only been applied to representations in brain areas for which we have little or no secondary knowledge. One of the best-understood areas of the cerebral cortex is area MT of primate visual cortex, which is known to be important in motion analysis. To assess the effectiveness of dimensionality reduction for recovering neural representational space, we applied several dimensionality reduction methods to multielectrode measurements of spiking activity obtained from area MT of marmoset monkeys, made while systematically varying the motion direction and speed of moving stimuli. Despite robust tuning at individual electrodes, and high classifier performance, dimensionality reduction rarely revealed dimensions for direction and speed. We use this example to illustrate important limitations of these analyses, and suggest a framework for how to best apply such methods to data where the structure of the neural representation is unknown.
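The question this abstract poses — whether exploratory dimensionality reduction recovers stimulus dimensions from a tuned population — can be illustrated with a toy simulation. The "electrodes" below are idealized direction-tuned units (a stand-in assumption, not the marmoset MT data), and PCA is one of the several methods the paper compares.

```python
# Hedged sketch: apply PCA to a simulated direction-tuned population and
# ask whether the leading components reflect the stimulus dimension.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
directions = np.deg2rad(np.arange(0, 360, 30))     # 12 motion directions
preferred = rng.uniform(0, 2 * np.pi, size=40)     # 40 tuned units

# Von Mises-like tuning curves: conditions x units firing-rate matrix
rates = np.exp(2.0 * np.cos(directions[:, None] - preferred[None, :]))
rates += rng.normal(scale=0.1, size=rates.shape)   # measurement noise

pca = PCA(n_components=2).fit(rates)
scores = pca.transform(rates)
# For pure direction tuning the first two PCs form a ring ordered by
# direction (cosine/sine components) and capture most of the variance.
```

In this idealized case the direction dimension dominates the leading components; the paper's point is that with real recordings, nuisance variability and mixed selectivity can bury such structure, so high classifier accuracy does not guarantee that dimensionality reduction will expose the underlying dimensions.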
Publisher: Elsevier BV
Date: 2020
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 22-04-2013
DOI: 10.1167/13.5.20
Abstract: Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.
Publisher: American Physiological Society
Date: 07-2017
Abstract: The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of marmoset monkey. Using representational similarity analysis we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in the area MT population response that may explain the stimulus-dependent dynamics of motion vision. NEW & NOTEWORTHY Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in the area MT population response.
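The representational similarity analysis (RSA) named in this abstract compares the geometry of population responses against candidate stimulus models. A minimal sketch, using synthetic direction-tuned responses rather than the study's recordings:

```python
# Hedged sketch of RSA: build a neural representational dissimilarity
# matrix (RDM) from population responses and rank-correlate it with a
# model RDM. Responses are simulated; unit counts are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
directions = np.deg2rad(np.arange(0, 360, 45))       # 8 conditions
preferred = rng.uniform(0, 2 * np.pi, 30)            # 30 tuned units
responses = np.exp(np.cos(directions[:, None] - preferred[None, :]))
responses += rng.normal(scale=0.05, size=responses.shape)

# Neural RDM: correlation distance between condition response patterns
neural_rdm = pdist(responses, metric="correlation")

# Candidate model RDM: circular distance between motion directions
ang = np.abs(directions[:, None] - directions[None, :])
model_rdm = np.minimum(ang, 2 * np.pi - ang)[np.triu_indices(8, k=1)]

rho, _ = spearmanr(neural_rdm, model_rdm)
# High rank correlation indicates the population geometry follows the
# direction model; competing models (e.g. orientation, speed) are
# compared against the same neural RDM.
```

Computing such RDM correlations in sliding time windows is what lets the emergence of orientation-like versus direction-like geometry be tracked over the course of the population response.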
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 22-03-2021
DOI: 10.1167/JOV.21.3.20
Publisher: Open Science Framework
Date: 2021
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 18-03-2010
DOI: 10.1167/6.6.171
Publisher: Cold Spring Harbor Laboratory
Date: 25-01-2019
DOI: 10.1101/530352
Abstract: Attention is a fundamental brain process by which we selectively prioritize relevant information in our environment. Cognitively, we can employ different methods for selecting visual information for further processing, but the extent to which these are implemented by similar or different neural processes remains unclear. Spatial and feature-selective attention both change the stimulus related information signaled by single-cells and neural populations, but relatively few studies have directly compared the effects of these distinct types of attention. We scanned participants (n=20) using MEG, while they covertly attended to an object on the left or the right of fixation (spatial attention manipulation) and reported the object’s shape or color (feature-selective attention manipulation). We used multivariate pattern classification to measure population stimulus-coding in occipital and frontal areas, for attended and non-attended stimulus features, at attended and non-attended locations. In occipital cortex, we show that both spatial and feature-selective attention enhanced object representations, and the effects of these two attention types interacted multiplicatively. We also found that spatial and feature-selective attention induced qualitatively different patterns of enhancement in occipital cortex for the encoding of stimulus color. Specifically, feature-based attention primarily enhanced small color differences, while spatial attention produced greater enhancement for larger differences. We demonstrate that principles of response-gain and tuning curve sharpening that have been applied to describe the effects of attention at the level of a single neuron can account for these differences. An information flow analysis suggested that these attentional effects may be driven by feedback from frontal areas.
Publisher: Elsevier BV
Date: 11-2020
Publisher: American Physiological Society
Date: 03-2017
Abstract: The human ventral visual pathway is implicated in higher order form processing, but the organizational principles within this region are not yet well understood. Recently, Lafer-Sousa, Conway, and Kanwisher (J Neurosci 36: 1682–1697, 2016) used functional magnetic resonance imaging to demonstrate that functional responses in the human ventral visual pathway share a broad homology with those in macaque inferior temporal cortex, providing new evidence supporting the validity of the macaque as a model of the human visual system in this region. In addition, these results give new clues for understanding the organizational principles within the ventral visual pathway and the processing of higher order color and form, suggesting new avenues for research into this cortical region.
Publisher: Open Science Framework
Date: 2021
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 16-03-2010
DOI: 10.1167/5.8.211
Publisher: Cold Spring Harbor Laboratory
Date: 26-05-2021
DOI: 10.1101/2021.05.25.445712
Abstract: Every day, we respond to the dynamic world around us by flexibly choosing actions to meet our goals. This constant problem solving, in familiar settings and in novel tasks, is a defining feature of human behaviour. Flexible neural populations are thought to support this process by adapting to prioritise task-relevant information, driving coding in specialised brain regions toward stimuli and actions that are important for our goal. Accordingly, human fMRI shows that activity patterns in frontoparietal cortex contain more information about visual features when they are task-relevant. However, if this preferential coding drives momentary focus, for example to solve each part of a task, it must reconfigure more quickly than we can observe with fMRI. Here we used MVPA with MEG to test for rapid reconfiguration of stimulus information when a new feature becomes relevant within a trial. Participants saw two displays on each trial. They attended to the shape of a first target then the colour of a second, or vice versa, and reported the attended features at a choice display. We found evidence of preferential coding for the relevant features in both trial phases, even as participants shifted attention mid-trial, commensurate with fast sub-trial reconfiguration. However, we only found this pattern of results when the task was difficult, and the stimulus displays contained multiple objects, and not in a simpler task with the same structure. The data suggest that adaptive coding in humans can operate on a fast, sub-trial timescale, suitable for supporting periods of momentary focus when complex tasks are broken down into simpler ones, but may not always do so.
Publisher: Elsevier BV
Date: 07-2020
Publisher: MDPI AG
Date: 08-01-2019
Abstract: Interocular suppression plays an important role in the visual deficits experienced by individuals with amblyopia. Most neurophysiological and functional MRI studies of suppression in amblyopia have used dichoptic stimuli that overlap within the visual field. However, suppression of the amblyopic eye also occurs when the dichoptic stimuli do not overlap, a phenomenon we refer to as long-range suppression. We used functional MRI to test the hypothesis that long-range suppression reduces neural activity in V1, V2 and V3 in adults with amblyopia, indicative of an early, active inhibition mechanism. Five adults with amblyopia and five controls viewed monocular and dichoptic quadrant stimuli during fMRI. Three of five participants with amblyopia experienced complete perceptual suppression of the quadrants presented to their amblyopic eye under dichoptic viewing. The blood oxygen level dependent (BOLD) responses within retinotopic regions corresponding to amblyopic and fellow eye stimuli were analyzed for response magnitude, time to peak, effective connectivity and stimulus classification. Dichoptic viewing slightly reduced the BOLD response magnitude in amblyopic eye retinotopic regions in V1 and reduced the time to peak response; however, the same effects were also present in the non-dominant eye of controls. Effective connectivity was unaffected by suppression, and the results of a classification analysis did not differ significantly between the control and amblyopia groups. Overall, we did not observe a neural signature of long-range amblyopic eye suppression in V1, V2 or V3 using functional MRI in this initial study. This type of suppression may involve higher level processing areas within the brain.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 27-03-2019
DOI: 10.1167/19.3.11
Abstract: Although visual areas hMT+ and hV4 are considered to have segregated functions for the processing of motion and form within dorsal and ventral streams, respectively, more recent evidence favors some functional overlap. Here we use fMRI-guided online repetitive transcranial magnetic stimulation (rTMS) to test two associated hypotheses: that area hV4 is causally involved in the perception of motion and hMT+ in the perception of static form. We use variations of a common global stimulus to test two dynamic motion-based tasks and two static form-based tasks in ipsilateral and contralateral visual fields. We find that rTMS to both hMT+ and hV4 significantly impairs direction discrimination and causes a perceptual slowing of motion, implicating hV4 in motion perception. Stimulation of hMT+ impairs motion in both visual fields, implying that disruption to one hMT+ disrupts the other with both needed for optimal performance. For the second hypothesis, we find the novel result that hV4 stimulation markedly reduces perceived contrast of a static stimulus. hMT+ stimulation also produces an effect, implicating it in static contrast perception. Our findings are the first to show that rTMS of hV4 can produce a large perceptual effect and, taken together, suggest a less rigid functional segregation between hMT+ and hV4 than previously thought.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 27-12-2010
DOI: 10.1167/10.9.17
Abstract: "Color constancy" refers to our ability to recognize the color of a surface despite changes in illumination. A range of cues and mechanisms, from receptoral adaptation to higher order cognitive cues, is thought to contribute to our color constancy ability. Here we used psychophysical adaptation to probe for an adaptable representation of surface color. We used stimuli that were matched for cone contrast when averaged over time but were consistent with either a constant scene under changing illumination or a changing scene. The color opponent aftereffect during adaptation to the constant scene was greater than that induced by the changing scene stimulus. Since the stimuli were matched for the responses they would elicit in receptoral mechanisms, the increased aftereffect in the constant scene condition cannot be wholly attributed to adaptation of receptors and neural mechanisms responsive to raw quantal catch. We interpret our result as most parsimoniously explained by the existence of adaptable mechanisms responsive to surface color, most likely located in early visual cortex.
Publisher: Cold Spring Harbor Laboratory
Date: 24-05-2019
DOI: 10.1101/648998
Abstract: Neuroimaging studies investigating human object recognition have largely focused on a relatively small number of object categories, in particular, faces, bodies, scenes, and vehicles. More recent studies have taken a broader focus, investigating hypothesised dichotomies, for example animate versus inanimate, and continuous feature dimensions, such as biological similarity. These studies typically have used stimuli that are clearly identified as animate or inanimate, neglecting objects that may not fit into this dichotomy. We generated a novel stimulus set including standard objects and objects that blur the animate-inanimate dichotomy, for example robots and toy animals. We used MEG time-series decoding to study the brain’s emerging representation of these objects. Our analysis examined contemporary models of object coding such as dichotomous animacy, as well as several new higher order models that take into account an object’s capacity for agency (i.e. its ability to move voluntarily) and capacity to experience the world. We show that early brain responses are best accounted for by low-level visual similarity of the objects and shortly thereafter, higher order models of agency/experience best explained the brain’s representation of the stimuli. Strikingly, a model of human-similarity provided the best account for the brain’s representation after an initial perceptual processing phase. Our findings provide evidence for a new dimension of object coding in the human brain – one that has a “human-centric” focus.
Publisher: Open Science Framework
Date: 2021
Publisher: Open Science Framework
Date: 2021
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 02-07-2019
DOI: 10.1167/19.8.9
Publisher: Open Science Framework
Date: 2021
Publisher: Elsevier BV
Date: 06-2008
DOI: 10.1016/J.VISRES.2008.02.023
Abstract: Using the simultaneous tilt illusion [Gibson, J., & Radner, M. (1937). Adaptation, after-effect and contrast in the perception of tilted lines. Journal of Experimental Psychology, 12, 453-467], we investigate the perception of orientation in natural images and textures with similar statistical properties. We show that the illusion increases if observers judge the average orientation of textures rather than sinusoidal gratings. Furthermore, the illusion can be induced by surrounding textures with a broad range of orientations, even those without a clearly perceivable orientation. A robust illusion is induced by natural images, and is increased by randomising the phase spectra of those images. We present a simple model of orientation processing that can accommodate most of our observations.
Publisher: Elsevier BV
Date: 11-2019
DOI: 10.1016/J.NEUROIMAGE.2019.116032
Abstract: fMRI-adaptation is a valuable tool for inferring the selectivity of neural responses. Here we use it in human color vision to test the selectivity of responses to S-cone opponent (blue-yellow), L/M-cone opponent (red-green), and achromatic (Ach) contrast across nine regions of interest in visual cortex. We measure psychophysical adaptation, using comparable stimuli to the fMRI-adaptation, and find significant selective adaptation for all three stimulus types, implying separable visual responses to each. For fMRI-adaptation, we find robust adaptation but, surprisingly, much less selectivity due to high levels of cross-stimulus adaptation in all conditions. For all BY and Ach test/adaptor pairs, selectivity is absent across all ROIs. For RG/Ach stimulus pairs, this paradigm has previously shown selectivity for RG in ventral areas and for Ach in dorsal areas. For chromatic stimulus pairs (RG/BY), we find a trend for selectivity in ventral areas. In conclusion, we find an overall lack of correspondence between BOLD and behavioral adaptation suggesting they reflect different aspects of the underlying neural processes. For example, raised cross-stimulus adaptation in fMRI may reflect adaptation of the broadly-tuned normalization pool. Finally, we also identify a longer-timescale adaptation (1h) in both BOLD and behavioral data. This is greater for chromatic than achromatic contrast. The longer-timescale BOLD effect was more evident in the higher ventral areas than in V1, consistent with increasing windows of temporal integration for higher-order areas.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2018
DOI: 10.1167/18.10.362
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-09-2019
DOI: 10.1167/19.10.69
Publisher: SAGE Publications
Date: 09-2019
Publisher: Frontiers Media SA
Date: 2013
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 05-04-2011
DOI: 10.1167/11.4.3
Abstract: The retinotopic organization, position, and functional responsiveness of some early visual cortical areas in human and non-human primates are consistent with their being homologous structures. The organization of other areas remains controversial. A critical debate concerns the potential human homologue of macaque area V4, an area very responsive to colored images: specifically, whether human V4 is divided between ventral and dorsal components, as in the macaque, or whether human V4 is confined to one ventral area. We used fMRI to define these areas retinotopically in human and to test the impact of image color on their responsivity. We found a robust preference for full-color movie segments over a luminance-matched achromatic version in ventral V4 but little or no preference in the vicinity of the putative dorsal counterpart. Contrary to previous reports that visual field coverage in the ventral part of V4 is deficient without the dorsal part, we found that coverage in ventral V4 extended to the lower vertical meridian, including the entire contralateral hemifield. Together these results provide evidence against a dorsal component of human V4. Instead, they are consistent with human V4 being a single, ventral region that is sensitive to the chromatic components of images.
Publisher: SAGE Publications
Date: 22-02-2016
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2016
DOI: 10.1167/16.12.252
Publisher: MIT Press - Journals
Date: 31-03-2022
DOI: 10.1162/JOCN_A_01832
Abstract: Every day, we respond to the dynamic world around us by choosing actions to meet our goals. Flexible neural populations are thought to support this process by adapting to prioritize task-relevant information, driving coding in specialized brain regions toward stimuli and actions that are currently most important. Accordingly, human fMRI shows that activity patterns in frontoparietal cortex contain more information about visual features when they are task-relevant. However, if this preferential coding drives momentary focus, for example, to solve each part of a task in turn, it must reconfigure more quickly than we can observe with fMRI. Here, we used multivariate pattern analysis of magnetoencephalography data to test for rapid reconfiguration of stimulus information when a new feature becomes relevant within a trial. Participants saw two displays on each trial. They attended to the shape of a first target then the color of a second, or vice versa, and reported the attended features at a choice display. We found evidence of preferential coding for the relevant features in both trial phases, even as participants shifted attention mid-trial, commensurate with fast subtrial reconfiguration. However, we only found this pattern of results when the stimulus displays contained multiple objects and not in a simpler task with the same structure. The data suggest that adaptive coding in humans can operate on a fast, subtrial timescale, suitable for supporting periods of momentary focus when complex tasks are broken down into simpler ones, but may not always do so.
Publisher: SAGE Publications
Date: 08-2010
Publisher: MIT Press - Journals
Date: 05-01-2022
DOI: 10.1162/JOCN_A_01796
Abstract: Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.
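The Granger-causal logic in this abstract's information-flow analysis — frontal coding earlier in time predicting later occipital coding — can be sketched with two time series: does the past of signal A improve prediction of signal B beyond B's own past? The series below are synthetic, with a built-in frontal-to-occipital lag; the bare-bones lagged-regression F statistic here is a simplification of the analysis applied to decoding time courses in the study.

```python
# Hedged sketch of a Granger causality test via lagged least squares.
# Both signals are simulated; coefficients and lags are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, lag = 500, 2
frontal = rng.normal(size=n)
occipital = np.zeros(n)
for t in range(lag, n):
    # occipital depends on its own past plus lagged frontal input
    occipital[t] = (0.5 * occipital[t - 1]
                    + 0.8 * frontal[t - lag]
                    + 0.1 * rng.normal())

def granger_stat(x, y, p):
    """F-like statistic: do p lags of x help predict y beyond y's own lags?"""
    Y = y[p:]
    own = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:-k] for k in range(1, p + 1)])
    rss_own = np.sum((Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]) ** 2)
    rss_full = np.sum((Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]) ** 2)
    dfd = len(Y) - full.shape[1]
    return ((rss_own - rss_full) / p) / (rss_full / dfd)

forward = granger_stat(frontal, occipital, p=2)   # frontal -> occipital
backward = granger_stat(occipital, frontal, p=2)  # occipital -> frontal
# `forward` dwarfs `backward`, recovering the built-in direction of flow.
```

In practice the statistic would be compared against an F distribution or a permutation null, and the inputs would be time-resolved decoding accuracies or source activity rather than raw simulated signals.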
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 29-10-2010
DOI: 10.1167/10.12.34
Abstract: We used functional magnetic resonance imaging (fMRI) at 3T in human participants to trace the chromatic selectivity of orientation processing through functionally defined regions of visual cortex. Our aim was to identify mechanisms that respond to chromatically defined orientation and to establish whether they are tuned specifically to color or operate in an essentially cue-invariant manner. Using an annular test region surrounded inside and out by an inducing stimulus, we found evidence of sensitivity to orientation defined by red-green (L-M) or blue-yellow (S-cone isolating) chromatic modulations across retinotopic visual cortex and of joint selectivity for color and orientation. The likely mechanisms underlying this selectivity are discussed in terms of orientation-specific lateral interactions and spatial summation within the receptive field.
Start Date: 2022
End Date: 2025
Funder: Australian Research Council
Start Date: 2020
End Date: 2022
Funder: Australian Research Council
Start Date: 08-2022
End Date: 07-2026
Amount: $554,463.00
Funder: Australian Research Council
Start Date: 04-2020
End Date: 05-2024
Amount: $426,979.00
Funder: Australian Research Council