ORCID Profile
0000-0002-6501-2358
Current Organisation
Monash University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Artificial Intelligence and Image Processing not elsewhere classified | Central Nervous System | Sensory Systems | Neurosciences
Expanding Knowledge in the Information and Computing Sciences | Expanding Knowledge in the Biological Sciences | Expanding Knowledge in Psychology and Cognitive Sciences
Publisher: Cold Spring Harbor Laboratory
Date: 10-03-2017
DOI: 10.1101/112037
Abstract: Perception is produced by ‘reading out’ the representation of a sensory stimulus contained in the firing rates of a population of neurons. To examine experimentally how populations code information, a common approach is to decode a linearly-weighted sum of the neurons’ firing rates. This approach is popular because of its biological validity: weights in a computational decoder are analogous to synaptic strengths. For neurons recorded in vivo, weights are highly variable when derived through machine learning methods, but it is unclear what neuronal properties explain this variability, and how the variability affects decoding performance. To address this, we recorded from neurons in the middle temporal area (MT) of anesthetized marmosets ( Callithrix jacchus ) viewing stimuli comprising a sheet of dots that moved coherently in one of twelve different directions. We found that high gain and direction selectivity both predicted that a neuron would be weighted more highly in an optimised decoding model. Although learned weights differed markedly from weights chosen according to a priori rules based on a neuron’s tuning profile, decoding performance was only marginally better for the learned weights. In the models with a priori rules, selectivity is the best predictor of weighting, and defining weights according to a neuron’s preferred direction and selectivity improves decoding performance to very near the maximum level possible, as defined by the learned weights. We examined which aspects of a neuron’s tuning account for its contribution to sensory coding. Strongly direction-selective neurons were weighted most highly by machine learning algorithms trained to discriminate motion direction. Models with a priori defined decoding weights demonstrate that the learned weighting scheme causally improved direction representation by a neuronal population. 
Optimising decoders (using machine learning) leads to only marginally better performance than decoders based purely on a neuron’s preferred direction and selectivity.
Publisher: Cold Spring Harbor Laboratory
Date: 18-10-2022
DOI: 10.1101/2022.10.16.511648
Abstract: Visual field maps in human early extrastriate areas (V2 and V3) are traditionally thought to form mirror-image representations which surround the primary visual cortex (V1). According to this scheme, V2 and V3 form nearly symmetrical halves with respect to the calcarine sulcus, with the dorsal halves representing lower contralateral quadrants, and the ventral halves representing upper contralateral quadrants. This arrangement is considered to be consistent across individuals, and thus predictable with reasonable accuracy using templates. However, data that deviate from this expected pattern have been observed, but mainly treated as artifactual. Here we systematically investigate individual variability in the visual field maps of human early visual cortex using the 7T Human Connectome Project (HCP) retinotopy dataset. Our results demonstrate substantial and principled inter-individual variability. Visual field representation in the dorsal portions of V2 and V3 was more variable than in their ventral counterparts, including substantial departures from the expected mirror-symmetrical patterns. In addition, left hemisphere retinotopic maps were more variable than those in the right hemisphere. Surprisingly, only one-third of individuals had maps that conformed to the expected pattern in the left hemisphere. Visual field sign analysis further revealed that in many individuals the area conventionally identified as dorsal V3 shows a discontinuity in the mirror-image representation of the retina, associated with a Y-shaped lower vertical representation. Our findings challenge the current view that inter-individual variability in early extrastriate cortex is negligible, and that the dorsal portions of V2 and V3 are roughly mirror images of their ventral counterparts.
Publisher: eLife Sciences Publications, Ltd
Date: 15-08-2023
DOI: 10.7554/ELIFE.86439
Abstract: Visual field maps in human early extrastriate areas (V2 and V3) are traditionally thought to form mirror-image representations which surround the primary visual cortex (V1). According to this scheme, V2 and V3 form nearly symmetrical halves with respect to the calcarine sulcus, with the dorsal halves representing lower contralateral quadrants, and the ventral halves representing upper contralateral quadrants. This arrangement is considered to be consistent across individuals, and thus predictable with reasonable accuracy using templates. However, data that deviate from this expected pattern have been observed, but mainly treated as artifactual. Here, we systematically investigate individual variability in the visual field maps of human early visual cortex using the 7T Human Connectome Project (HCP) retinotopy dataset. Our results demonstrate substantial and principled inter-individual variability. Visual field representation in the dorsal portions of V2 and V3 was more variable than in their ventral counterparts, including substantial departures from the expected mirror-symmetrical patterns. In addition, left hemisphere retinotopic maps were more variable than those in the right hemisphere. Surprisingly, only one-third of individuals had maps that conformed to the expected pattern in the left hemisphere. Visual field sign analysis further revealed that in many individuals the area conventionally identified as dorsal V3 shows a discontinuity in the mirror-image representation of the retina, associated with a Y-shaped lower vertical representation. Our findings challenge the current view that inter-individual variability in early extrastriate cortex is negligible, and that the dorsal portions of V2 and V3 are roughly mirror images of their ventral counterparts.
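The visual field sign analysis mentioned in the abstract has a standard formulation: the sign of the cross product of the cortical gradients of the polar-angle and eccentricity maps distinguishes mirror-image from non-mirror-image representations of the visual field. A minimal sketch on a regular grid follows; the finite-difference sampling and the sign convention are assumptions for illustration, not the authors' pipeline.

```python
def field_sign(polar, ecc, i, j):
    """Visual field sign at grid point (i, j) from polar-angle and
    eccentricity maps sampled on a regular cortical grid.

    The sign of the cross product of the two map gradients indicates
    whether the local representation is a mirror image of the visual
    field (one sign) or a non-mirror image (the other sign).
    """
    # Central-difference gradients of each map.
    dp_dx = (polar[i][j + 1] - polar[i][j - 1]) / 2.0
    dp_dy = (polar[i + 1][j] - polar[i - 1][j]) / 2.0
    de_dx = (ecc[i][j + 1] - ecc[i][j - 1]) / 2.0
    de_dy = (ecc[i + 1][j] - ecc[i - 1][j]) / 2.0
    # z-component of the cross product of the two gradient vectors.
    cross = de_dx * dp_dy - de_dy * dp_dx
    return 1 if cross > 0 else -1

# Toy maps: polar angle grows "up" the grid, eccentricity "across".
polar = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
ecc = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(field_sign(polar, ecc, 1, 1))   # one field sign
mirrored = [[-v for v in row] for row in polar]
print(field_sign(mirrored, ecc, 1, 1))  # the opposite sign
```

A discontinuity like the Y-shaped representation described above would appear as adjacent patches of opposite sign within what templates treat as a single area.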
Publisher: Cold Spring Harbor Laboratory
Date: 08-02-2022
DOI: 10.1101/2022.02.05.479227
Abstract: Temporal information is ubiquitous in natural vision and must be represented accurately in the brain to allow us to interact with a constantly changing world. Recent studies have employed a random stimulation paradigm to map the temporal response function (TRF) to luminance changes in the human EEG. This approach has revealed that the visual system, when presented with broadband visual input, actively selects distinct temporal frequencies, and retains their phase-information for prolonged periods of time. This non-linear response likely originates in primary visual cortex (V1), yet, so far it has not been investigated on a neural level. Here, we characterize the steady-state response to random broadband visual flicker in marmoset V1. In two experiments, we recorded from i) marmosets passively stimulated under general anesthesia, and ii) awake marmosets, under free viewing conditions. Our results show that LFP coupling to the stimulus was broadband and unselective under anesthesia, whereas in awake animals, it was restricted to two distinct frequency components, in the alpha and beta range. Within these frequency bands, coupling adhered to the receptive field (RF) boundaries of the local populations. The responses outside the RF did not provide evidence for a propagation of stimulus information across the cortex, contrary to results in human EEG studies. This result may be explained by short fixation durations, warranting further investigation. In summary, our findings show that during awake behavior V1 neural responses to broadband information are selective for distinct frequency bands, and that this selectivity is likely controlled actively by top-down mechanisms.
Publisher: Elsevier BV
Date: 10-2013
DOI: 10.1016/J.VISRES.2013.07.018
Abstract: Texture boundary segmentation is typically thought to reflect a comparison of differences in Fourier energy (i.e. low-order texture statistics) on either side of a boundary. However in a previous study (Arsenault, Yoonessi, & Baker, 2011) we showed that the distribution of energy within a natural texture (i.e. its higher-order statistical structure) also influences segmentation of contrast boundaries. Here we examine the influence of specific higher-order texture statistics on segmentation of contrast- and orientation-defined boundaries. Using naturalistic synthetic textures to manipulate the sparseness, global phase structure, and local phase alignments of carrier textures, we measure segmentation thresholds based on forced-choice judgments of boundary orientation. We find a similar pattern of results for both contrast and orientation boundaries: (1) randomizing all structure by globally phase scrambling the texture reduces segmentation thresholds substantially, (2) decreasing sparseness also reduces thresholds, and (3) removing local phase alignments has little or no effect on segmentation thresholds. We show that a two-stage filter model with an intermediate compressive nonlinearity and expansive output nonlinearity can account for these data using synthetic textures. Furthermore, the model parameter fits obtained using synthetic textures also predict the segmentation thresholds presented in Arsenault, Yoonessi, and Baker (2011) for natural and phase-scrambled natural texture carriers.
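The two-stage filter model with a compressive intermediate nonlinearity and an expansive output nonlinearity can be caricatured in one dimension. The sketch below is illustrative only, not the fitted model from the paper: a local difference filter stands in for the first-stage filters, a box filter for the second-stage filter, and the exponents are assumed values.

```python
def frf_response(signal, p_compress=0.5, q_expand=2.0, pool=8):
    """One-dimensional sketch of a two-stage (filter-rectify-filter) model.

    Stage 1: a local difference filter, then a compressive rectifying
    nonlinearity |x|**p with p < 1.
    Stage 2: coarse-scale box-filter pooling, then an expansive output
    nonlinearity x**q with q > 1.
    """
    # Stage 1: local difference filter + compressive rectification.
    stage1 = [abs(signal[k + 1] - signal[k]) ** p_compress
              for k in range(len(signal) - 1)]
    # Stage 2: coarse pooling + expansive output nonlinearity.
    out = []
    for k in range(0, len(stage1) - pool + 1, pool):
        pooled = sum(stage1[k:k + pool]) / pool
        out.append(pooled ** q_expand)
    return out

# Two toy "textures" with equal Fourier energy in their transitions:
# a dense one (many small jumps) and a sparse one (one large jump).
dense = [0, 1, 0, 1, 0, 1, 0, 1, 0]
sparse = [0.0] * 4 + [8 ** 0.5] * 5
print(frf_response(dense)[0], frf_response(sparse)[0])
```

The compressive intermediate stage makes the dense texture drive the second stage harder than the energy-matched sparse one, which is the kind of mechanism the paper invokes for the effect of sparseness on segmentation thresholds.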
Publisher: Elsevier
Date: 2017
Publisher: Oxford University Press (OUP)
Date: 28-12-2019
Abstract: Sensory perception depends on neuronal populations creating an accurate representation of the external world. The amount of information that a population can represent depends on the tuning of individual neurons and the trial-by-trial variability shared among neurons. Although on average, pairwise spike-count correlations between neurons are positive, the distribution is wide, and the relationship between correlations and encoding is not straightforward. Here, we examine how single-neuron and population-level factors impact the efficacy of the neural code. We recorded responses to moving visual stimuli from motion-sensitive neurons in the middle temporal area of anesthetized marmosets (Callithrix jacchus) and trained decoders to assess how correlated and uncorrelated populations encoded stimulus motion direction. We found that the most responsive, direction-selective, and least variable neurons are the most relied-upon neurons in an uncorrelated population. In correlated populations, the same neurons do the most to shape the shared variability across the population in a way that facilitates decoding, and decoding is improved by the presence of temporally stable correlations. This suggests that the least variable neurons with the strongest stimulus representations enhance the population code by providing a strong signal and shaping correlations in variability orthogonally to the locus defined by the mean response.
Publisher: Elsevier
Date: 2017
Publisher: American Physiological Society
Date: 05-2019
Abstract: Perception is produced by “reading out” the representation of a sensory stimulus contained in the activity of a population of neurons. To examine experimentally how populations code information, a common approach is to decode a linearly weighted sum of the neurons’ spike counts. This approach is popular because of the biological plausibility of weighted, nonlinear integration. For neurons recorded in vivo, weights are highly variable when derived through optimization methods, but it is unclear how the variability affects decoding performance in practice. To address this, we recorded from neurons in the middle temporal area (MT) of anesthetized marmosets ( Callithrix jacchus) viewing stimuli comprising a sheet of dots that moved coherently in 1 of 12 different directions. We found that high peak response and direction selectivity both predicted that a neuron would be weighted more highly in an optimized decoding model. Although learned weights differed markedly from weights chosen according to a priori rules based on a neuron’s tuning profile, decoding performance was only marginally better for the learned weights. In the models with a priori rules, selectivity is the best predictor of weighting, and defining weights according to a neuron’s preferred direction and selectivity improves decoding performance to very near the maximum level possible, as defined by the learned weights. NEW & NOTEWORTHY We examined which aspects of a neuron’s tuning account for its contribution to sensory coding. Strongly direction-selective neurons are weighted most highly by optimal decoders trained to discriminate motion direction. Models with predefined decoding weights demonstrate that this weighting scheme causally improved direction representation by a neuronal population. 
Optimizing decoders (using a generalized linear model or Fisher’s linear discriminant) led to only marginally better performance than decoders based purely on a neuron’s preferred direction and selectivity.
Publisher: Wiley
Date: 21-02-2010
Publisher: Society for Neuroscience
Date: 20-04-2016
DOI: 10.1523/JNEUROSCI.4563-15.2016
Abstract: Each visual experience changes the neural response to subsequent stimuli. If the brain is unable to incorporate these encoding changes, the decoding, or perception, of subsequent stimuli is biased. Although the phenomenon of adaptation pervades the nervous system, its effects have been studied mainly in isolation, based on neuronal encoding changes induced by an isolated, prolonged stimulus. To understand how adaptation-induced biases arise and persist under continuous, naturalistic stimulation, we simultaneously recorded the responses of up to 61 neurons in the marmoset (Callithrix jacchus) middle temporal area to a sequence of directions that changed every 500 ms. We found that direction-specific adaptation following only 0.5 s of stimulation strongly affected encoding for up to 2 s by reducing both the gain and the spike count correlations between pairs of neurons with preferred directions close to the adapting direction. In addition, smaller changes in bandwidth and preferred direction were observed in some animals. Decoding individual trials of adaptation-affected activity in simultaneously recorded neurons predicted repulsive biases that are consistent with the direction aftereffect. Surprisingly, removing spike count correlations by trial shuffling did not impact decoding performance or bias. When adaptation had the largest effect on encoding, the decoder made the most errors. This suggests that neural and perceptual repulsion is not a mechanism to enhance perceptual performance but is instead a necessary consequence of optimizing neural encoding for the identification of a wide range of stimulus properties in diverse temporal contexts. SIGNIFICANCE STATEMENT Although perception depends upon decoding the pattern of activity across a neuronal population, the encoding properties of individual neurons are unreliable: a single neuron's response to repetitions of the same stimulus is variable, and depends on both its spatial and temporal context.
In this manuscript, we describe the complete cascade of adaptation-induced effects in sensory encoding and show how they predict population decoding errors consistent with perceptual biases. We measure the time course of adaptation-induced changes to the response properties of neurons in isolation, and to the correlation structure across pairs of simultaneously recorded neurons. These results provide novel insight into how and for how long adaptation affects the neural code, particularly during continuous, naturalistic vision.
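The trial-shuffling control mentioned above (removing spike count correlations) is commonly implemented by independently permuting each neuron's trial order within a stimulus condition, which preserves every neuron's marginal response distribution while destroying trial-by-trial co-fluctuations. A minimal sketch, with illustrative names:

```python
import random

def shuffle_correlations(trials, seed=0):
    """Destroy noise correlations by independently permuting, for each
    neuron, the order of its responses across repeated trials of the
    same stimulus.

    `trials` is a list of trials; each trial is a list of spike counts,
    one per neuron, all recorded under the same stimulus condition.
    Each neuron's spike-count distribution is unchanged, but trial-by-
    trial co-fluctuations between neurons are removed.
    """
    rng = random.Random(seed)
    n_trials = len(trials)
    n_neurons = len(trials[0])
    shuffled = [[0] * n_neurons for _ in range(n_trials)]
    for neuron in range(n_neurons):
        order = list(range(n_trials))
        rng.shuffle(order)  # a fresh permutation per neuron
        for t, src in enumerate(order):
            shuffled[t][neuron] = trials[src][neuron]
    return shuffled
```

Decoding the shuffled trials and comparing performance against the original trials is the comparison the abstract reports as showing no impact of removing correlations.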
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 07-09-2011
DOI: 10.1167/11.10.1
Abstract: It was recently shown that expert face perception relies on the extraction of horizontally oriented visual cues. Picture-plane inversion was found to eliminate horizontal tuning, suggesting that this tuning contributes to the specificity of face processing. The present experiments sought to determine the spatial frequency (SF) scales supporting the horizontal tuning of face perception. Participants were instructed to match upright and inverted faces that were filtered both in the frequency and orientation domains. Faces in a pair contained horizontal or vertical ranges of information in low, middle, or high SF (LSF, MSF, or HSF). Our findings confirm that upright (but not inverted) face perception is tuned to horizontal orientation. Horizontal tuning was the most robust in the MSF range, next in the HSF range, and absent in the LSF range. Moreover, face inversion selectively disrupted the ability to process horizontal information in MSF and HSF ranges. This finding was replicated even when task difficulty was equated across orientation and SF at upright orientation. Our findings suggest that upright face perception is tuned to horizontally oriented face information carried by intermediate and high SF bands. They further indicate that inversion alters the sampling of face information both in the orientation and SF domains.
Publisher: eLife Sciences Publications, Ltd
Date: 04-07-2023
Publisher: Springer Science and Business Media LLC
Date: 26-02-2019
DOI: 10.1038/S41467-019-08894-8
Abstract: Sensory systems face a barrage of stimulation that continually changes along multiple dimensions. These simultaneous changes create a formidable problem for the nervous system, as neurons must dynamically encode each stimulus dimension, despite changes in other dimensions. Here, we measured how neurons in visual cortex encode orientation following changes in luminance and contrast, which are critical for visual processing, but nuisance variables in the context of orientation coding. Using information theoretic analysis and population decoding approaches, we find that orientation discriminability is luminance and contrast dependent, changing over time due to firing rate adaptation. We also show that orientation discrimination in human observers changes during adaptation, in a manner consistent with the neuronal data. Our results suggest that adaptation does not maintain information rates per se, but instead acts to keep sensory systems operating within the limited dynamic range afforded by spiking activity, despite a wide range of possible inputs.
Publisher: American Association for the Advancement of Science (AAAS)
Date: 30-10-2020
Abstract: Twisted visual maps emerge from a model that balances topographical continuity within and between areas.
Publisher: Cold Spring Harbor Laboratory
Date: 26-06-2019
DOI: 10.1101/682187
Abstract: Adjacent neurons in visual cortex have overlapping receptive fields within and across area boundaries, an arrangement which is theorized to minimize wiring cost. This constraint is thought to create retinotopic maps of opposing field sign (mirror and non-mirror representations of the visual field) in adjacent visual areas, a concept which has become central in current attempts to subdivide the cortex. We modelled a realistic developmental scenario in which adjacent areas do not mature simultaneously, but need to maintain topographic continuity across their borders. This showed that the same mechanism that is hypothesized to maintain topographic continuity within each area can lead to a more complex type of retinotopic map, consisting of sectors with opposing field sign within the same area. Using fully quantitative electrode array recordings, we then demonstrate that this type of map exists in the primate extrastriate cortex.
Publisher: Figshare
Date: 2018
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 2014
DOI: 10.1167/14.4.14
Abstract: Lower order image statistics, which can be described by an image's Fourier energy content, enable segmentation when they are different on either side of a boundary. We have previously demonstrated that the spatial distribution of the energy in an image (described by its higher order statistics or structure) could influence segmentation thresholds for contrast- and orientation-defined boundaries, even though it was the same on either side of the boundary and thus task irrelevant (Zavitz & Baker, 2013). Here we examined whether higher order statistics can also enable segmentation when boundaries are defined by differences in structure or density of texture elements. We used micropattern-based naturalistic synthetic textures to manipulate the sparseness, global phase alignment, and local phase alignment of carrier textures and measured segmentation thresholds based on forced-choice judgments of boundary orientation. We found that both global phase structure and sparseness, but not local phase alignment, enable segmentation and that local structure also has a small effect on segmentation thresholds in both cases. Simulations of a two-stage filter model with a compressive intermediate nonlinearity can reproduce the major features of the experimental data, segmenting boundaries defined by higher order statistics alone while capturing the influence of global image structure on segmentation thresholds.
Publisher: Frontiers Media SA
Date: 09-01-2019
Publisher: American Physiological Society
Date: 2021
Abstract: Behavior and cognition in humans and other primates rely on networks of brain areas guided by the frontal cortex. The marmoset offers exciting new opportunities to study links between brain physiology and behavior, but the functions of frontal cortex areas are still being identified in this species. Here, we provide the first evidence of visual receptive fields in the marmoset dorsolateral frontal cortex, an important step toward future studies of visual cognitive behavior.
Start Date: 11-2021
End Date: 12-2024
Amount: $492,586.00
Funder: Australian Research Council