ORCID Profile
0000-0002-8499-8394
Current Organisation
University of Melbourne
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Sensory Systems | Psychology | Neural, Evolutionary and Fuzzy Computation | Philosophy of Mind (excl. Cognition) | Decision Making | Artificial Intelligence and Image Processing
Expanding Knowledge in Psychology and Cognitive Sciences | Expanding Knowledge in Engineering | Nervous System and Disorders | Expanding Knowledge in the Biological Sciences
Publisher: Springer Science and Business Media LLC
Date: 29-12-2009
Publisher: Elsevier BV
Date: 03-2008
DOI: 10.1016/J.VISRES.2007.12.019
Abstract: An object moving in discrete steps can appear to move continuously even along sections of the path in which no stimulus is presented. We investigated whether the internal representation of such an object is constructed by extrapolation, along the expected trajectory of the object, or by interpolation, after the subsequent reappearance of the object. Observers viewed two discs moving in an unambiguous apparent motion display, which either occasionally reversed direction or continued moving along the predicted path. Observers carried out a speeded 2AFC task on probes presented between the possible disc locations. In the continuous condition, observers' reaction times to detect and identify a probe were longer when it occurred ahead of the disc than when it occurred elsewhere on the motion path. Conversely, when the disc reversed direction, significantly less interference was observed ahead of the disc (along the predicted motion path), and significantly more interference was observed behind the disc (along the updated motion path). We conclude that the representation of a moving object in an apparent motion display is constructed by interpolation as well as extrapolation. We demonstrate that this representation is maintained and updated even outside the locus of focused attention, and that it is possible to dissociate the contributions of interpolation and extrapolation mechanisms to an object's representation.
Publisher: Elsevier BV
Date: 09-2017
DOI: 10.1016/J.VISRES.2017.06.001
Abstract: Previous research has shown that when a moving stimulus is presented to a moving observer, the perceived speed of the stimulus is affected by vestibular self-motion signals (Hogendoorn, Verstraten, MacDougall, & Alais, 2017, Vision Research 130, 22-30). This interaction was interpreted as a weighted sum of visual and vestibular motion signals. This interpretation also predicts effects of vestibular self-motion signals on perceived speed. Here, we test this prediction in two experiments. In Experiment 1, moving observers carried out a visual speed discrimination task in order to establish points of subjective equality (PSE) between stimuli presented in the same or opposite direction of self-motion. We observed robust effects of self-motion on perceived speed, with self-motion in the same direction as visual motion resulting in increases in perceived speed and vice versa. These effects were well-described by a limited-width integration window. In Experiment 2, the same observers carried out another speed discrimination task in order to establish discrimination thresholds. According to the Weber-Fechner law, these thresholds are expected to increase or decrease along with perceived speed. However, no effect of self-motion on discrimination thresholds was observed. This pattern of results suggests a limit on speed discrimination performance early in the visual system, with visuo-vestibular integration in later downstream areas. These results are consistent with previous work on heading perception.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROIMAGE.2017.06.068
Abstract: Recent progress in understanding the structure of neural representations in the cerebral cortex has centred around the application of multivariate classification analyses to measurements of brain activity. These analyses have proved a sensitive test of whether given brain regions provide information about specific perceptual or cognitive processes. An exciting extension of this approach is to infer the structure of this information, thereby drawing conclusions about the underlying neural representational space. These approaches rely on exploratory data-driven dimensionality reduction to extract the natural dimensions of neural spaces, including natural visual object and scene representations, semantic and conceptual knowledge, and working memory. However, the efficacy of these exploratory methods is unknown, because they have only been applied to representations in brain areas for which we have little or no secondary knowledge. One of the best-understood areas of the cerebral cortex is area MT of primate visual cortex, which is known to be important in motion analysis. To assess the effectiveness of dimensionality reduction for recovering neural representational space we applied several dimensionality reduction methods to multielectrode measurements of spiking activity obtained from area MT of marmoset monkeys, made while systematically varying the motion direction and speed of moving stimuli. Despite robust tuning at individual electrodes, and high classifier performance, dimensionality reduction rarely revealed dimensions for direction and speed. We use this example to illustrate important limitations of these analyses, and suggest a framework for how to best apply such methods to data where the structure of the neural representation is unknown.
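As a rough illustration of the analysis strategy described in the abstract above, the sketch below applies PCA to simulated direction-tuned spike counts; the toy tuning model, all parameters, and the alignment check are our assumptions, not the study's data or code.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_units, n_trials = 24, 200
directions = rng.uniform(0, 2 * np.pi, n_trials)          # motion direction per trial
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)

# Toy von-Mises-style tuning: each unit fires most at its preferred direction.
rates = 10 * np.exp(2 * np.cos(directions[:, None] - preferred[None, :]))
spikes = rng.poisson(rates)                                 # trials x units spike counts

# Exploratory dimensionality reduction, analogous to the analyses described above.
scores = PCA(n_components=2).fit_transform(spikes.astype(float))

# If direction is recoverable, trials should form a ring in PC space whose
# angular position tracks the true direction (up to rotation/reflection).
phase = np.arctan2(scores[:, 1], scores[:, 0])
align = max(abs(np.mean(np.exp(1j * (phase - directions)))),
            abs(np.mean(np.exp(1j * (phase + directions)))))
print(f"circular alignment between PC phase and direction: {align:.2f}")
```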
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 21-11-2007
DOI: 10.1167/7.14.2
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 03-11-2011
DOI: 10.1167/11.13.4
Abstract: The "neural correlate" of perceptual awareness is much sought-after. Here, we present an novel approach to the identification of possible neural correlates, in which we exploit the temporal connection that inevitably links the selection process that determines what we become aware of, and the development of awareness itself. Because the speed of selection determines when downstream processes can first become involved in generating awareness, the latency of neural processes provides a way to isolate the neural correlates of awareness. We recorded event-related potentials (ERPs) while observers carried out a visual behavioral task designed to estimate attentional selection latency. We show that within-task trial-by-trial behavioral variability in attentional selection latency correlates to trial-by-trial variability in ERP latency. This was true in a posterior contralateral region, and in central and frontal areas, thereby implicating these as waypoints along which visual information flows on the way to visual awareness.
Publisher: Elsevier BV
Date: 2011
DOI: 10.1016/J.CORTEX.2009.08.015
Abstract: The present study examined the coding of spatial position in object selective cortex. Using functional magnetic resonance imaging (fMRI) and pattern classification analysis, we find that three areas in object selective cortex, the lateral occipital cortex area (LO), the fusiform face area (FFA), and the parahippocampal place area (PPA), robustly code the spatial position of objects. The analysis further revealed several anisotropies (e.g., horizontal/vertical asymmetry) in the representation of visual space in these areas. Finally, we show that the representation of information in these areas permits object category information to be extracted across varying locations in the visual field, a finding that suggests a potential neural solution to accomplishing translation invariance.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 07-09-2011
DOI: 10.1167/11.10.1
Abstract: It was recently shown that expert face perception relies on the extraction of horizontally oriented visual cues. Picture-plane inversion was found to eliminate horizontal tuning, suggesting that this tuning contributes to the specificity of face processing. The present experiments sought to determine the spatial frequency (SF) scales supporting the horizontal tuning of face perception. Participants were instructed to match upright and inverted faces that were filtered both in the frequency and orientation domains. Faces in a pair contained horizontal or vertical ranges of information in low, middle, or high SF (LSF, MSF, or HSF). Our findings confirm that upright (but not inverted) face perception is tuned to horizontal orientation. Horizontal tuning was most robust in the MSF range, weaker in the HSF range, and absent in the LSF range. Moreover, face inversion selectively disrupted the ability to process horizontal information in MSF and HSF ranges. This finding was replicated even when task difficulty was equated across orientation and SF at upright orientation. Our findings suggest that upright face perception is tuned to horizontally oriented face information carried by intermediate and high SF bands. They further indicate that inversion alters the sampling of face information both in the orientation and SF domains.
Publisher: Springer Science and Business Media LLC
Date: 19-02-2011
Publisher: Frontiers Media SA
Date: 2010
Publisher: Elsevier BV
Date: 05-2018
DOI: 10.1016/J.NEUROIMAGE.2017.12.063
Abstract: Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain.
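The following is a minimal sketch of time-resolved multivariate decoding of the kind described above, using synthetic EEG-like data; the channel count, sampling step, onset latency, and effect size are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 32, 60    # pretend samples are 10 ms apart
positions = rng.integers(0, 2, n_trials)       # two possible object positions

# Synthetic EEG: position information appears in a few channels from
# sample 8 (~80 ms) onwards, loosely mirroring the latencies reported above.
X = rng.normal(size=(n_trials, n_channels, n_times))
X[:, :5, 8:] += 0.5 * (2 * positions[:, None, None] - 1)

# Train and cross-validate a separate classifier at every time point; the
# first above-chance time point estimates when position is represented.
acc = [cross_val_score(LogisticRegression(max_iter=1000),
                       X[:, :, t], positions, cv=5).mean()
       for t in range(n_times)]
onset = next((t for t, a in enumerate(acc) if a > 0.6), None)
print("first sample above 60% accuracy:", onset)
```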
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.VISRES.2016.11.003
Abstract: Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy.
Publisher: Cold Spring Harbor Laboratory
Date: 04-08-2020
DOI: 10.1101/2020.08.01.232595
Abstract: The fact that the transmission and processing of visual information in the brain takes time presents a problem for the accurate real-time localisation of a moving object. One way this problem might be solved is extrapolation: using an object’s past trajectory to predict its location in the present moment. Here, we investigate how a simulated in silico layered neural network might implement such extrapolation mechanisms, and how the necessary neural circuits might develop. We allowed an unsupervised hierarchical network of velocity-tuned neurons to learn its connectivity through spike-timing dependent plasticity. We show that the temporal contingencies between the different neural populations that are activated by an object as it moves causes the receptive fields of higher-level neurons to shift in the direction opposite to their preferred direction of motion. The result is that neural populations spontaneously start to represent moving objects as being further along their trajectory than where they were physically detected. Due to the inherent delays of neural transmission, this effectively compensates for (part of) those delays by bringing the represented position of a moving object closer to its instantaneous position in the world. Finally, we show that this model accurately predicts the pattern of perceptual mislocalisation that arises when human observers are required to localise a moving object relative to a flashed static object (the flash-lag effect). Our ability to track and respond to rapidly changing visual stimuli, such as a fast moving tennis ball, indicates that the brain is capable of extrapolating the trajectory of a moving object in order to predict its current position, despite the delays that result from neural transmission. Here we show how the neural circuits underlying this ability can be learned through spike-timing dependent synaptic plasticity, and that these circuits emerge spontaneously and without supervision. This demonstrates how the neural transmission delays can, in part, be compensated to implement the extrapolation mechanisms required to predict where a moving object is at the present moment.
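A minimal sketch of how spike-timing dependent plasticity can shift a receptive field against the motion direction, as described above; the single postsynaptic neuron, pair-based update rule, and all constants are our simplifications, not the published network.

```python
import numpy as np

n_inputs = 21                              # position-tuned input units along the path
w = np.full(n_inputs, 0.5)                 # initial synaptic weights
center = n_inputs // 2                     # output neuron initially driven from here
a_plus, a_minus, tau = 0.02, 0.021, 3.0    # STDP amplitudes and time constant (steps)

for _ in range(20):                        # repeated rightward sweeps of the object
    pre_times = np.arange(n_inputs)        # input i spikes at step i (object at i)
    t_post = pre_times[center]             # output spikes when the object reaches center
    dt = t_post - pre_times                # dt > 0: pre fired before post
    ltp = (dt > 0) * a_plus * np.exp(-dt / tau)
    ltd = (dt <= 0) * a_minus * np.exp(dt / tau)
    w = np.clip(w + ltp - ltd, 0.0, 1.0)

# Inputs from positions the object passed *before* the output spike are
# potentiated, so the weight profile peaks behind the original centre: the
# receptive field shifts against the preferred motion direction.
print("peak shift (negative = against motion):", int(np.argmax(w)) - center)
```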
Publisher: Elsevier BV
Date: 09-2017
DOI: 10.1016/J.VISRES.2017.05.012
Abstract: Human observers maintain a representation of the visual features of objects when they become occluded. This representation facilitates the interpretation of occluded events and allows us to quickly identify objects upon reappearing. Here we investigated whether visual features that change over time are also represented during occlusion. To answer this question we used an illusion from the time perception domain in which the perceived duration of an event increases as its temporal frequency content increases. In the first experiment we demonstrate temporal frequency induced modulation of duration both when the object remains visible as well as when it becomes temporarily occluded. Additionally, we demonstrate that time dilation for temporarily occluded objects cannot be explained by modulations of duration as a result of pre- and post-occlusion presentation of the object. In a second experiment, we corroborate this finding by demonstrating that modulation of the perceived duration of occluded events depends on the expected temporal frequency content of the object during occlusion. Together these results demonstrate that the dynamic properties of an object are represented during occlusion. We conclude that the representations of occluded objects contain a wide range of features derived from the period when the object was still visible, including information about both the static and dynamic properties of the object.
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.VISRES.2016.11.002
Abstract: Certain visual stimuli can have two possible interpretations. These perceptual interpretations may alternate stochastically, a phenomenon known as bistability. Some classes of bistable stimuli, including binocular rivalry, are sensitive to bias from input through other modalities, such as sound and touch. Here, we address the question whether bistable visual motion stimuli, known as plaids, are affected by vestibular input that is caused by self-motion. In Experiment 1, we show that a vestibular self-motion signal biases the interpretation of the bistable plaid, increasing or decreasing the likelihood of the plaid being perceived as globally coherent or transparently sliding depending on the relationship between self-motion and global visual motion directions. In Experiment 2, we find that when the vestibular direction is orthogonal to the visual direction, the vestibular self-motion signal also biases the direction of one-dimensional motion. This interaction suggests that the effect in Experiment 1 is due to the self-motion vector adding to the visual motion vectors. Together, this demonstrates that the perception of visual motion direction can be systematically affected by concurrent but uninformative and task-irrelevant vestibular input caused by self-motion.
Publisher: Society for Neuroscience
Date: 22-04-2021
DOI: 10.1523/JNEUROSCI.2017-20.2021
Abstract: The fact that the transmission and processing of visual information in the brain takes time presents a problem for the accurate real-time localization of a moving object. One way this problem might be solved is extrapolation: using an object's past trajectory to predict its location in the present moment. Here, we investigate how a simulated in silico layered neural network might implement such extrapolation mechanisms, and how the necessary neural circuits might develop. We allowed an unsupervised hierarchical network of velocity-tuned neurons to learn its connectivity through spike-timing-dependent plasticity (STDP). We show that the temporal contingencies between the different neural populations that are activated by an object as it moves causes the receptive fields of higher-level neurons to shift in the direction opposite to their preferred direction of motion. The result is that neural populations spontaneously start to represent moving objects as being further along their trajectory than where they were physically detected. Because of the inherent delays of neural transmission, this effectively compensates for (part of) those delays by bringing the represented position of a moving object closer to its instantaneous position in the world. Finally, we show that this model accurately predicts the pattern of perceptual mislocalization that arises when human observers are required to localize a moving object relative to a flashed static object (the flash-lag effect; FLE). SIGNIFICANCE STATEMENT Our ability to track and respond to rapidly changing visual stimuli, such as a fast-moving tennis ball, indicates that the brain is capable of extrapolating the trajectory of a moving object to predict its current position, despite the delays that result from neural transmission. Here, we show how the neural circuits underlying this ability can be learned through spike-timing-dependent synaptic plasticity and that these circuits emerge spontaneously and without supervision. This demonstrates how the neural transmission delays can, in part, be compensated to implement the extrapolation mechanisms required to predict where a moving object is at the present moment.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-12-2006
DOI: 10.1167/6.12.8
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.TICS.2021.11.003
Abstract: We feel that we perceive events in the environment as they unfold in real-time. However, this intuitive view of perception is impossible to implement in the nervous system due to biological constraints such as neural transmission delays. I propose a new way of thinking about real-time perception: at any given moment, instead of representing a single timepoint, perceptual mechanisms represent an entire timeline. On this timeline, predictive mechanisms predict ahead to compensate for delays in incoming sensory input, and reconstruction mechanisms retroactively revise perception when those predictions do not come true. This proposal integrates and extends previous work to address a crucial gap in our understanding of a fundamental aspect of our everyday life: the experience of perceiving the present.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-12-2006
DOI: 10.1167/6.12.6
Publisher: Elsevier BV
Date: 11-2013
DOI: 10.1016/J.NEUROIMAGE.2013.06.034
Abstract: In the motion aftereffect (MAE), adapting to a moving stimulus causes a subsequently presented stationary stimulus to appear to move in the opposite direction. Recently, the neural basis of the motion aftereffect has received considerable interest, and a number of brain areas have been implicated in the generation of the illusory motion. Here, we use functional magnetic resonance imaging in combination with multivariate pattern classification to directly compare the neural activity evoked during the observation of both real and illusory motions. We show that the perceived illusory motion is not encoded in the same way as real motion in the same direction. Instead, suppression of the adapted direction of motion results in a shift of the population response of motion sensitive neurons in area MT+, resulting in activation patterns that are in fact more similar to real motion in orthogonal, rather than opposite directions. Although robust motion selectivity was observed in visual areas V1, V2, V3, and V4, this MAE-specific modulation of the population response was only observed in area MT+. Implications for our understanding of the motion aftereffect, and models of motion perception in general, are discussed.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-02-2019
DOI: 10.1167/19.2.3
Abstract: Motion-induced position shifts constitute a broad class of visual illusions in which motion and position signals interact in the human visual pathway. In such illusions, the presence of visual motion distorts the perceived positions of objects in nearby space. Predictive mechanisms, which could contribute to compensating for processing delays due to neural transmission, have been proposed as an explanation. However, such mechanisms have struggled to explain why we do not usually perceive objects extrapolated beyond the end of their trajectory. Advocates of this interpretation have proposed a "correction-for-extrapolation" mechanism to explain this: When the object motion ends abruptly, this mechanism corrects the overextrapolation by shifting the perceived object location backwards to its actual location. However, such a mechanism has so far not been empirically demonstrated. Here, we use a novel version of the flash-grab illusion to demonstrate this mechanism. In the flash-grab effect, a target is flashed on a moving background that abruptly changes direction, leading to the mislocalization of the target. Here, we manipulate the angle of the direction change to dissociate the contributions of the background motion before and after the flash. Consistent with previous reports, we observe that perceptual mislocalization in the flash-grab illusion is mainly driven by motion after the flash. Importantly, however, we reveal a small but consistent mislocalization component in the direction opposite to the direction of the first motion sequence. This provides empirical support for the proposed correction-for-extrapolation mechanism, and therefore corroborates the interpretation that motion-induced position shifts might result from predictive interactions between motion and position signals.
Publisher: Public Library of Science (PLoS)
Date: 04-03-2019
Publisher: Springer Science and Business Media LLC
Date: 06-02-2018
DOI: 10.1038/S41598-018-20850-Y
Abstract: The abundance of temporal information in our environment calls for the effective selection and utilization of temporal information that is relevant for our behavior. Here we investigated whether visual attention gates the selective encoding of relevant duration information when multiple sources of duration information are present. We probed the encoding of duration by using a duration-adaptation paradigm. Participants adapted to two concurrently presented streams of stimuli with different durations, while detecting oddballs in one of the streams. We measured the resulting duration after-effect (DAE) and found that the DAE reflects stronger relative adaptation to attended durations, compared to unattended durations. Additionally, we demonstrate that unattended durations do not contribute to the measured DAE. These results suggest that attention plays a crucial role in the selective encoding of duration: attended durations are encoded, while encoding of unattended durations is either weak or absent.
Publisher: SAGE Publications
Date: 2015
DOI: 10.1068/P7832
Abstract: An important goal of cognitive neuroscience is understanding the neural underpinnings of conscious awareness. Although the low-level processing of sensory input is well understood in most modalities, it remains a challenge to understand how the brain translates such input into conscious awareness. Here, I argue that the application of multivariate pattern classification techniques to neuroimaging data acquired while observers experience perceptual illusions provides a unique way to dissociate sensory mechanisms from mechanisms underlying conscious awareness. Using this approach, it is possible to directly compare patterns of neural activity that correspond to the contents of awareness, independent from changes in sensory input, and to track these neural representations over time at high temporal resolution. I highlight five recent studies using this approach, and provide practical considerations and limitations for future implementations.
Publisher: Elsevier BV
Date: 11-2009
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2009.05.014
Abstract: Conflict between sensory modalities can be resolved by one modality overwriting another. For example, movement of a limb that is visible in a stationary visual afterimage results in selective fading of that limb in the afterimage. We investigated the interaction of these two sensory modalities by inducing a mismatch between visual and proprioceptive hand location. Whereas this discrepancy did not affect the initial appearance of the hand in the afterimage, it did prevent subsequent motion with the hand from affecting the hand's appearance. Location mismatch disconnected the visual and proprioceptive experiences of the hand, "protecting" the visual afterimage from interaction with proprioception. Investigation of subjective higher order bodily experiences showed a strong negative correlation between afterimage disruption and the subjective feeling of ownership, suggesting that the brain can resolve multimodal location mismatch by 'disowning' a visible limb, and that the interaction between proprioception and vision is mediated by higher order bodily experiences.
Publisher: Cold Spring Harbor Laboratory
Date: 21-04-2020
DOI: 10.1101/2020.04.20.051334
Abstract: Although visual awareness of an object typically increases neural responses, we identify a neural response that increases prior to perceptual disappearances, and that scales with the amount of invisibility reported during perceptual filling-in. These findings challenge long-held assumptions regarding the neural correlates of consciousness and entrained visually evoked potentials, by showing that the strength of stimulus-specific neural activity can encode the conscious absence of a stimulus. The focus of attention and the contents of consciousness frequently overlap. Yet what happens if this common correlation is broken? To test this, we asked human participants to attend and report on the invisibility of four visual objects which seemed to disappear, yet actually remained on screen. We found that neural activity increased, rather than decreased, when targets became invisible. This coincided with measures of attention that also increased when stimuli disappeared. Together, our data support recent suggestions that attention and conscious perception are distinct and separable. In our experiment, neural measures more strongly follow attention.
Publisher: Springer Science and Business Media LLC
Date: 24-06-2015
Publisher: Society for Neuroscience
Date: 13-08-2018
DOI: 10.1523/JNEUROSCI.0736-18.2018
Abstract: Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between “vision for action” and “vision for perception” for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception. SIGNIFICANCE STATEMENT Accurately localizing moving objects is a computational challenge for the brain due to the inevitable delays that result from neural transmission. To solve this, the brain might implement motion extrapolation, predicting where an object ought to be at the present moment. Here, we use the flash-grab effect to induce perceptual position shifts and show that the latency of imminent saccades predicts the perceived position of the objects they target. This counterintuitive finding is important because it not only shows that motion extrapolation mechanisms indeed work to reduce the behavioral impact of neural transmission delays in the human brain, but also that these mechanisms are closely matched in the perceptual and oculomotor systems.
Publisher: Elsevier BV
Date: 11-2021
Publisher: Society for Neuroscience
Date: 03-2019
DOI: 10.1523/ENEURO.0412-18.2019
Abstract: Hierarchical predictive coding is an influential model of cortical organization, in which sequential hierarchical levels are connected by backward connections carrying predictions, as well as forward connections carrying prediction errors. To date, however, predictive coding models have largely neglected to take into account that neural transmission itself takes time. For a time-varying stimulus, such as a moving object, this means that backward predictions become misaligned with new sensory input. We present an extended model implementing both forward and backward extrapolation mechanisms that realigns backward predictions to minimize prediction error. This realignment has the consequence that neural representations across all hierarchical levels become aligned in real time. Using visual motion as an example, we show that the model is neurally plausible, that it is consistent with evidence of extrapolation mechanisms throughout the visual hierarchy, that it predicts several known motion–position illusions in human observers, and that it provides a solution to the temporal binding problem.
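A toy numerical illustration of the realignment idea described above (our sketch, not the paper's model): with a one-step transmission delay, a backward prediction built from raw input is always stale, whereas a prediction extrapolated by velocity times delay realigns with the present.

```python
velocity, delay = 2.0, 1          # object speed (deg/step); one step per hierarchical hop

def position(t):
    return velocity * t           # true object position at time t

for t in range(3, 6):
    sensed = position(t - delay)           # input arriving now is one step old
    naive = sensed                         # backward prediction without realignment
    realigned = sensed + velocity * delay  # extrapolated forward by velocity x delay
    print(f"t={t}: true {position(t):.0f}, naive {naive:.0f}, realigned {realigned:.0f}")
```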
Publisher: Cold Spring Harbor Laboratory
Date: 22-03-2021
DOI: 10.1101/2021.03.22.436374
Abstract: In the flash-lag effect (FLE), a flash in spatiotemporal alignment with a moving object is misperceived as lagging behind the moving object. One proposed explanation for this illusion is based on predictive motion extrapolation of trajectories. In this interpretation, the diverging effects of velocity on the perceived position of the moving object suggest that FLE might be based on the neural representation of perceived, rather than physical, velocity. By contrast, alternative explanations based on differential latency or temporal averaging would predict that the FLE does not rely on such a representation of perceived velocity. Here we examined whether the FLE is sensitive to illusory changes in perceived speed that result in changes to perceived velocity, while physical speed is constant. The perceived speed of the moving object was manipulated using revolving wedge stimuli with variable pattern textures (Experiment 1) and luminance contrast (Experiment 2). The motion extrapolation interpretation would predict that the changes in FLE magnitude should correspond to the changes in the perceived speed of the moving object. In the current study, two experiments demonstrated that perceived speed and FLE magnitude increased in the dynamic pattern relative to the static pattern conditions, and that the same effect was found in the low contrast compared to the high contrast conditions. These results showed that manipulations of texture and contrast that are known to alter judgments of perceived speed also modulate perceived position. We interpret this as a consequence of motion extrapolation mechanisms and discuss possible explanations for why we observed no cross-effect correlation.
Publisher: Cold Spring Harbor Laboratory
Date: 09-04-2020
DOI: 10.1101/2020.04.08.032888
Abstract: Classic models of predictive coding propose that sensory systems use information retained from prior experience to predict current sensory input. Any mismatch between predicted and current input (prediction error) is then fed forward up the hierarchy, leading to a revision of the prediction. We tested this hypothesis in the domain of object vision using a combination of multivariate pattern analysis and time-resolved electroencephalography. We presented participants with sequences of images that stepped around fixation in a predictable order. On the majority of presentations, the images conformed to a consistent pattern of position order and object category order; however, on a subset of presentations the last image in the sequence violated the established pattern, in either the predicted category or the predicted position of the object. Contrary to classic predictive coding, we found no differences in decoding accuracy between predictable and violation conditions when decoding position and category. However, consistent with recent extensions of predictive coding, exploratory analyses showed that a greater proportion of predictions was made to the forthcoming position in the sequence than to either the previous position or the position behind the previous position, suggesting that the visual system actively anticipates future input as opposed to just inferring current input.
Publisher: Elsevier BV
Date: 08-2015
DOI: 10.1016/J.VISRES.2015.05.005
Abstract: Several visual illusions demonstrate that the neural processing of visual position can be affected by visual motion. Well-known examples are the flash-lag, flash-drag, and flash-jump effect. However, where and when in the visual processing hierarchy such interactions take place is unclear. Here, we used a variant of the flash-grab illusion (Vision Research 91 (2013), pp. 8-20) to shift the perceived positions of flashed stimuli, and applied multivariate pattern classification to individual 64-channel EEG trials to dissociate neural signals corresponding to veridical versus perceived position with high temporal resolution. We show illusory effects of motion on perceived position in three separate analyses: (1) A classifier can distinguish different perceived positions of a flashed object, even when the veridical positions are identical. (2) When the perceived positions of two objects presented in different locations become more similar, the classifier performs less well than when they become more different, even if the veridical positions remain unchanged. (3) Finally, a classifier can discriminate the perceived position of an object even when trained on objects presented in physically different positions. These effects are evident as early as 81 ms post-stimulus, concurrent with the very first EEG signals indicating that any stimulus is present at all. This finding shows that the illusion must begin at an early level, probably as part of a predominantly feed-forward mechanism, leaving the influence of any recurrent processes to later stages in the development of the effect.
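The third analysis described above rests on cross-condition generalisation: train a classifier where positions differ physically, then test it where they differ only perceptually. The sketch below illustrates that logic on synthetic data; the shared-pattern assumption and signal strengths are ours, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
pattern = rng.normal(size=64)              # shared channel pattern coding position

def make_trials(n, strength):
    """Trials whose position signal (physical or illusory) scales the pattern."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 64)) + strength * (2 * y[:, None] - 1) * pattern
    return X, y

# Train on physically different positions, test on physically identical
# stimuli whose positions differ only perceptually (weaker assumed signal).
X_train, y_train = make_trials(300, 0.3)
X_test, y_test = make_trials(300, 0.15)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"generalisation to illusory positions: {clf.score(X_test, y_test):.2f}")
```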
Publisher: Proceedings of the National Academy of Sciences
Date: 21-03-2017
Abstract: Our brain constantly selects salient and/or goal-relevant objects from the visual environment, so that it can operate on neural representations of these objects, but what is the fate of objects that are not selected? Are these discarded so that the brain only has an impoverished nonperceptual representation of them, or does the brain construct perceptually rich representations, even when objects are not consciously accessed by our cognitive system? Here, we answer that question by manipulating the information that enters into awareness, while simultaneously measuring cortical activity using EEG. We show that objects that do not enter consciousness can nevertheless have a neural signature that is indistinguishable from perceptually rich representations that occur for objects that do enter into conscious awareness.
Publisher: MIT Press - Journals
Date: 10-2016
DOI: 10.1162/JOCN_A_00986
Abstract: Visual perception seems continuous, but recent evidence suggests that the underlying perceptual mechanisms are in fact periodic—particularly visual attention. Because visual attention is closely linked to the preparation of saccadic eye movements, the question arises how periodic attentional processes interact with the preparation and execution of voluntary saccades. In two experiments, human observers made voluntary saccades between two placeholders, monitoring each one for the presentation of a threshold-level target. Detection performance was evaluated as a function of latency with respect to saccade landing. The time course of detection performance revealed oscillations at around 4 Hz both before the saccade at the saccade origin and after the saccade at the saccade destination. Furthermore, oscillations before and after the saccade were in phase, meaning that the saccade did not disrupt or reset the ongoing attentional rhythm. Instead, it seems that voluntary saccades are executed as part of an ongoing attentional rhythm, with the eyes in flight during the troughs of the attentional wave. This finding for the first time demonstrates that periodic attentional mechanisms affect not only perception but also overt motor behavior.
Publisher: MIT Press - Journals
Date: 07-2009
Abstract: In the rubber hand illusion (RHI), participants incorporate a rubber hand into a mental representation of their own body. This deceptive feeling of ownership is accompanied by recalibration of the perceived position of the participant's real hand toward the rubber hand. Neuroimaging data suggest involvement of the posterior parietal lobule during induction of the RHI, when recalibration of the real hand toward the rubber hand takes place. Here, we used off-line low-frequency repetitive transcranial magnetic stimulation (rTMS) in a double-blind, sham-controlled within-subjects design to investigate the role of the inferior posterior parietal lobule (IPL) in establishing the RHI directly. Results showed that rTMS over the IPL attenuated the strength of the RHI for immediate perceptual body judgments only. In contrast, delayed perceptual responses were unaffected. Furthermore, ballistic action responses as well as subjective self-reports of feeling of ownership over the rubber hand remained unaffected by rTMS over the IPL. These findings are in line with previous research showing that the RHI can be broken down into dissociable bodily sensations. The illusion does not merely affect the embodiment of the rubber hand but also influences the experience and localization of one's own hand in an independent manner. Finally, the present findings concur with a multicomponent model of somatosensory body representations, wherein the IPL plays a pivotal role in subserving perceptual body judgments, but not actions or higher-order affective bodily judgments.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 10-01-2019
DOI: 10.1167/19.1.3
Abstract: Neural processing of sensory input in the brain takes time, and for that reason our awareness of visual events lags behind their actual occurrence. One way the brain might compensate to minimize the impact of the resulting delays is through extrapolation. Extrapolation mechanisms have been argued to underlie perceptual illusions in which moving and static stimuli are mislocalised relative to one another (such as the flash-lag and related effects). However, where in the visual hierarchy such extrapolation processes take place remains unknown. Here, we address this question by identifying monocular and binocular contributions to the flash-grab illusion. In this illusion, a brief target is flashed on a moving background that reverses direction. As a result, the perceived position of the target is shifted in the direction of the reversal. We show that the illusion is attenuated, but not eliminated, when the motion reversal and the target are presented dichoptically to separate eyes. This reveals that extrapolation mechanisms at both monocular and binocular processing stages contribute to the illusion. We interpret the results in a hierarchical predictive coding framework, and argue that prediction errors in this framework manifest directly as perceptual illusions.
Publisher: Cold Spring Harbor Laboratory
Date: 17-05-2021
DOI: 10.1101/2021.05.16.444383
Abstract: Obesity has become a significant problem worldwide and is strongly linked to poor food choices. Even in healthy individuals, taste perceptions often drive dietary decisions more strongly than healthiness. This study tested whether health and taste representations can be directly decoded from brain activity, both when explicitly considered, and when implicitly processed for decision-making. We used multivariate support vector regression for event-related potentials (as measured by the electroencephalogram) occurring in the first second of food cue processing to predict ratings of tastiness and healthiness. In Experiment 1, 37 healthy participants viewed images of various foods and explicitly rated their tastiness and healthiness, whereas in Experiment 2, 89 healthy participants indicated their desire to consume snack foods, with no explicit instruction to consider tastiness or healthiness. In Experiment 1 both attributes could be decoded, with taste information being available earlier than health. In Experiment 2, both dimensions were also decodable, and their significant decoding preceded the decoding of decisions (i.e., desire to consume the food). However, in Experiment 2, health representations were decodable earlier than taste representations. These results suggest that health information is activated in the brain during the early stages of dietary decisions, which is promising for designing obesity interventions aimed at quickly activating health awareness.
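As a hedged sketch of the decoding approach described above, the code below runs cross-validated support vector regression from synthetic ERP-like features to continuous ratings; the feature layout and effect size are assumptions, not the study's data or pipeline.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_features = 300, 64            # e.g., channel amplitudes in one time window
ratings = rng.uniform(0, 10, n_trials)    # tastiness rating given to each food image
weights = rng.normal(size=n_features)

# Synthetic ERP features that carry a weak linear trace of the rating.
X = 0.1 * ratings[:, None] * weights[None, :] + rng.normal(size=(n_trials, n_features))

# Cross-validated support vector regression; R^2 above zero indicates the
# attribute can be read out from the neural features.
r2 = cross_val_score(SVR(kernel="linear"), X, ratings, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 for rating decoding: {r2:.2f}")
```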
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 08-12-2023
Start Date: 2018
End Date: 2021
Funder: Australian Research Council
Start Date: 2022
End Date: 12-2025
Amount: $480,000.00
Funder: Australian Research Council
Start Date: 2018
End Date: 12-2021
Amount: $224,428.00
Funder: Australian Research Council
Start Date: 06-2021
End Date: 07-2025
Amount: $920,275.00
Funder: Australian Research Council