ORCID Profile
0000-0002-5487-1755
Current Organisation
University of Sydney
Publisher: Frontiers Media SA
Date: 2011
Publisher: Society for Neuroscience
Date: 08-06-2016
Publisher: SAGE Publications
Date: 2015
DOI: 10.1068/P7925
Abstract: Presenting a large optic flow pattern to observers is likely to cause postural sway. However, directional anisotropies have been reported, in that contracting optic flow induces more postural sway than expanding optic flow. Recently, we showed that the biomechanics of the lower leg cannot account for this anisotropy (Holten, Donker, Verstraten, & van der Smagt, 2013, Experimental Brain Research, 228, 117–129). The question we address in the current study is whether differences in visual processing of optic flow directions, in particular the perceptual strength of these directions, mirrors the anisotropy apparent in postural sway. That is, can contracting optic flow be considered to be a perceptually stronger visual stimulus than expanding optic flow? In the current study we use a breaking continuous flash suppression paradigm where we assume that perceptually stronger visual stimuli will break the flash suppression earlier, making the suppressed optic flow stimulus visible sooner. Surprisingly, our results show the opposite, in that expanding optic flow is detected earlier than contracting optic flow.
Publisher: Elsevier BV
Date: 09-2017
DOI: 10.1016/J.VISRES.2017.06.001
Abstract: Previous research has shown that when a moving stimulus is presented to a moving observer, the perceived speed of the stimulus is affected by vestibular self-motion signals (Hogendoorn, Verstraten, MacDougall, & Alais, 2017. Vision Research 130, 22-30.). This interaction was interpreted as a weighted sum of visual and vestibular motion signals. This interpretation also predicts effects of vestibular self-motion signals on perceived speed. Here, we test this prediction in two experiments. In Experiment 1, moving observers carried out a visual speed discrimination task in order to establish points of subjective equality (PSE) between stimuli presented in the same or opposite direction of self-motion. We observed robust effects of self-motion on perceived speed, with self-motion in the same direction as visual motion resulting in increases in perceived speed and vice versa. These effects were well-described by a limited-width integration window. In Experiment 2, the same observers carried out another speed discrimination task in order to establish discrimination thresholds. According to the Weber-Fechner law, these thresholds are expected to increase or decrease along with perceived speed. However, no effect of self-motion on discrimination thresholds was observed. This pattern of results suggests a limit on speed discrimination performance early in the visual system, with visuo-vestibular integration in later downstream areas. These results are consistent with previous work on heading perception.
Publisher: Elsevier BV
Date: 06-2013
DOI: 10.1016/J.VISRES.2013.04.010
Abstract: The ability to detect an object depends on the contrast between the object and its background. Despite this, many models of visual search rely solely on the properties of target and distractors, and do not take the background into account. Yet, both target and distractors have their individual contrasts with the background. These contrasts generally differ, because the target and distractors are different in at least one feature. Therefore, background is likely to play an important role in visual search. In three experiments we manipulated the properties of the background (luminance, orientation and spatial frequency, respectively) while keeping the target and distractors constant. In the first experiment, in which target and distractors had a different luminance, changing the background luminance had an extensive effect on search times. When background luminance was in between that of the target and distractors, search times were always short. Interestingly, when the background was darker than both the target and the distractors, search times were much longer than when the background was lighter. Manipulating orientation and spatial frequency of the background, on the other hand, resulted in search times that were longest for small target-background differences. Thus, background plays an important role in search. This role depends on the individual contrast of both target and distractors with the background and the type of feature contrast (luminance, orientation or spatial frequency).
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 03-11-2011
DOI: 10.1167/11.13.4
Abstract: The "neural correlate" of perceptual awareness is much sought-after. Here, we present a novel approach to the identification of possible neural correlates, in which we exploit the temporal connection that inevitably links the selection process that determines what we become aware of, and the development of awareness itself. Because the speed of selection determines when downstream processes can first become involved in generating awareness, the latency of neural processes provides a way to isolate the neural correlates of awareness. We recorded event-related potentials (ERPs) while observers carried out a visual behavioral task designed to estimate attentional selection latency. We show that within-task trial-by-trial behavioral variability in attentional selection latency correlates with trial-by-trial variability in ERP latency. This was true in a posterior contralateral region, and in central and frontal areas, thereby implicating these as waypoints along which visual information flows on the way to visual awareness.
Publisher: SAGE Publications
Date: 28-10-2013
Abstract: To find a target during visual search, observers often need to make multiple eye movements, which results in a scan path. It is an open question whether the saccade destinations in scan paths are planned ahead. In the two experiments reported here, we investigated this question by focusing on the observer’s ability to deviate from potentially planned paths. In the first experiment, the stimulus configuration could change during the initial saccade. We found that the observer’s ability to deviate from potentially planned paths crucially depended on whether altered configurations could be processed with sufficient rapidity. In a follow-up experiment, we asked whether planned paths can include more than two saccade destinations. Investigating the influence of potentially planned paths on a secondary task demonstrated that planned paths can include at least three saccade destinations. Together, these experiments provide the first evidence of scan-path planning in visual search.
Publisher: Elsevier BV
Date: 12-2021
Publisher: Springer Science and Business Media LLC
Date: 04-07-2016
Publisher: SAGE Publications
Date: 10-2021
Publisher: Society for Neuroscience
Date: 23-07-2014
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.VISRES.2016.11.003
Abstract: Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy.
Publisher: Cold Spring Harbor Laboratory
Date: 17-04-2023
DOI: 10.1101/2023.04.17.537137
Abstract: Recent evidence suggests that perceptual and cognitive functions are codetermined by rhythmic bodily states. Prior investigations have focused on the cardiac and respiratory rhythms, both of which are also known to synchronise with locomotion – arguably our most common and natural of voluntary behaviours. Unlike the cardiorespiratory rhythms, walking is entirely under voluntary control, enabling a test of how natural and voluntary rhythmic action may affect sensory function. Here, we show that the speed and phase of human locomotion constrains sensorimotor performance. We used a continuous visuo-motor tracking task in a wireless, body-tracking virtual environment, and found that the accuracy and reaction time of continuous reaching movements were decreased at slower walking speeds, and rhythmically modulated according to the phases of the step-cycle. Decreased accuracy when walking at slow speeds suggests an advantage for interlimb coordination at normal walking speeds, in contrast to previous research on dual-task walking and reach-to-grasp movements. Phasic modulations of reach precision within the step-cycle also suggest that the upper limbs are affected by the ballistic demands of motor-preparation during natural locomotion. Together these results show that the natural phases of human locomotion impose constraints on sensory function and demonstrate the value of examining dynamic and natural behaviour in contrast to the traditional and static methods of psychological science.
Publisher: SAGE Publications
Date: 03-2022
Publisher: Elsevier BV
Date: 09-2017
DOI: 10.1016/J.VISRES.2017.05.012
Abstract: Human observers maintain a representation of the visual features of objects when they become occluded. This representation facilitates the interpretation of occluded events and allows us to quickly identify objects upon reappearing. Here we investigated whether visual features that change over time are also represented during occlusion. To answer this question we used an illusion from the time perception domain in which the perceived duration of an event increases as its temporal frequency content increases. In the first experiment we demonstrate temporal frequency induced modulation of duration both when the object remains visible as well as when it becomes temporarily occluded. Additionally, we demonstrate that time dilation for temporarily occluded objects cannot be explained by modulations of duration as a result of pre- and post-occlusion presentation of the object. In a second experiment, we corroborate this finding by demonstrating that modulation of the perceived duration of occluded events depends on the expected temporal frequency content of the object during occlusion. Together these results demonstrate that the dynamic properties of an object are represented during occlusion. We conclude that the representations of occluded objects contain a wide range of features derived from the period when the object was still visible, including information about both the static and dynamic properties of the object.
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.VISRES.2016.11.002
Abstract: Certain visual stimuli can have two possible interpretations. These perceptual interpretations may alternate stochastically, a phenomenon known as bistability. Some classes of bistable stimuli, including binocular rivalry, are sensitive to bias from input through other modalities, such as sound and touch. Here, we address the question whether bistable visual motion stimuli, known as plaids, are affected by vestibular input that is caused by self-motion. In Experiment 1, we show that a vestibular self-motion signal biases the interpretation of the bistable plaid, increasing or decreasing the likelihood of the plaid being perceived as globally coherent or transparently sliding depending on the relationship between self-motion and global visual motion directions. In Experiment 2, we find that when the vestibular direction is orthogonal to the visual direction, the vestibular self-motion signal also biases the direction of one-dimensional motion. This interaction suggests that the effect in Experiment 1 is due to the self-motion vector adding to the visual motion vectors. Together, this demonstrates that the perception of visual motion direction can be systematically affected by concurrent but uninformative and task-irrelevant vestibular input caused by self-motion.
Publisher: SAGE Publications
Date: 27-09-2012
Publisher: Elsevier BV
Date: 11-2013
DOI: 10.1016/J.NEUROIMAGE.2013.06.034
Abstract: In the motion aftereffect (MAE), adapting to a moving stimulus causes a subsequently presented stationary stimulus to appear to move in the opposite direction. Recently, the neural basis of the motion aftereffect has received considerable interest, and a number of brain areas have been implicated in the generation of the illusory motion. Here, we use functional magnetic resonance imaging in combination with multivariate pattern classification to directly compare the neural activity evoked during the observation of both real and illusory motions. We show that the perceived illusory motion is not encoded in the same way as real motion in the same direction. Instead, suppression of the adapted direction of motion results in a shift of the population response of motion sensitive neurons in area MT+, resulting in activation patterns that are in fact more similar to real motion in orthogonal, rather than opposite directions. Although robust motion selectivity was observed in visual areas V1, V2, V3, and V4, this MAE-specific modulation of the population response was only observed in area MT+. Implications for our understanding of the motion aftereffect, and models of motion perception in general, are discussed.
Publisher: Springer Science and Business Media LLC
Date: 06-02-2018
DOI: 10.1038/S41598-018-20850-Y
Abstract: The abundance of temporal information in our environment calls for the effective selection and utilization of temporal information that is relevant for our behavior. Here we investigated whether visual attention gates the selective encoding of relevant duration information when multiple sources of duration information are present. We probed the encoding of duration by using a duration-adaptation paradigm. Participants adapted to two concurrently presented streams of stimuli with different durations, while detecting oddballs in one of the streams. We measured the resulting duration after-effect (DAE) and found that the DAE reflects stronger relative adaptation to attended durations, compared to unattended durations. Additionally, we demonstrate that unattended durations do not contribute to the measured DAE. These results suggest that attention plays a crucial role in the selective encoding of duration: attended durations are encoded, while encoding of unattended durations is either weak or absent.
Publisher: Public Library of Science (PLoS)
Date: 04-03-2019
Publisher: Springer Science and Business Media LLC
Date: 10-05-2013
DOI: 10.1007/S00221-013-3543-Z
Abstract: Optic flow simulating self-motion through the environment can induce postural adjustments in observers. Some studies investigating this phenomenon have used optic flow patterns increasing in speed from center to periphery, whereas others used optic flow patterns with a constant speed. However, altering the speed gradient of an optic flow stimulus changes the perceived rigidity of such a stimulus. Optic flow stimuli that are perceived as rigid can be expected to provide a stronger sensation of self-motion than non-rigid optic flow, and this may well be reflected in the amount of postural sway. The current study, therefore, examined, by manipulating the speed gradient, to what extent the rigidity of an optic flow stimulus influences posture along the anterior-posterior axis. We used radial random dot expanding or contracting optic flow patterns with three different speed profiles (single-speed, linear speed gradient or quadratic speed gradient) that differentially induce the sensation of self-motion. Interestingly, most postural sway was observed for the non-rigid single-speed optic flow pattern, which contained the least self-motion information of the three profiles. Moreover, we found an anisotropy in that contracting optic flow produced more postural sway than expanding optic flow. In addition, the amount of postural sway increased with increasing stimulus speed, but for contracting optic flow only. Taken together, the results of the current study support the view that visual and sensorimotor systems appear to be tailored toward compensating for rigid optic flow stimulation.
Publisher: Elsevier BV
Date: 10-2016
DOI: 10.1016/J.VISRES.2016.08.002
Abstract: During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions. Namely, only when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information even for motion stimuli thought to rely on higher-level motion areas.
Publisher: SAGE Publications
Date: 29-11-2017
Abstract: Several models of selection in search predict that saccades are biased toward conspicuous objects (also referred to as salient objects). Indeed, it has been demonstrated that initial saccades are biased toward the most conspicuous candidate. However, in a recent study, no such bias was found for the second saccade, and it was concluded that the attraction of conspicuous elements is limited to only short-latency initial saccades. This conclusion is based on only a single feature manipulation (orientation contrast) and conflicts with the prediction of influential salience models. Here, we investigate whether this result can be generalized beyond the domain of orientation. In displays containing three luminance annuli (Experiment 1), we find a considerable bias toward the most conspicuous candidate for the second saccade. In Experiment 1, the target could not be discriminated peripherally. When we made the target peripherally discriminable, the second saccade was no longer biased toward the more conspicuous candidate (Experiment 2). Thus, conspicuity plays a role in saccadic selection beyond the initial saccade. Whether second saccades are biased toward conspicuous objects appears to depend on the type of feature contrast underlying the conspicuity and the peripheral discriminability of target properties.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 21-03-2019
DOI: 10.1167/19.3.4
Abstract: It is known that moving visual stimuli are perceived to last longer than stationary stimuli with the same physical duration (Kanai, Paffen, Hogendoorn, & Verstraten, 2006), and that motor actions (Tomassini & Morrone, 2016) and eye movements (Morrone, Ross, & Burr, 2005) can alter perceived duration. In the present work, we investigated the contributions of stimulus motion and self-motion to perceived duration while observers stood or walked in a virtual reality environment. Using a visual temporal reproduction task, we independently manipulated both the participants' motion (stationary or walking) and the stimulus motion (retinal stationary, real-world stationary and negative double velocity). When the observers were standing still, drifting gratings were perceived as lasting longer than duration-matched static gratings. Interestingly, we did not see any time distortion when observers were walking, neither when the gratings were kept stationary relative to the observer's point of view (i.e., no retinal motion) nor when they were stationary in the external world (i.e., producing the same retinal velocity as the walking condition with stationary grating). Self-motion caused significant dilation in perceived duration only when the gratings were moving at double speed, opposite to the observers' walking direction. Consistent with previous work (Fornaciai, Arrighi, & Burr, 2016), this suggests that the system is able to suppress self-generated motion to enhance external motion, which would have ecological benefits, for example, for threat detection while navigating through the environment.
Publisher: SAGE Publications
Date: 10-07-2016
Abstract: The mechanisms held responsible for familiar face recognition are thought to be orientation dependent: inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same–different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply that different mechanisms operate on upright and inverted faces.
Publisher: SAGE Publications
Date: 11-05-2022
Publisher: Public Library of Science (PLoS)
Date: 02-07-2014
Publisher: SAGE Publications
Date: 30-09-2015
Abstract: In his original contribution, Exner’s principal concern was a comparison between the properties of different aftereffects, and particularly to determine whether aftereffects of motion were similar to those of color and whether they could be encompassed within a unified physiological framework. Despite the fact that he was unable to answer his main question, there are some excellent—so far unknown—contributions in Exner’s paper. For example, he describes observations that can be related to binocular interaction, not only in motion aftereffects but also in rivalry. To the best of our knowledge, Exner provides the first description of binocular rivalry induced by differently moving patterns in each eye, for motion as well as for their aftereffects. Moreover, apart from several known, but beautifully addressed, phenomena he makes a clear distinction between motion in depth based on stimulus properties and motion in depth based on the interpretation of motion. That is, the experience of movement, as distinct from the perception of movement. The experience, unlike the perception, did not result in a motion aftereffect in depth.
Publisher: SAGE Publications
Date: 2017
Abstract: The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face.
Publisher: Springer Science and Business Media LLC
Date: 16-09-2015
Publisher: Elsevier BV
Date: 08-2015
DOI: 10.1016/J.VISRES.2015.05.005
Abstract: Several visual illusions demonstrate that the neural processing of visual position can be affected by visual motion. Well-known examples are the flash-lag, flash-drag, and flash-jump effect. However, where and when in the visual processing hierarchy such interactions take place is unclear. Here, we used a variant of the flash-grab illusion (Vision Research 91 (2013), pp. 8-20) to shift the perceived positions of flashed stimuli, and applied multivariate pattern classification to individual 64-channel EEG trials to dissociate neural signals corresponding to veridical versus perceived position with high temporal resolution. We show illusory effects of motion on perceived position in three separate analyses: (1) A classifier can distinguish different perceived positions of a flashed object, even when the veridical positions are identical. (2) When the perceived positions of two objects presented in different locations become more similar, the classifier performs less well than when they become more different, even if the veridical positions remain unchanged. (3) Finally, a classifier can discriminate the perceived position of an object even when trained on objects presented in physically different positions. These effects are evident as early as 81 ms post-stimulus, concurrent with the very first EEG signals indicating that any stimulus is present at all. This finding shows that the illusion must begin at an early level, probably as part of a predominantly feed-forward mechanism, leaving the influence of any recurrent processes to later stages in the development of the effect.
Publisher: SAGE Publications
Date: 18-07-2014
No related grants have been discovered for Frans Verstraten.