ORCID Profile
0000-0001-9026-1199
Current Organisations
University of Nottingham, University College London
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Sensory Processes, Perception and Performance | Detection and Prevention of Crime; Security Services | Sensory Systems | Evidence and Procedure
Behavioural and cognitive sciences | Expanding Knowledge in Psychology and Cognitive Sciences | Law enforcement | Hearing, vision, speech and their disorders
Publisher: Elsevier BV
Date: 08-2003
DOI: 10.1016/S0042-6989(03)00281-5
Abstract: The tendency for briefly flashed stimuli to appear to lag behind the spatial position of physically aligned moving stimuli is known as the flash-lag effect. Possibly the simplest explanation for this phenomenon is that transient stimuli are processed more slowly than moving stimuli. We tested this proposal using a task based upon the simultaneous tilt illusion. When an oriented stimulus is surrounded by another oriented stimulus, the inner stimulus can appear to be rotated away from the orientation of the surround. By flashing central static sinewave gratings at specific phases of an annular grating's rotation cycle, we were able to determine the temporal dependence of the tilt illusion. Our results suggest a small, approximately 20 ms, processing advantage for the rotating stimulus relative to the flashed stimulus. Such a small advantage, if due to differential latencies, is insufficient to account for the flash-lag effect.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-2011
DOI: 10.1167/11.7.1
Abstract: Detection and identification of objects are the most crucial goals of visual perception. We studied the role of luminance and chromatic information for object processing by comparing performance of familiar, meaningful object contours with those of novel, non-object contours. Comparisons were made between full-color and reduced-color object (or non-object) contours. Full-color stimuli contained both chromatic and luminance information, whereas luminance information was absent in the reduced-color stimuli. All stimuli were made equally salient by fixing them at multiples of discrimination threshold contrast. In a subsequent electroencephalographic experiment, observers were asked to classify contours as objects or non-objects. An advantage in accuracy was found for full-color stimuli over the reduced-color stimuli but only if the contours depicted objects as opposed to non-objects. Event-related potentials revealed the neural correlate of this object-specific luminance advantage. The amplitude of the centro-occipital N1 component was modulated by stimulus class with the effect being driven by the presence of luminance information. We conclude that high-level discrimination processes in the cortex start relatively early and exhibit object-selective effects only in the presence of luminance information. This is consistent with the superiority of luminance in subserving object identification processes.
Publisher: Cold Spring Harbor Laboratory
Date: 13-09-2023
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-10-2011
DOI: 10.1167/11.12.1
Abstract: In primates, inspection of a visual scene is typically interrupted by frequent gaze shifts, occurring at an average rate of three to five times per second. Perceptually, these gaze shifts are accompanied by a compression of visual space toward the saccade target, which may be attributed to an oculomotor signal that transiently influences visual processing. While previous studies of compression have focused exclusively on saccadic eye movements made with the head artificially immobilized, many brain structures involved in saccade generation also encode combined eye-head gaze shifts. Thus, in order to understand the interaction between gaze motor and visual signals, we studied perception during eye-head gaze shifts and found a powerful compression of visual space that was spatially directed toward the intended gaze (and not the eye movement) target location. This perceptual compression was nearly constant in duration across gaze shift amplitudes, suggesting that the signal that triggers compression is largely independent of the size and kinematics of the gaze shift. The spatial pattern of results could be captured by a model that involves interactions, on a logarithmic map of visual space, between two loci of neural activity that encode the gaze shift vector and visual stimulus position relative to the fovea.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-10-2022
DOI: 10.1167/JOV.22.11.7
Publisher: The Royal Society
Date: 21-12-2005
Abstract: We examined whether the detection of audio–visual temporal synchrony is determined by a pre-attentive parallel process, or by an attentive serial process using a visual search paradigm. We found that detection of a visual target that changed in synchrony with an auditory stimulus was gradually impaired as the number of unsynchronized visual distractors increased (experiment 1), whereas synchrony discrimination of an attended target in a pre-cued location was unaffected by the presence of distractors (experiment 2). The effect of distractors cannot be ascribed to reduced target visibility nor can the increase in false alarm rates be predicted by a noisy parallel processing model. Reaction times for target detection increased linearly with number of distractors, with the slope being about twice as steep for target-absent trials as for target-present trials (experiment 3). Similar results were obtained regardless of whether the audio–visual stimulus consisted of visual flashes synchronized with amplitude-modulated pips, or of visual rotations synchronized with frequency-modulated up–down sweeps. All of the results indicate that audio–visual perceptual synchrony is judged by a serial process and are consistent with the suggestion that audio–visual temporal synchrony is detected by a ‘mid-level’ feature matching process.
Publisher: Elsevier BV
Date: 05-2017
DOI: 10.1016/J.VISRES.2016.02.004
Abstract: Visual analyses of movement are disproportionately reliant on luminance contrast, as opposed to colour differences. One consequence is that if a moving pattern is defined solely by changes in colour (is equiluminant), people can report having no sensation of movement, despite still being able to 'see' the pattern. This is called motion standstill. To date there have been no formal reports of foveal motion standstill. Here we investigate whether this is because the conditions necessary for inducing motion standstill are particular to peripheral vision and therefore absent at the fovea. We used pre-adaptation to luminance-defined motion to encourage motion standstill of equiluminant inputs (see Willis & Anderson, 1998). We found that this could be successful for both peripheral and foveal inputs. Our data thus show that the sensation of colour-defined movement can be similarly degraded by pre-adaptation to luminance-defined motion at both the fovea and in peripheral vision.
Publisher: Springer Science and Business Media LLC
Date: 09-2003
DOI: 10.1038/NATURE01955
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 23-03-2018
DOI: 10.1167/18.3.12
Publisher: Springer Science and Business Media LLC
Date: 05-2009
DOI: 10.3758/APP.71.4.757
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-08-2007
DOI: 10.1167/7.11.14
Publisher: Frontiers Media SA
Date: 2010
Publisher: Public Library of Science (PLoS)
Date: 05-02-2018
Publisher: Elsevier BV
Date: 05-2011
Publisher: Springer Science and Business Media LLC
Date: 24-01-2022
DOI: 10.1038/S41598-022-05289-6
Abstract: One of the seminal findings of cognitive neuroscience is that the power of occipital alpha-band (~ 10 Hz) brain waves is increased when people's eyes are closed, rather than open. This has encouraged the view that alpha oscillations are a default dynamic, to which the visual brain returns in the absence of input. Accordingly, we might be unable to increase the power of alpha oscillations when the eyes are closed, above the level that would normally ensue when people close their eyes. Here we report counter evidence. We used electroencephalography (EEG) to record brain activity when people had their eyes open and closed, both before and after they had adapted to radial motion. The increase in alpha power when people closed their eyes was increased by prior adaptation to a broad range of radial motion speeds. This effect was greatest for 10 Hz motion, but robust for other frequencies (and especially 7.5 Hz). This discredits a persistent entrainment of activity at the adaptation frequency as an explanation for our findings. Our data show that the power of occipital alpha-band brain waves can be increased by motion sensitive visual processes that persist when the eyes are closed. Consequently, we suggest that the power of these brain waves is, at least in part, an index of the degree to which visual brain activity is being subjected to inhibition. This is increased when people close their eyes, but can be even further increased by pre-adaptation to radial motion.
Publisher: Cold Spring Harbor Laboratory
Date: 18-04-2023
DOI: 10.1101/2023.04.18.537334
Abstract: The ability of humans to identify and reproduce short time intervals (in the region of a second) may be affected by many factors, ranging from the gender of the individual observer, through the attentional state, to the precise spatiotemporal structure of the stimulus. The relative roles of these very different factors are a challenge to describe and define; several methodological approaches have been used to achieve this with varying degrees of success. Here we describe a new paradigm affording not only a first-order measurement of the perceived duration of an interval but also a second-order metacognitive judgement of perceived time. This approach, we argue, expands the form of the data generally collected in duration judgements and allows more detailed comparison of psychophysical behaviour to the underlying theory. We also describe a measurement model which provides estimates of the variability of the temporal estimates and the metacognitive judgements, allowing comparison to an ideal observer. We fit the model to data collected for judgements of 750 ms (bisecting 1500 ms) and 1500 ms (bisecting 3000 ms) intervals across three stimulus modalities (Visual, Audio & Audiovisual). This enhanced form of data on a given interval judgement, and the ability to track its progression on a trial-by-trial basis, offers a way of looking at the different roles that subject-based, task-based and stimulus-based factors have on the perception of time.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 16-03-2010
DOI: 10.1167/3.9.190
Publisher: Cold Spring Harbor Laboratory
Date: 08-11-2022
DOI: 10.1101/2022.11.07.515537
Abstract: Signal-detection theory (SDT) is one of the most popular frameworks for analyzing data from studies of human behavior – including investigations of confidence. SDT-based analyses of confidence deliver both standard estimates of sensitivity (d’), and a second estimate based only on high-confidence decisions – meta d’. The extent to which meta d’ estimates fall short of d’ estimates is regarded as a measure of metacognitive inefficiency, quantifying the contamination of confidence by additional noise. These analyses rely on a key but questionable assumption – that repeated exposures to an input will evoke a normally-shaped distribution of perceptual experiences (the normality assumption). Here we show, via analyses inspired by an experiment and modelling, that when distributions of experiences do not conform with the normality assumption, meta d’ can be systematically underestimated relative to d’. Our data therefore highlight that SDT-based analyses of confidence do not provide a ground truth measure of human metacognitive inefficiency. Signal-detection theory is one of the most popular frameworks for analysing data from experiments of human behaviour – including investigations of confidence. The authors show that the results of these analyses cannot be regarded as ground truth. If a key assumption of the framework is inadvertently violated, analyses can encourage conceptually flawed conclusions.
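As a rough illustration of the sensitivity measure this abstract discusses (a generic textbook sketch, not the authors' analysis code), the standard equal-variance SDT estimate of d' is the difference between the z-transformed hit and false-alarm rates — and it is precisely this z-transform that embeds the normality assumption the paper questions:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance SDT sensitivity: d' = z(hits) - z(false alarms).

    The inverse normal CDF (z) is where the normality assumption enters:
    both stimulus distributions are assumed Gaussian with equal variance.
    Rates must be strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# 84% hits with 16% false alarms yields d' of about 2.0,
# a common textbook benchmark for this pair of rates.
print(round(d_prime(0.84, 0.16), 2))
```

Estimating meta-d' additionally requires fitting confidence-rating data under the same distributional assumptions, which is where the underestimation described above can arise; that fitting procedure is not shown here.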
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-03-2010
DOI: 10.1167/2.7.555
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 28-09-2016
DOI: 10.1167/16.11.23
Abstract: The synchronous change of a feature across multiple discrete elements, i.e., temporal synchrony, has been shown to be a powerful cue for grouping and segmentation. This has been demonstrated with both static and dynamic stimuli for a range of tasks. However, in addition to temporal synchrony, stimuli in previous research have included other cues which can also facilitate grouping and segmentation, such as good continuation and coherent spatial configuration. To evaluate the effectiveness of temporal synchrony for grouping and segmentation in isolation, here we measure signal detection thresholds using a global-Gabor stimulus in the presence/absence of a synchronous event. We also examine the impact of the spatial proximity of the to-be-grouped elements on the effectiveness of temporal synchrony, and the duration for which elements are bound together following a synchronous event in the absence of further segmentation cues. The results show that temporal synchrony (in isolation) is an effective cue for grouping local elements together to extract a global signal. Further, we find that the effectiveness of temporal synchrony as a cue for segmentation is modulated by the spatial proximity of signal elements. Finally, we demonstrate that following a synchronous event, elements are perceptually bound together for an average duration of 200 ms.
Publisher: Springer Science and Business Media LLC
Date: 26-03-2019
DOI: 10.1038/S41598-018-37888-7
Abstract: Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception.
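The precision-weighted (maximum-likelihood) summation benchmark against which this abstract compares human performance has a simple closed form; the following is an illustrative sketch of that standard model, not the paper's analysis:

```python
def mle_prediction(sigma_a: float, sigma_v: float):
    """Precision-weighted cue combination (the standard MLE benchmark).

    Each cue is weighted in proportion to its reliability (inverse
    variance), and the combined estimate has lower variance than
    either cue alone: 1/sigma_av^2 = 1/sigma_a^2 + 1/sigma_v^2.
    Returns (auditory weight, visual weight, predicted bimodal sigma).
    """
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)  # auditory weight
    w_v = 1.0 - w_a                               # visual weight
    sigma_av = (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5
    return w_a, w_v, sigma_av

# Equally reliable cues are weighted equally, and the combined
# noise drops by a factor of sqrt(2) relative to either cue alone.
```

The paper's finding is that measured bimodal precision fell short of this sigma_av prediction, which is why the authors argue precision-weighted summation is not a general property of cross-modal perception.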
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-2009
DOI: 10.1167/9.13.1
Publisher: Elsevier BV
Date: 08-2007
Publisher: Cold Spring Harbor Laboratory
Date: 24-02-2020
DOI: 10.1101/2020.02.20.958900
Abstract: One of the seminal findings of cognitive neuroscience is that the power of alpha-band (∼10 Hz) brain waves, in occipital regions, increases when people close their eyes. This has encouraged the view that alpha oscillations are a default dynamic, to which the visual brain returns in the absence of input. Accordingly, we might be unable to increase the power of alpha oscillations when the eyes are closed, above the level that would usually ensue. Here we report counter evidence. We used electroencephalography (EEG) to record brain activity when people had their eyes open and closed, before and after they had adapted to radial motion. The increase in the power of alpha oscillations when people closed their eyes was enhanced by adaptation to a broad range of radial motion speeds. This effect was greatest for 10 Hz motion, but robust for other frequencies, and specifically for 7.5 Hz. This last observation is important, as it rules against an ongoing entrainment of activity, at the adaptation frequency, as an explanation for our results. Instead, our data show that visual processes remain active when people close their eyes, and these can be modulated by adaptation to increase the power of alpha oscillations in occipital brain regions.
Publisher: The Royal Society
Date: 04-08-2021
Abstract: Humans experience levels of confidence in perceptual decisions that tend to scale with the precision of their judgements but not always. Sometimes precision can be held constant while confidence changes—leading researchers to assume precision and confidence are shaped by different types of information (e.g. perceptual and decisional). To assess this, we examined how visual adaptation to oriented inputs changes tilt perception, perceptual sensitivity and confidence. Some adaptors had a greater detrimental impact on measures of confidence than on precision. We could account for this using an observer model, where precision and confidence rely on different magnitudes of sensory information. These data show that differences in perceptual sensitivity and confidence can therefore emerge, not because these factors rely on different types of information, but because they rely on different magnitudes of sensory information.
Publisher: Elsevier BV
Date: 11-2005
DOI: 10.1016/J.VISRES.2005.04.020
Abstract: When a moving border defined by small changes in luminance (or by differences in colour) is placed in close proximity to moving borders defined by large changes in luminance, the low contrast border can appear to jitter. Previously, the existence and characteristics of this phenomenon were established using subjective reports. Here, we show that spatial judgments become more difficult in the presence of illusory jitter, presumably because of the positional uncertainty that is induced. We also explore the influence of the distance between the different types of moving border. We find that this manipulation influences the salience and amplitude, but not the perceived rate, of illusory jitter. Finally, we show that illusory jitter remains when the different types of moving border are presented to different eyes. These observations suggest that this phenomenon arises at the cortical level and are consistent with our earlier proposal: that illusory jitter can occur because the visual system periodically resolves a spatial conflict that arises when a rigid moving object contains different apparent speeds.
Publisher: Frontiers Media SA
Date: 2011
Publisher: Cold Spring Harbor Laboratory
Date: 20-02-2020
DOI: 10.1101/2020.02.19.951566
Abstract: Prediction is considered a core function of the human visual brain, but relating this suggestion to real life is problematic, as findings regarding the neural correlates of prediction rely on abstracted experiments, not reminiscent of a typical visual diet. We addressed this by having people view videos of basketball, and asking them to predict jump shot outcomes while we recorded eye movements and brain activity. We used the brain’s understanding of physics to manipulate predictive success, by inverting footage. People had enhanced alpha-band activity in occipital brain regions when watching upright videos, and this predicted both an increase in predictive success and enhanced ball tracking. Alpha-band activity in visual brain regions has been linked to inhibition, so we regard our results as evidence that inhibition of task irrelevant information is a core function of predictive processes in the visual brain, enacted as people complete visual tasks typical of daily life.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-2008
DOI: 10.1167/8.10.3
Publisher: Optica Publishing Group
Date: 09-2001
Abstract: We examined the effect of changing the composition of the carrier on the perception of motion in a drifting contrast envelope. Human observers were required to discriminate the direction of motion of contrast modulations of an underlying carrier as a function of temporal frequency and scaled (carrier) contrast. The carriers were modulations of both color and luminance, defined within a cardinal color space. Random-noise carriers had either binary luminance profiles or flat (gray-scale, white) or 1/f (pink) spectral power functions. Independent variables investigated were the envelope spatial frequency and temporal-drift frequency and the fundamental spatial frequency, color, and temporal-update frequency of the carrier. The results show that observers were able to discriminate correctly the direction of envelope motion for binary-noise carriers at both high (16 Hz) and low (2 Hz) temporal-drift frequencies. Changing the carrier format from binary noise to a flat (gray-scale) or 1/f amplitude profile reduced discrimination performance slightly but only in the high-temporal-frequency condition. Manipulation of the fundamental frequency of the carrier elicited no change in performance at the low temporal frequencies but produced ambiguous or reversed motion at the higher temporal frequencies as soon as the fundamental frequency was higher than the envelope modulation frequency. We found that envelope motion detection was sensitive to the structure of the carrier.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 04-2011
DOI: 10.1167/11.4.1
Abstract: It has long been known that an outward mask is much more disruptive than an inward mask in crowding (H. Bouma, 1973). We show that the locus of attention strongly affects this inward-outward anisotropy, removing it in some conditions and reversing it in others. In a 2AFC paradigm, subjects identified whether a high-contrast Gabor target of a given orientation was presented left or right of fixation. When a fixed eccentricity (8°) was used, the outward plaid mask produced much stronger crowding than the inward mask. When 7°, 8°, and 9° eccentricities were interleaved within the same run, diffusing attention, the inward and outward masks produced the same amount of crowding for all three eccentricities. When target identification was contingent on a foveal cue, biasing attention inward, the inward mask produced stronger crowding. Finally, a new contrast-detection paradigm was used to demonstrate that attention is generally mislocalized outward of the target, which may explain the commonly observed anisotropy in crowding. Our results suggest that spatial attention is intimately involved in the mechanism of crowding.
Publisher: Elsevier BV
Date: 05-2005
DOI: 10.1016/J.VISRES.2004.11.014
Abstract: It has been proposed that there is a perceptual compensation for the difference between the speeds of light and sound. We examined this possibility using a range of auditory-visual tasks, in which performance depends on the relative timing of auditory and visual information, and manipulated viewing distance to test for perceptual compensation. We explored auditory-visual integration, cross modal causal attributions, and auditory-visual temporal order judgments. We observed timing shifts with viewing distance following loudspeaker, but not headphone, presentations. We were unable to find reliable evidence of perceptual compensation. Our findings suggest that auditory and visual signals of an event that reach an observer at the same point in time tend to become perceptually bound, even when the sources of those signals could not have occurred together.
Publisher: Springer Science and Business Media LLC
Date: 09-08-2021
DOI: 10.1038/S41598-021-95295-X
Abstract: Prediction is a core function of the human visual system. Contemporary research suggests the brain builds predictive internal models of the world to facilitate interactions with our dynamic environment. Here, we wanted to examine the behavioural and neurological consequences of disrupting a core property of people's internal models, using naturalistic stimuli. We had people view videos of basketball and asked them to track the moving ball and predict jump shot outcomes, all while we recorded eye movements and brain activity. To disrupt people's predictive internal models, we inverted footage on half the trials, so dynamics were inconsistent with how movements should be shaped by gravity. When viewing upright videos people were better at predicting shot outcomes, at tracking the ball position, and they had enhanced alpha-band oscillatory activity in occipital brain regions. The advantage for predicting upright shot outcomes scaled with improvements in ball tracking and occipital alpha-band activity. Occipital alpha-band activity has been linked to selective attention and spatially-mapped inhibitions of visual brain activity. We propose that when people have a more accurate predictive model of the environment, they can more easily parse what is relevant, allowing them to better target irrelevant positions for suppression—resulting in both better predictive performance and in neural markers of inhibited information processing.
Publisher: Elsevier BV
Date: 08-2016
DOI: 10.1016/J.VISRES.2016.04.005
Abstract: Ballistic eye movements, or saccades, present a major challenge to the visual system. They generate a rapid blur of movement across the surface of the retinae that is rarely consciously seen, as awareness of input is suppressed around the time of a saccade. Saccades are also associated with a number of perceptual distortions. Here we are primarily interested in a saccade-induced illusory reversal of apparent temporal order. We examine the apparent order of transient targets presented around the time of saccades. In agreement with previous reports, we find evidence for an illusory reversal of apparent temporal order when the second of two targets is presented during a saccade - but this is only apparent for some observers. This contrasts with the apparent salience of targets presented during a saccade, which is suppressed for all observers. Our data suggest that separable processes might underlie saccadic suppressions of salience and saccade-induced reversals of apparent order. We suggest the latter arises when neural transients, normally used for timing judgments, are suppressed due to a saccade - but that this is an insufficient pre-condition. We therefore make the further suggestion that the loss of a neural transient must be coupled with a specific inferential strategy, whereby some people assume that when they lack a clear impression of event timing, that event must have happened less recently than alternate events for which they have a clear impression of timing.
Publisher: Elsevier BV
Date: 03-2006
DOI: 10.1016/J.CUB.2006.01.032
Abstract: A fundamental question about the perception of time is whether the neural mechanisms underlying temporal judgements are universal and centralized in the brain or modality specific and distributed. Time perception has traditionally been thought to be entirely dissociated from spatial vision. Here we show that the apparent duration of a dynamic stimulus can be manipulated in a local region of visual space by adapting to oscillatory motion or flicker. This implicates spatially localized temporal mechanisms in duration perception. We do not see concomitant changes in the time of onset or offset of the test patterns, demonstrating a direct local effect on duration perception rather than an indirect effect on the time course of neural processing. The effects of adaptation on duration perception can also be dissociated from motion or flicker perception per se. Although 20 Hz adaptation reduces both the apparent temporal frequency and duration of a 10 Hz test stimulus, 5 Hz adaptation increases apparent temporal frequency but has little effect on duration perception. We conclude that there is a peripheral, spatially localized, essentially visual component involved in sensing the duration of visual events.
Publisher: Informa UK Limited
Date: 10-2005
Location: United Kingdom of Great Britain and Northern Ireland
Location: United Kingdom of Great Britain and Northern Ireland
Location: United Kingdom of Great Britain and Northern Ireland
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 06-2009
End Date: 12-2013
Amount: $160,000.00
Funder: Australian Research Council
Start Date: 04-2006
End Date: 12-2009
Amount: $320,000.00
Funder: Australian Research Council
Start Date: 04-2020
End Date: 12-2024
Amount: $365,000.00
Funder: Australian Research Council
Start Date: 02-2018
End Date: 11-2021
Amount: $199,412.00
Funder: Australian Research Council