ORCID Profile
0000-0003-1027-6222
Current Organisation
University of Queensland
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Sensory Processes, Perception and Performance | Sensory Systems | Biological Psychology (Neuropsychology, Psychopharmacology, Physiological Psychology)
Expanding Knowledge in Psychology and Cognitive Sciences | Behavioural and cognitive sciences | Hearing, vision, speech and their disorders
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 05-2009
DOI: 10.1167/9.5.4
Publisher: Springer Science and Business Media LLC
Date: 09-05-2019
DOI: 10.1038/s41598-019-43170-1
Abstract: Perceptual judgements are, by nature, a product both of sensation and the cognitive processes responsible for interpreting and reporting subjective experiences. Changed perceptual judgements may thus result from changes in how the world appears (perception), or subsequent interpretation (judgement). This ambiguity has led to persistent debates about how to interpret changes in decision-making, and if higher-order cognitions can change how the world looks, or sounds, or feels. Here we introduce an approach that can help resolve these ambiguities. In three motion-direction experiments, we measured perceptual judgements and subjective confidence. We show that each measure is sensitive to sensory information and can index sensory adaptation. Each measure is also sensitive to decision biases, but response bias impacts the central tendency of decision and confidence distributions differently. Our findings show that subjective confidence, when measured in addition to perceptual decisions, can supply important diagnostic information about the cause of aftereffects.
Publisher: Springer Science and Business Media LLC
Date: 06-07-2020
Publisher: Society for Neuroscience
Date: 04-06-2008
Publisher: Elsevier BV
Date: 08-2003
DOI: 10.1016/S0042-6989(03)00281-5
Abstract: The tendency for briefly flashed stimuli to appear to lag behind the spatial position of physically aligned moving stimuli is known as the flash-lag effect. Possibly the simplest explanation for this phenomenon is that transient stimuli are processed more slowly than moving stimuli. We tested this proposal using a task based upon the simultaneous tilt illusion. When an oriented stimulus is surrounded by another oriented stimulus, the inner stimulus can appear to be rotated away from the orientation of the surround. By flashing central static sinewave gratings at specific phases of an annular grating's rotation cycle, we were able to determine the temporal dependence of the tilt illusion. Our results suggest a small, approximately 20 ms, processing advantage for the rotating stimulus relative to the flashed stimulus. Such a small advantage, if due to differential latencies, is insufficient to account for the flash-lag effect.
Publisher: Elsevier BV
Date: 12-2015
DOI: 10.1016/j.cogpsych.2015.10.002
Abstract: Observers change their audio-visual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated. Three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second explanation suggests that decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encode relative times, and that exposure to a repeated timing relationship alters the balance of responses in this population. Here, we simplified each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting patterns of predictions about how simultaneity judgements should vary across four adaptation conditions: no adaptation, synchronous adaptation, and auditory leading/lagging adaptation. We tested model predictions by fitting data from all four conditions simultaneously, in order to assess which model/explanation best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported the fewest instances of synchrony.
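The model-fitting approach this abstract describes can be illustrated with a minimal sketch: a single-condition, three-parameter observer model of simultaneity judgements fitted by maximum likelihood. The function names, parameter values, and synthetic data below are illustrative assumptions, not taken from the paper (whose models each use six parameters fitted jointly across four adaptation conditions).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def p_simultaneous(soa, pss, sigma, criterion):
    """Probability of a 'simultaneous' response: the perceived offset
    (soa - pss, blurred by latency noise sigma) falls within +/- criterion."""
    return (norm.cdf((criterion - (soa - pss)) / sigma)
            - norm.cdf((-criterion - (soa - pss)) / sigma))

def neg_log_likelihood(params, soa, n_simul, n_total):
    pss, sigma, criterion = params
    p = np.clip(p_simultaneous(soa, sigma=sigma, pss=pss, criterion=criterion),
                1e-9, 1 - 1e-9)
    return -np.sum(n_simul * np.log(p) + (n_total - n_simul) * np.log(1 - p))

# Synthetic data: audio-visual SOAs (ms) and counts of 'simultaneous' responses
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
n_total = np.full(soa.shape, 40)
rng = np.random.default_rng(0)
true_p = p_simultaneous(soa, pss=30.0, sigma=80.0, criterion=120.0)
n_simul = rng.binomial(n_total, true_p)

fit = minimize(neg_log_likelihood, x0=[0.0, 100.0, 100.0],
               args=(soa, n_simul, n_total), method="Nelder-Mead")
pss_hat, sigma_hat, criterion_hat = fit.x
print(f"PSS={pss_hat:.1f} ms, sigma={sigma_hat:.1f} ms, criterion={criterion_hat:.1f} ms")
```

Comparing a latency-shift account against a criterion-change account then amounts to fitting all adaptation conditions jointly while letting only the PSS, or only the criterion, vary between conditions, and comparing the quality of the resulting fits.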
Publisher: Elsevier BV
Date: 05-2022
DOI: 10.1016/j.cognition.2021.105012
Abstract: The brain-time account posits that the physical timing of sensory-evoked neural activity determines the perceived timing of corresponding sensory events. A canonical model formalises this account for tasks such as simultaneity and order judgements: Signals arrive at a decision centre in an order, and at a temporal offset, shaped by neural propagation times. This model assumes that the noise affecting people's temporal judgements is primarily neural-latency noise, i.e. variation in propagation times across trials, but this assumption has received little scrutiny. Here, we recorded EEG alongside simultaneity judgements from 50 participants in response to combinations of visual, auditory and tactile stimuli. Bootstrapping of ERP components was used to estimate neural-latency noise, and simultaneity judgements were modelled to estimate the precision of timing judgements. We obtained the predicted correlation between neural and behavioural measures of latency noise, supporting a fundamental feature of the canonical model of perceived timing.
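The correlation test described in this abstract can be sketched with simulated data: bootstrap resampling of single-trial latencies yields a per-participant estimate of latency noise, which is then correlated with behavioural timing precision across participants. The numbers and the simple pipeline below are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_trials, n_boot = 50, 200, 500

# Simulate: each participant has a latent latency-noise SD; behavioural
# timing imprecision is assumed to scale with it (plus independent noise).
latency_sd = rng.uniform(10, 60, n_participants)            # ms
behav_sd = latency_sd * 1.5 + rng.normal(0, 10, n_participants)

boot_sd = np.empty(n_participants)
for i in range(n_participants):
    trials = rng.normal(150, latency_sd[i], n_trials)        # single-trial latencies
    # Bootstrap the mean latency; the spread of bootstrap means indexes noise
    means = np.array([rng.choice(trials, n_trials, replace=True).mean()
                      for _ in range(n_boot)])
    boot_sd[i] = means.std(ddof=1) * np.sqrt(n_trials)       # rescale to trial SD

r = np.corrcoef(boot_sd, behav_sd)[0, 1]
print(f"neural-behavioural correlation r = {r:.2f}")
```

Under the canonical model's assumption that latency noise dominates behavioural imprecision, the neural and behavioural noise estimates should correlate positively, which is the prediction the paper reports confirming.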
Publisher: American Psychological Association (APA)
Date: 2016
DOI: 10.1037/xhp0000179
Abstract: Humans intuitively evaluate their decisions by forming different levels of confidence. Despite being highly correlated, decisional confidence and sensitivity can be differentiated. The computational processes underlying this remain unknown. Here we find that, for visual judgments concerning global direction, signal range has a greater impact on confidence than it does sensitivity. We equated sensitivity for stimuli containing different degrees of directional variability. This failed, however, to equate confidence: participants were less confident when judging more variable signals despite constant sensitivity. When stimuli were instead calibrated to equate confidence, participants were more sensitive when judging more variable signals. Directional range had no impact on an unrelated judgment of brightness, helping to establish that these results cannot be attributed to a simple decisional confound. Our complementary results show that directional sensitivity and decisional confidence rely on independent transformations of sensory input. We propose that confidence will generally be shaped by the range of differently tuned neural mechanisms responsive to input during evidence accumulation, with this having a lesser impact on sensitivity.
Publisher: The Royal Society
Date: 07-09-2012
Abstract: Reliable estimates of time are essential for initiating interceptive actions at the right moment. However, our sense of time is surprisingly fallible. For instance, time perception can be distorted by prolonged exposure (adaptation) to movement. Here, we make use of this to determine if time perception and anticipatory actions rely on the same or on different temporal metrics. Consistent with previous reports, we find that the apparent duration of movement is mitigated by adaptation to more rapid motion, but is unchanged by adaptation to slower movement. By contrast, we find symmetrical effects of motion-adaptation on the timing of anticipatory interceptive actions, which are paralleled by changes in perceived speed for the adapted direction of motion. Our data thus reveal that anticipatory actions and perceived duration rely on different temporal metrics.
Publisher: American Psychological Association (APA)
Date: 06-2013
DOI: 10.1037/a0032240
Abstract: One of the oldest known visual aftereffects is the shape aftereffect, wherein looking at a particular shape can make subsequent shapes seem distorted in the opposite direction. After viewing a narrow ellipse, for example, a perfect circle can look like a broad ellipse. It is thought that shape aftereffects are determined by the dimensions of successive retinal images. However, perceived shape is invariant for large retinal image changes resulting from different viewing angles. Current understanding suggests that shape aftereffects should not be impacted by the operations responsible for this viewpoint invariance. By viewing adaptors from an angle, with subsequent frontoparallel tests, we establish that shape aftereffects are not solely determined by the dimensions of successive retinal images. Moreover, by comparing performance with and without stereo surface slant cues, we show that shape aftereffects reflect a weighted function of retinal image shape and surface slant information, a hallmark of shape constancy operations. Thus our data establish that shape aftereffects can be influenced by perceived shape, as determined by constancy operations, and must therefore involve higher-level neural substrates than previously thought.
Publisher: Springer Science and Business Media LLC
Date: 09-2003
DOI: 10.1038/nature01955
Publisher: American Psychological Association (APA)
Date: 2017
DOI: 10.1037/xhp0000292
Abstract: Adaptation to different visual properties can produce distinct patterns of perceptual aftereffect. Some, such as those following adaptation to color, seem to arise from recalibrative processes. These are associated with a reappraisal of which physical input constitutes a normative value in the environment: in this case, what appears "colorless," and what "colorful." Recalibrative aftereffects can arise from coding schemes in which inputs are referenced against malleable norm values. Other aftereffects seem to arise from contrastive processes. These exaggerate differences between the adaptor and other inputs without changing the adaptor's appearance. There has been conjecture over which process best describes adaptation-induced distortions of spatial vision, such as of apparent shape or facial identity. In three experiments, we determined whether recalibrative or contrastive processes underlie the shape aspect ratio aftereffect. We found that adapting to a moderately elongated shape compressed the appearance of narrower shapes and further elongated the appearance of more-elongated shapes (Experiment 1). Adaptation did not change the perceived aspect ratio of the adaptor itself (Experiment 2), and adapting to a circle induced similar bidirectional aftereffects on shapes narrower or wider than circular (Experiment 3). Results could not be explained by adaptation to retinotopically local edge orientation or single linear dimensions of shapes. We conclude that aspect ratio aftereffects are determined by contrastive processes that can exaggerate differences between successive inputs, inconsistent with a norm-referenced representation of aspect ratio. Adaptation might enhance the salience of novel stimuli rather than recalibrate one's sense of what constitutes a "normal" shape.
Publisher: Elsevier BV
Date: 03-2003
Publisher: SAGE Publications
Date: 2013
DOI: 10.1068/p7420
Abstract: Human face recognition is disrupted by the reversal of luminance contrast polarity (i.e. photo negatives; see Galper, 1970, Psychonomic Science, 19, 207–208; Johnston et al., 1992, Perception, 21, 365–375), while recognition of other objects is less impacted (Nederhouser et al., 2007, Vision Research, 47, 2134–2142; Subramaniam & Biederman, 1997, Investigative Ophthalmology & Visual Science, 38, 998). This suggests that correct patterns of luminance contrast are important for facial coding. Here we investigate this further by minimising luminance contrast. We contrast people's ability to categorise cars and faces when images vary in luminance and when images are altered to predominantly contain differences in colour (equiluminance). Eliminating luminance contrast had a greater adverse impact on facial classifications relative to car categorisations. This was true even though precautions were taken to equate visibility, and despite equal levels of performance when images contained luminance contrast. These results were not due to images containing markedly different spectra, as the effect persisted for facial images altered to match car images in this regard, and performance in both tasks dropped off proportionally with increasing levels of image blur. Finally, consistent with previous observations, we show that facial coding is not only adversely impacted at equiluminance but becomes even worse when the polarity of luminance contrast is reversed. Our data show that the correct pattern of luminance contrast is very important for facial coding. We suggest that this is related to the role of luminance contrast in signalling 3-D shape from shading.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-12-2016
DOI: 10.1167/16.15.9
Abstract: Binocular masking is a particularly interesting means of suppressing human visual awareness, as images rendered subjectively "invisible" via binocular masking nonetheless excite robust activity in human visual cortex. Recently, binocular masking has been leveraged to show that people can be trained to better interact with inputs that, subjectively, remain invisible. Here we ask what is learned in such circumstances. Do people become more adept at using weak encoded signals to guide hand movements, or is signal encoding enhanced, resulting in heightened objective sensitivity? To assess these possibilities, we had people train on five consecutive days, to reach toward and point at a target presented in one of three masked locations. Target intensity was set to a fraction of a detection threshold determined pretraining for each participant. We found that people became better at selecting the target location with training, even when insisting they could not see the target. More important, posttraining we found objective thresholds had improved by an amount that was commensurate with an improvement in subjective visibility. Our data therefore show that training to coordinate with subjectively invisible targets can result in enhanced encodings of binocularly masked images.
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/a0024235
Abstract: Cross-modal temporal recalibration describes a shift in the point of subjective simultaneity (PSS) between two events following repeated exposure to asynchronous cross-modal inputs (the adaptors). Previous research suggested that audiovisual recalibration is insensitive to the spatial relationship between the adaptors. Here we show that audiovisual recalibration can be driven by cross-modal spatial grouping. Twelve participants adapted to alternating trains of lights and tones. Spatial position was manipulated, with alternating sequences of a light then a tone, or a tone then a light, presented on either side of fixation (e.g., left tone, left light, right tone, right light, etc.). As the events were evenly spaced in time, in the absence of spatial-based grouping it would be unclear if tones were leading or lagging lights. However, any grouping of spatially colocalized cross-modal events would result in an unambiguous sense of temporal order. We found that adapting to these stimuli caused the PSS between subsequent lights and tones to shift toward the temporal relationship implied by spatial-based grouping. These data therefore show that temporal recalibration is facilitated by spatial grouping.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-2009
DOI: 10.1167/9.13.1
Publisher: Cambridge University Press
Date: 25-03-2010
Publisher: Elsevier BV
Date: 08-2007
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/a0032534
Abstract: Deciding precisely when we have acted is challenging, as actions involve a train of neural events spread across both space and time. Repeated delays between actions and consequent events can result in a shift, such that immediate feedback can seem to precede the causative act. Here we examined which neurocognitive representations are affected during such sensorimotor temporal recalibration, by testing if the effect generalizes across limbs and whether it might reflect altered decision criteria for temporal judgments. Hand or foot adaptation phases were interspersed with simultaneity judgments about actions involving the same or opposite limb. Shifts in the distribution of participants' simultaneity responses were quantified using a detection-theoretic model, where a shift of both boundaries together gives a stronger indication that the effect is not simply a result of decision bias. By demonstrating that temporal recalibration occurs in the foot as well as the hand, we confirmed that it is a robust motor phenomenon: Both low and high boundaries shifted reliably in the same-limb conditions. However, in cross-limb conditions only the high boundary shifted reliably. These two patterns are interpreted to reflect a genuine change in how the time of action is represented, and a timing criterion shift, respectively.
Publisher: Elsevier BV
Date: 08-2013
DOI: 10.1016/j.visres.2013.05.009
Abstract: Many visual processes integrate information over protracted periods, a process known as temporal integration. One consequence of this is that objects that cast images that move across the retinal surfaces can generate blurred form signals, similar to the motion blur that can be captured in photographs taken with slow shutter speeds. Subjectively, retinal motion blur signals are suppressed from awareness, such that moving objects seem sharply defined. One suggestion has been that this subjective impression is due to humans not being able to distinguish between focussed and blurred moving objects. Contrary to this suggestion, here we report a novel illusion, and consequent experiments, that implicate a suppressive mechanism. We find that the apparent shape of circular moving objects can be distorted when their rear edges lag leading edges by ∼60 ms. Moreover, we find that sensitivity for detecting blur, and for discriminating between blur intensities, is uniformly worse for physical blurs added behind moving objects, as opposed to in front. Also, it was easier to differentiate between slight and slightly greater physical blurs than it was to differentiate between slight blur and the absence of blur, both behind and in front of moving edges. These 'dipper' functions suggest that blur signals must reach a threshold intensity before they can be detected, and that the relevant threshold is effectively elevated for blur signals trailing behind moving contours. In combination, these data suggest moving objects look sharply defined, at least in part, because of a functional adaptation that actively suppresses motion blur signals from awareness.
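The 'dipper' functions this abstract mentions are a classic psychophysical signature: discriminating a weak pedestal from a slightly stronger one can be easier than detecting the weak signal itself. As an illustration only (the transducer form and parameter values below are generic textbook assumptions, not the paper's model), an accelerating-then-compressive transducer reproduces the dip:

```python
import numpy as np
from scipy.optimize import brentq

def response(c, p=2.4, q=2.0, z=1.0):
    """Nonlinear transducer: accelerating at low input, compressive at high."""
    return c ** p / (c ** q + z)

def increment_threshold(pedestal, delta_r=0.05):
    """Smallest increment dc that raises the internal response by delta_r."""
    target = response(pedestal) + delta_r
    return brentq(lambda dc: response(pedestal + dc) - target, 1e-9, 20.0)

pedestals = [0.0, 0.3, 1.0, 3.0, 8.0]
thresholds = [increment_threshold(p0) for p0 in pedestals]
# Thresholds first dip below the no-pedestal detection threshold, then rise
print([round(t, 3) for t in thresholds])
```

The same logic underlies the abstract's inference: if thresholds for blur trailing moving contours sit on an elevated branch of this curve, blur signals are being suppressed rather than merely indistinguishable.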
Publisher: Elsevier BV
Date: 11-2005
DOI: 10.1016/j.visres.2005.04.020
Abstract: When a moving border defined by small changes in luminance (or by differences in colour) is placed in close proximity to moving borders defined by large changes in luminance, the low contrast border can appear to jitter. Previously, the existence and characteristics of this phenomenon were established using subjective reports. Here, we show that spatial judgments become more difficult in the presence of illusory jitter, presumably because of the positional uncertainty that is induced. We also explore the influence of the distance between the different types of moving border. We find that this manipulation influences the salience and amplitude, but not the perceived rate, of illusory jitter. Finally, we show that illusory jitter remains when the different types of moving border are presented to different eyes. These observations suggest that this phenomenon arises at the cortical level and are consistent with our earlier proposal: that illusory jitter can occur because the visual system periodically resolves a spatial conflict that arises when a rigid moving object contains different apparent speeds.
Publisher: Cold Spring Harbor Laboratory
Date: 20-02-2020
DOI: 10.1101/2020.02.19.951566
Abstract: Prediction is considered a core function of the human visual brain, but relating this suggestion to real life is problematic, as findings regarding the neural correlates of prediction rely on abstracted experiments, not reminiscent of a typical visual diet. We addressed this by having people view videos of basketball, and asking them to predict jump shot outcomes while we recorded eye movements and brain activity. We used the brain’s understanding of physics to manipulate predictive success, by inverting footage. People had enhanced alpha-band activity in occipital brain regions when watching upright videos, and this predicted both an increase in predictive success and enhanced ball tracking. Alpha-band activity in visual brain regions has been linked to inhibition, so we regard our results as evidence that inhibition of task irrelevant information is a core function of predictive processes in the visual brain, enacted as people complete visual tasks typical of daily life.
Publisher: Elsevier BV
Date: 07-2012
DOI: 10.1016/j.visres.2012.04.020
Abstract: After prolonged exposure to a female face, faces that had previously seemed androgynous are more likely to be judged as male. Similarly, after prolonged exposure to a face with expanded features, faces that had previously seemed normal are more likely to be judged as having contracted features. These facial aftereffects have both been attributed to the impact of adaptation upon a norm-based opponent code, akin to low-level analyses of colour. While a good deal of evidence is consistent with this, some recent data is contradictory, motivating a more rigorous test. In behaviourally matched tasks we compared the characteristics of aftereffects generated by adapting to colour, to expanded or contracted faces, and to male or female faces. In our experiments opponent coding predicted that the appearance of the adapting image should change and that adaptation should induce symmetrical shifts of two category boundaries. This combination of predictions was firmly supported for colour adaptation, somewhat supported for facial distortion aftereffects, but not supported for facial gender aftereffects. Interestingly, the two face aftereffects we tested generated discrepant patterns of response shifts. Our data suggest that superficially similar aftereffects can ensue from mechanisms that differ qualitatively, and therefore that not all high-level categorical face aftereffects can be attributed to a common coding strategy.
Publisher: SAGE Publications
Date: 09-01-2013
Abstract: Many activities, such as driving or playing sports, require simultaneous monitoring of multiple, often moving, objects. Such situations tap people’s ability to attend selected objects without tracking them with their eyes—this is known as attentional tracking. It has been established that attentional tracking can be affected by the physical speed of a moving target. In the experiments reported here, we showed that this effect is primarily due to apparent speeds, as opposed to physical speeds. We used sensory adaptation—in this case, prolonged exposure to adapting stimuli moving faster or slower than standard test stimuli—to modulate perceived speed. We found performance decrements and increments for apparently sped and slowed test stimuli when participants attempted attentional tracking. Our data suggest that both perceived speed and the acuity of attention for moving objects reflect a ratio of responses in low-pass and band-pass temporal-frequency channels in human vision.
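The closing suggestion of this abstract, that perceived speed and tracking acuity reflect a ratio of responses in low-pass and band-pass temporal-frequency channels, can be sketched numerically. The filter shapes and corner frequencies below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def lowpass(tf, corner=5.0):
    """Low-pass temporal-frequency channel (illustrative shape)."""
    return 1.0 / np.sqrt(1.0 + (tf / corner) ** 2)

def bandpass(tf, peak=8.0, bw=1.2):
    """Band-pass channel: log-Gaussian tuning around its peak frequency."""
    return np.exp(-(np.log(tf / peak)) ** 2 / (2.0 * bw ** 2))

def speed_code(tf):
    """Putative speed code: ratio of band-pass to low-pass responses."""
    return bandpass(tf) / lowpass(tf)

tfs = np.array([1.0, 2.0, 4.0, 8.0])   # temporal frequencies (Hz)
codes = speed_code(tfs)
print(np.round(codes, 3))
```

On this account, adaptation that reduces the gain of one channel shifts the ratio, and with it both perceived speed and the acuity of attentional tracking, which is the pattern the experiments report.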
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/a0035362
Abstract: Illusory motion reversals (IMRs) can happen when looking at a repetitive pattern of motion, such as a spinning wheel. To date these have been attributed to either a form of motion aftereffect seen while viewing a moving stimulus or to the visual system taking discrete perceptual snapshots of continuous input. Here we present evidence that we argue is inconsistent with both proposals. First, we show that IMRs are driven by the adaptation of nondirectional temporal frequency tuned cells, which is inconsistent with the motion aftereffect account. Then we establish that the optimal frequency for inducing IMRs differs for color and luminance defined movement. These data are problematic for any account based on a constant rate of discrete perceptual sampling. Instead, we suggest IMRs result from a perceptual rivalry involving discrepant signals from a feature tracking analysis of movement and motion-energy based analyses. We do not assume that feature tracking relies on a discrete sampling of input at a fixed rate, but rather that feature tracking can (mis)match features at any rate less than a stimulus driven maximal resolution. Consistent with this proposal, we show that the critical frequency for inducing IMRs is dictated by the duty cycle of salient features within a moving pattern, rather than by the temporal frequency of luminance changes.
Publisher: Cold Spring Harbor Laboratory
Date: 31-12-2018
DOI: 10.1101/499764
Abstract: Albert Michotte (1946/1963) introduced causality into the realm of experimental phenomenology. He disputed Hume's (1739/1978) claim that the impression of causality comes only from conscious inference. Since then, causality adaptation studies have suggested that the visual perception of causality undergoes sensory adaptation, akin to the retinally-specific aftereffects of motion or orientation adaptation. Here we present five experiments that, together, dispute a view of retinotopically-mapped neural populations dedicated to causality detection. We first point to key issues in previous studies of causality adaptation. Then we extend the basic causality adaptation paradigm to show that causality aftereffects occur in spatially global visual space. We directly compare causality aftereffects to the motion aftereffect to show important differences in their coordinate mapping. Our data point to a role for cognitive inferences as being an important aspect of causality aftereffects, despite causal impressions being tightly constrained by sensory perception.
Publisher: Elsevier BV
Date: 02-2013
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 29-07-2014
DOI: 10.1167/14.8.25
Abstract: Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments are presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.
Publisher: Frontiers Media SA
Date: 24-03-2016
Publisher: Public Library of Science (PLoS)
Date: 06-04-2011
Publisher: Springer Science and Business Media LLC
Date: 24-08-2021
DOI: 10.3758/s13414-021-02331-z
Abstract: Viewing static images depicting movement can result in a motion aftereffect: people tend to categorise direction signals as moving in the opposite direction relative to the implied motion in still photographs. This finding could indicate that inferred motion direction can penetrate sensory processing and change perception. Equally possible, however, is that inferred motion changes decision processes, but not perception. Here we test these two possibilities. Since both categorical decisions and subjective confidence are informed by sensory information, confidence can be informative about whether an aftereffect probably results from changes to perceptual or decision processes. We therefore used subjective confidence as an additional measure of the implied motion aftereffect. In Experiment 1 (implied motion), we find support for decision-level changes only, with no change in subjective confidence. In Experiment 2 (real motion), we find equal changes to decisions and confidence. Our results suggest the implied motion aftereffect produces a bias in decision-making, but leaves perceptual processing unchanged.
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/a0028129
Abstract: Grapheme-color synesthesia is an atypical condition in which individuals experience sensations of color when reading printed graphemes such as letters and digits. For some grapheme-color synesthetes, seeing a printed grapheme triggers a sensation of color, but hearing the name of a grapheme does not. This dissociation allowed us to compare the precision with which synesthetes are able to match their color experiences triggered by visible graphemes, with the precision of their matches for recalled colors based on the same graphemes spoken aloud. In six synesthetes, color matching for printed graphemes was equally variable relative to recalled experiences. In a control experiment, synesthetes and age-matched controls either matched the color of a circular patch while it was visible on a screen, or they judged its color from memory after it had disappeared. Both synesthetes and controls were more variable when matching from memory, and the variance of synesthetes' recalled color judgments matched that associated with their synesthetic judgments for visible graphemes in the first experiment. Results suggest that synesthetic experiences of color triggered by achromatic graphemes are analogous to recollections of color.
Publisher: Public Library of Science (PLoS)
Date: 08-09-2014
Publisher: SAGE Publications
Date: 2014
DOI: 10.1068/p7648
Abstract: Diverse forms of perceptual rivalry are claimed to tap a common causal mechanism. One of the bases for this claim is that the reported dynamics of binocular rivalry and motion-induced blindness are similar on an individual basis (Carter & Pettigrew, 2003 Perception, 32, 295–305). We examined this relationship and found no evidence for a strong correlation. We therefore question the proposition that the dynamics of diverse forms of rivalry are driven by a common mechanism.
Publisher: Cold Spring Harbor Laboratory
Date: 13-09-2023
Publisher: The Royal Society
Date: 21-12-2005
Abstract: We examined whether the detection of audio–visual temporal synchrony is determined by a pre-attentive parallel process, or by an attentive serial process using a visual search paradigm. We found that detection of a visual target that changed in synchrony with an auditory stimulus was gradually impaired as the number of unsynchronized visual distractors increased (experiment 1), whereas synchrony discrimination of an attended target in a pre-cued location was unaffected by the presence of distractors (experiment 2). The effect of distractors cannot be ascribed to reduced target visibility nor can the increase in false alarm rates be predicted by a noisy parallel processing model. Reaction times for target detection increased linearly with number of distractors, with the slope being about twice as steep for target-absent trials as for target-present trials (experiment 3). Similar results were obtained regardless of whether the audio–visual stimulus consisted of visual flashes synchronized with amplitude-modulated pips, or of visual rotations synchronized with frequency-modulated up–down sweeps. All of the results indicate that audio–visual perceptual synchrony is judged by a serial process and are consistent with the suggestion that audio–visual temporal synchrony is detected by a ‘mid-level’ feature matching process.
Publisher: Elsevier BV
Date: 05-2017
DOI: 10.1016/J.VISRES.2016.02.004
Abstract: Visual analyses of movement are disproportionately reliant on luminance contrast, as opposed to colour differences. One consequence is that if a moving pattern is defined solely by changes in colour (is equiluminant), people can report having no sensation of movement, despite still being able to 'see' the pattern. This is called motion standstill. To date there have been no formal reports of foveal motion standstill. Here we investigate whether this is because the conditions necessary for inducing motion standstill are particular to peripheral vision and therefore absent at the fovea. We used pre-adaptation to luminance-defined motion to encourage motion standstill of equiluminant inputs (see Willis & Anderson, 1998). We found that this could be successful for both peripheral and foveal inputs. Our data thus show that the sensation of colour-defined movement can be similarly degraded by pre-adaptation to luminance-defined motion at both the fovea and in peripheral vision.
Publisher: Elsevier BV
Date: 04-2012
DOI: 10.1016/J.CORTEX.2012.03.002
Abstract: Grapheme-colour synaesthesia is an atypical condition characterized by the perception of colours when reading achromatic text. We investigated the level of colour processing responsible for these experiences. To do so, we tapped a central characteristic of colour perception. In different lighting conditions the same wavelength of light can prompt the perception of different colours. This helps humans recognize distinctive coloured objects despite changes in illumination. We wanted to see if synaesthetic colours were generated at a neural locus that was susceptible to colour constancy analyses. We used colour matching and naming tasks to examine interactions between simulated coloured illuminants and synaesthetic colours. Neither synaesthetic colour matching nor naming was impacted. This contrasted with non-synaesthetic control participants, who performed the colour-matching task with graphemes physically coloured to mimic synaesthesia. Our data suggest that synaesthetic colour signals are not generated at lower-levels of colour processing, but are introduced at higher levels of analysis and are therefore not impacted by the processes responsible for perceptual constancy.
Publisher: Elsevier BV
Date: 04-2001
DOI: 10.1016/S0960-9822(01)00156-7
Abstract: It has been demonstrated that subjects do not report changes in color and direction of motion as being co-incidental when they occur synchronously. Instead, for the changes to be reported as being synchronous, changes in direction of motion must precede changes in color. To explain this observation, some researchers have suggested that the neural processing of color and motion is asynchronous. This interpretation has been criticized on the basis that processing time may not correlate directly and invariantly with perceived time of occurrence. Here we examine this possibility by making use of the color-contingent motion aftereffect. By correlating color states disproportionately with two directions of motion, we produced and measured color-contingent motion aftereffects as a function of the range of physical correlations. The aftereffects observed are consistent with the perceptual correlation between color and motion being different from the physical correlation. These findings demonstrate asynchronous processing for different stimulus attributes, with color being processed more quickly than motion. This suggests that the time course of perceptual experience correlates directly with that of neural activity.
Publisher: American Psychological Association (APA)
Date: 2015
DOI: 10.1037/XHP0000034
Abstract: One of the primary functions of visual processing is to generate a spatial mapping of our immediate vicinity, in order to facilitate interaction. As yet it is unclear how this is achieved, but the process likely involves an accrual of information over time, a temporal integration of positional information (Eagleman & Sejnowski, 2000; Krekelberg et al., 2000). Temporal integration is a common computational process evident in diverse settings, such as electrical engineering (Bryson & Ho, 1975) and neural coding (Rao, Eagleman & Sejnowski, 2001; Usher & McClelland, 2001). In the latter context it is sometimes assumed that integration dynamics are immobile, and consequently that they can be diagnostic of a sensory system (Arnold & Lipp, 2011; Krauskopf & Mollon, 1971; Snowden & Braddick, 1991). Other data suggest that integration times can be flexible, varying in concert with the properties of a stimulus (Bair & Movshon, 2004) or environment (Ossmy et al., 2013). Our data provide behavioral support for malleable integration times. We examine a motion-induced illusion of perceived position linked to temporal integration, and use prolonged exposure to motion of different speeds (sensory adaptation) to modulate the dynamics of neural activity. Results show that perceived position is governed by a weighted average of positional estimates from multiple channels with distinct, fixed integration times. Postadaptation channel contributions are reweighted, resulting in coding that is optimized to the dynamics of the prevailing environment.
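The weighted-average account described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the authors' model: the channel time constants, weights, and the choice of a simple leaky integrator per channel are all assumptions.

```python
def leaky_integrate(positions, tau):
    """Leaky (exponential) integration of a stream of position samples;
    larger tau means a longer effective integration window (more lag)."""
    estimate = positions[0]
    for p in positions[1:]:
        estimate += (p - estimate) / tau
    return estimate

def perceived_position(positions, taus, weights):
    """Weighted average over channels with distinct, fixed integration
    times; adaptation would be modelled as a change in the weights."""
    return sum(w * leaky_integrate(positions, t)
               for w, t in zip(weights, taus))

# A target moving rightward at 1 unit per sample (hypothetical values).
track = list(range(10))
fast_weighted = perceived_position(track, taus=[2, 8], weights=[0.8, 0.2])
slow_weighted = perceived_position(track, taus=[2, 8], weights=[0.2, 0.8])
```

Reweighting toward the fast channel yields an estimate closer to the target's current position, while weighting the slow channel produces a laggier estimate, which is the sense in which reweighting can tune coding to the dynamics of the prevailing environment.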
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-08-2007
DOI: 10.1167/7.11.14
Publisher: Elsevier BV
Date: 05-2014
Publisher: SAGE Publications
Date: 29-11-2017
Publisher: SAGE Publications
Date: 2011
DOI: 10.1068/P6955
Abstract: Judgments of upright faces tend to be more rapid than judgments of inverted faces. This is consistent with encoding at different rates via discrepant mechanisms, or via a common mechanism that is more sensitive to upright input. However, to the best of our knowledge no previous study of facial coding speed has tried to equate sensitivity across the characteristics under investigation (eg emotional expression, facial gender, or facial orientation). Consequently we cannot tell whether different decision speeds result from mechanisms that accrue information at different rates, or because facial images can differ in the amount of information they make available. To address this, we examined temporal integration times, the times across which information is accrued toward a perceptual decision. We examined facial gender and emotional expressions. We first identified image pairs that could be differentiated on 80% of trials with protracted presentations (1 s). We then presented these images at a range of brief durations to determine how rapidly performance plateaued, which is indicative of integration time. For upright faces gender was associated with a protracted integration relative to expression judgments. This difference was eliminated by inversion, with both gender and expression judgments associated with a common, rapid, integration time. Overall, our data suggest that upright facial gender and expression are encoded via distinct processes and that inversion does not just result in impaired sensitivity. Rather, inversion caused gender judgments, which had been associated with a protracted integration, to become associated with a more rapid process.
Publisher: Cold Spring Harbor Laboratory
Date: 08-11-2022
DOI: 10.1101/2022.11.07.515537
Abstract: Signal-detection theory (SDT) is one of the most popular frameworks for analyzing data from studies of human behavior – including investigations of confidence. SDT-based analyses of confidence deliver both standard estimates of sensitivity (d’), and a second estimate based only on high-confidence decisions – meta d’. The extent to which meta d’ estimates fall short of d’ estimates is regarded as a measure of metacognitive inefficiency, quantifying the contamination of confidence by additional noise. These analyses rely on a key but questionable assumption – that repeated exposures to an input will evoke a normally-shaped distribution of perceptual experiences (the normality assumption). Here we show, via analyses inspired by an experiment and modelling, that when distributions of experiences do not conform with the normality assumption, meta d’ can be systematically underestimated relative to d’. Our data therefore highlight that SDT-based analyses of confidence do not provide a ground truth measure of human metacognitive inefficiency. Signal-detection theory is one of the most popular frameworks for analysing data from experiments of human behaviour – including investigations of confidence. The authors show that the results of these analyses cannot be regarded as ground truth. If a key assumption of the framework is inadvertently violated, analyses can encourage conceptually flawed conclusions.
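As a minimal sketch of the standard SDT sensitivity measure discussed above (not the authors' meta-d' analysis, which is considerably more involved), d' is the difference between the z-transformed hit and false-alarm rates, under the normality assumption the abstract questions. The example rates are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """SDT sensitivity under the normality assumption:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 80% hits, 20% false alarms.
sensitivity = d_prime(0.80, 0.20)
```

The paper's point is that a meta-d' computed under this same normality assumption can be systematically underestimated relative to d' when the underlying distributions of experiences are not in fact normal.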
Publisher: Public Library of Science (PLoS)
Date: 16-04-2010
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 25-05-2007
DOI: 10.1167/7.7.7
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 11-2009
DOI: 10.1167/9.12.1
Publisher: SAGE Publications
Date: 22-10-2016
Abstract: Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female and all other faces look more masculine as a consequence—a unidirectional effect. According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine—a bidirectional effect. Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than adaptors post adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.
Publisher: Cold Spring Harbor Laboratory
Date: 24-02-2020
DOI: 10.1101/2020.02.20.958900
Abstract: One of the seminal findings of cognitive neuroscience is that the power of alpha-band (∼10 Hz) brain waves, in occipital regions, increases when people close their eyes. This has encouraged the view that alpha oscillations are a default dynamic, to which the visual brain returns in the absence of input. Accordingly, we might be unable to increase the power of alpha oscillations when the eyes are closed, above the level that would usually ensue. Here we report counter evidence. We used electroencephalography (EEG) to record brain activity when people had their eyes open and closed, before and after they had adapted to radial motion. The increase in the power of alpha oscillations when people closed their eyes was enhanced by adaptation to a broad range of radial motion speeds. This effect was greatest for 10 Hz motion, but robust for other frequencies, and specifically for 7.5 Hz. This last observation is important, as it rules against an ongoing entrainment of activity, at the adaptation frequency, as an explanation for our results. Instead, our data show that visual processes remain active when people close their eyes, and these can be modulated by adaptation to increase the power of alpha oscillations in occipital brain regions.
Publisher: The Royal Society
Date: 04-08-2021
Abstract: Humans experience levels of confidence in perceptual decisions that tend to scale with the precision of their judgements, but not always. Sometimes precision can be held constant while confidence changes—leading researchers to assume precision and confidence are shaped by different types of information (e.g. perceptual and decisional). To assess this, we examined how visual adaptation to oriented inputs changes tilt perception, perceptual sensitivity and confidence. Some adaptors had a greater detrimental impact on measures of confidence than on precision. We could account for this using an observer model, where precision and confidence rely on different magnitudes of sensory information. These data show that differences in perceptual sensitivity and confidence can therefore emerge, not because these factors rely on different types of information, but because they rely on different magnitudes of sensory information.
Publisher: Elsevier BV
Date: 05-2005
DOI: 10.1016/J.VISRES.2004.11.014
Abstract: It has been proposed that there is a perceptual compensation for the difference between the speeds of light and sound. We examined this possibility using a range of auditory-visual tasks, in which performance depends on the relative timing of auditory and visual information, and manipulated viewing distance to test for perceptual compensation. We explored auditory-visual integration, cross modal causal attributions, and auditory-visual temporal order judgments. We observed timing shifts with viewing distance following loudspeaker, but not headphone, presentations. We were unable to find reliable evidence of perceptual compensation. Our findings suggest that auditory and visual signals of an event that reach an observer at the same point in time tend to become perceptually bound, even when the sources of those signals could not have occurred together.
Publisher: Elsevier BV
Date: 08-2016
DOI: 10.1016/J.VISRES.2016.04.005
Abstract: Ballistic eye movements, or saccades, present a major challenge to the visual system. They generate a rapid blur of movement across the surface of the retinae that is rarely consciously seen, as awareness of input is suppressed around the time of a saccade. Saccades are also associated with a number of perceptual distortions. Here we are primarily interested in a saccade-induced illusory reversal of apparent temporal order. We examine the apparent order of transient targets presented around the time of saccades. In agreement with previous reports, we find evidence for an illusory reversal of apparent temporal order when the second of two targets is presented during a saccade - but this is only apparent for some observers. This contrasts with the apparent salience of targets presented during a saccade, which is suppressed for all observers. Our data suggest that separable processes might underlie saccadic suppressions of salience and saccade-induced reversals of apparent order. We suggest the latter arises when neural transients, normally used for timing judgments, are suppressed due to a saccade - but that this is an insufficient pre-condition. We therefore make the further suggestion that the loss of a neural transient must be coupled with a specific inferential strategy, whereby some people assume that when they lack a clear impression of event timing, that event must have happened less recently than alternate events for which they have a clear impression of timing.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 02-2011
DOI: 10.1167/11.2.1
Abstract: The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing adults showed poor tracking performance overall.
Publisher: SAGE Publications
Date: 2017
Abstract: Cricket is one of the world’s most popular sports, followed by hundreds of millions of people. It can be dangerous, played with a hard ball flying at great velocities, and accidents have occasionally been fatal. Traditionally, cricket has been played during the day, using a dark red ball. Since the late 1970s, a shorter form of one-day cricket has been played both during the day and at night under floodlights. To overcome visibility issues, one-day cricket uses a white ball, and players wear coloured clothing. There is now a desire to play a traditional form of cricket during the day and at night, using a ‘pink’ ball while players wear white clothing. Concerns regarding visibility, and player and umpire safety, have been raised in this context. Here, we report that these concerns have a sound basis.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 26-01-2015
DOI: 10.1167/15.1.26
Abstract: Some data have been taken as evidence that after prolonged viewing, near-vertical orientations "normalize" to appear more vertical than they did previously. After almost a century of research, the existence of tilt normalization remains controversial. The most recent evidence for tilt normalization comes from data suggesting a measurable "perceptual drift" of near-vertical adaptors toward vertical, which can be nulled by a slight physical rotation away from vertical (Müller, Schillinger, Do, & Leopold, 2009). We argue that biases in estimates of perceptual stasis could, however, result from the anisotropic organization of orientation-selective neurons in V1, with vertically-selective cells being more narrowly tuned than obliquely-selective cells. We describe a neurophysiologically plausible model that predicts greater sensitivity to orientation displacements toward than away from vertical. We demonstrate the predicted asymmetric pattern of sensitivity in human observers by determining threshold speeds for detecting rotation direction (Experiment 1), and by determining orientation discrimination thresholds for brief static stimuli (Experiment 2). Results imply that data suggesting a perceptual drift toward vertical instead result from greater discrimination sensitivity around cardinal than oblique orientations (the oblique effect), and thus do not constitute evidence for tilt normalization.
Publisher: Frontiers Media SA
Date: 2011
Publisher: Elsevier BV
Date: 2001
DOI: 10.1016/S0042-6989(00)00248-0
Abstract: We investigated the effect of adaptation on orientation discrimination using two experienced observers, then replicated the main effects using a total of 50 naïve subjects. Orientation discrimination around vertical improved after adaptation to either horizontal or vertical gratings, but was impaired by adaptation at 7.5 or 15 degrees from vertical. Improvement was greatest when adapter and test were orthogonal. We show that the results can be understood in terms of a functional model of adaptation in cortical vision.
Publisher: Elsevier BV
Date: 12-2015
DOI: 10.1016/J.CONCOG.2015.10.010
Abstract: People's subjective feelings of confidence typically correlate positively with objective measures of task performance, even when no performance feedback is provided. This relationship has seldom been investigated in the field of human time perception. Here we find a positive relationship between the precision of human timing perception and decisional confidence. We first demonstrate that subjective audio-visual timing judgements are more precise when people report a high, as opposed to a low, level of confidence. We then find that this relationship is more likely to result from variance in sensory timing estimates than the application of variable decision criteria, as the relationship held when we adopted a measure of timing sensitivity designed to limit the influence of subjective criteria. Our results suggest that analyses of timing perception and associated decisional confidence reflect the trial-by-trial variability with which timing has been encoded.
Publisher: Elsevier BV
Date: 09-2003
DOI: 10.1016/S0042-6989(03)00120-2
Abstract: Psychophysical experiments with stimuli oscillating concurrently in colour and orientation revealed an apparently paradoxical dissociation between the perceived simultaneity of stimulus changes and the perceptual pairing of the events demarked by those changes. When subjects were required to report whether changes in colour and orientation were simultaneous, judgements were generally accurate within +/-10 ms. When subjects were required to report which colour was paired predominantly with which orientation, judgements showed a systematic temporal bias of up to 50 ms in favour of colour. This dissociation between different temporal judgements concerning the same stimulus sequence is not predicted by any of the current models of binding in conscious vision. We propose an account of these data based on the temporal response properties of colour- and orientation-selective model neurons such that the perceived pairing of visual attributes is modelled as the cross-correlation of time-varying neural response profiles and thus reflects both neuronal latencies and the rate of rapid adaptation rather than simply the temporal pattern of responses to stimulus transitions.
Publisher: Oxford University Press
Date: 05-05-2005
Publisher: Oxford University Press (OUP)
Date: 08-02-2007
Publisher: The Royal Society
Date: 10-07-2007
Abstract: As of yet, it is unclear how we determine relative perceived timing. One controversial suggestion is that timing perception might be related to when analyses are completed in the cortex of the brain. An alternate proposal suggests that perceived timing is instead related to the point in time at which cortical analyses commence. Accordingly, timing illusions should not occur owing to cortical analyses, but they could occur if there were differential delays between signals reaching cortex. Resolution of this controversy therefore requires that the contributions of cortical processing be isolated from the influence of subcortical activity. Here, we have done this by using binocular disparity changes, which are known to be detected via analyses that originate in cortex. We find that observers require longer stimulus exposures to detect small, relative to larger, disparity changes; observers are slower to react to smaller disparity changes; and observers misperceive smaller disparity changes as being perceptually delayed. Interestingly, disparity magnitude influenced perceived timing more dramatically than it did stimulus change detection. Our data therefore suggest that perceived timing is both influenced by cortical processing and is shaped by sensory analyses subsequent to those that are minimally necessary for stimulus change perception.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 16-03-2010
DOI: 10.1167/3.9.190
Publisher: Springer Science and Business Media LLC
Date: 20-02-2020
DOI: 10.1038/S41598-020-59322-7
Abstract: Humans' perceptual judgments are imprecise, as repeated exposures to the same physical stimulation (e.g. audio-visual inputs separated by a constant temporal offset) can result in different decisions. Moreover, there can be marked individual differences – precise judges will repeatedly make the same decision about a given input, whereas imprecise judges will make different decisions. The causes are unclear. We examined this using audio-visual (AV) timing and confidence judgments, in conjunction with electroencephalography (EEG) and multivariate pattern classification analyses. One plausible cause of differences in timing precision is that it scales with variance in the dynamics of evoked brain activity. Another possibility is that equally reliable patterns of brain activity are evoked, but there are systematic differences that scale with precision. Trial-by-trial decoding of input timings from brain activity suggested precision differences may not result from variable dynamics. Instead, precision was associated with evoked responses that were exaggerated (more different from baseline) ~300 ms after initial physical stimulations. We suggest excitatory and inhibitory interactions within a winner-take-all neural code for AV timing might exaggerate responses, such that evoked response magnitudes post-stimulation scale with encoding success.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-2015
DOI: 10.1167/15.8.1
Abstract: After looking at a photograph of someone for a protracted period (adaptation), a previously neutral-looking face can take on an opposite appearance in terms of gender, identity, and other attributes, but what happens to the appearance of other faces? Face aftereffects have repeatedly been ascribed to perceptual renormalization. Renormalization predicts that the adapting face and more extreme versions of it should appear more neutral after adaptation (e.g., if the adaptor was male, it and hyper-masculine faces should look more feminine). Other aftereffects, such as tilt and spatial frequency, are locally repulsive, exaggerating differences between adapting and test stimuli. This predicts that the adapting face should be little changed in appearance after adaptation, while more extreme versions of it should look even more extreme (e.g., if the adaptor was male, it should look unchanged, while hyper-masculine faces should look even more masculine). Existing reports do not provide clear evidence for either pattern. We overcame this by using a spatial comparison task to measure the appearance of stimuli presented in differently adapted retinal locations. In behaviorally matched experiments we compared aftereffect patterns after adapting to tilt, facial identity, and facial gender. In all three experiments data matched the predictions of a locally repulsive, but not a renormalizing, aftereffect. These data are consistent with the existence of similar encoding strategies for tilt, facial identity, and facial gender.
Publisher: Elsevier BV
Date: 06-2023
Publisher: American Psychological Association (APA)
Date: 05-2017
DOI: 10.1037/XHP0000368
Abstract: Humans might possess either a single (amodal) internal clock or multiple clocks for different sensory modalities. Sensitivity could be improved by the provision of multiple signals. Such improvements can be predicted quantitatively, assuming estimates are combined by summation, a process described as optimal when summation is weighted in accordance with the variance associated with each of the initially independent estimates. This possibility was assessed for visual and tactile information regarding temporal intervals. In Experiment 1, 12 musicians and 12 nonmusicians judged durations of 300 and 600 ms, compared to test values spanning these standards. Bimodal precision increased relative to unimodal conditions, but not to the extent predicted by optimally weighted summation. In Experiment 2, 6 musicians and 6 other participants each judged 6 standards, ranging from 100 ms to 600 ms, with conflicting cues providing a measure of the weight assigned to each sensory modality. A weighted integration model best fitted these data, with musicians more likely to select near-optimal weights than nonmusicians. Overall, data were consistent with the existence of separate visual and tactile clock components at either the counter/integrator or memory stages. Independent estimates are passed to a decisional process, but not always combined in a statistically optimal fashion.
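The optimally weighted summation the abstract refers to is the standard maximum-likelihood cue-combination rule. A minimal sketch, assuming independent Gaussian duration estimates; the function name and example numbers are hypothetical, not taken from the paper:

```python
def optimal_fusion(mu_v, var_v, mu_t, var_t):
    """Variance-weighted fusion of two independent Gaussian estimates
    (e.g. visual and tactile duration judgments). Each cue's weight is
    inversely proportional to its variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_t)
    w_t = 1 - w_v
    mu = w_v * mu_v + w_t * mu_t
    var = (var_v * var_t) / (var_v + var_t)  # never exceeds the better cue
    return mu, var

# Hypothetical: visual estimate 300 ms (var 400), tactile 320 ms (var 100).
mu, var = optimal_fusion(300, 400, 320, 100)
```

The predicted bimodal variance (here 80, below either unimodal variance) is the quantitative benchmark against which observers' bimodal precision is compared; the paper reports improvements that fall short of this prediction.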
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-2008
DOI: 10.1167/8.10.3
Publisher: Springer Science and Business Media LLC
Date: 09-08-2021
DOI: 10.1038/S41598-021-95295-X
Abstract: Prediction is a core function of the human visual system. Contemporary research suggests the brain builds predictive internal models of the world to facilitate interactions with our dynamic environment. Here, we wanted to examine the behavioural and neurological consequences of disrupting a core property of people's internal models, using naturalistic stimuli. We had people view videos of basketball and asked them to track the moving ball and predict jump shot outcomes, all while we recorded eye movements and brain activity. To disrupt people's predictive internal models, we inverted footage on half the trials, so dynamics were inconsistent with how movements should be shaped by gravity. When viewing upright videos people were better at predicting shot outcomes, at tracking the ball position, and they had enhanced alpha-band oscillatory activity in occipital brain regions. The advantage for predicting upright shot outcomes scaled with improvements in ball tracking and occipital alpha-band activity. Occipital alpha-band activity has been linked to selective attention and spatially-mapped inhibitions of visual brain activity. We propose that when people have a more accurate predictive model of the environment, they can more easily parse what is relevant, allowing them to better target irrelevant positions for suppression—resulting in both better predictive performance and in neural markers of inhibited information processing.
Publisher: Elsevier BV
Date: 03-2008
DOI: 10.1016/J.VISRES.2008.01.020
Abstract: Rendering the usually visible 'invisible' has long been a popular experimental manipulation. With one notable exception, 'continuous flash suppression' [Tsuchiya, N., & Koch, C. (2005). Continuous flash suppression reduces negative afterimages. Nature Neuroscience, 8, 1096-1101], existing methods of achieving this goal suffer from being either unable to suppress stimuli from awareness for prolonged periods, from being unable to reliably suppress stimuli at specific epochs, or from a combination of both of these limitations. Here we report a new method, binocular switch suppression (BSS), which overcomes these restrictions. We establish that BSS is novel as it taps a different causal mechanism to the only similar pre-existing method. We also establish that BSS is superior to pre-existing methods both in terms of the depth and duration of perceptual suppression achieved. BSS should therefore prove to be a useful tool for the large number of researchers interested in exploring the neural correlates and functional consequences of conscious visual awareness.
Publisher: Elsevier BV
Date: 2023
Publisher: Proceedings of the National Academy of Sciences
Date: 17-10-2016
Abstract: Distinct anatomical visual pathways can be traced through the human central nervous system. These have been linked to specialized functions, such as encoding information about spatial forms (like the human face and text) and stimulus dynamics (flicker or movement). Our experiments are inconsistent with this strict division. They show that mechanisms responsive to flicker can alter form perception, with vision transiently sharpened by weakening the influence of flicker-sensitive mechanisms by prolonged exposure to flicker. So, next time you are trying to read fine print, you might be well advised to first view a flickering stimulus!
Publisher: Elsevier BV
Date: 03-2006
DOI: 10.1016/J.CUB.2006.01.032
Abstract: A fundamental question about the perception of time is whether the neural mechanisms underlying temporal judgements are universal and centralized in the brain or modality specific and distributed. Time perception has traditionally been thought to be entirely dissociated from spatial vision. Here we show that the apparent duration of a dynamic stimulus can be manipulated in a local region of visual space by adapting to oscillatory motion or flicker. This implicates spatially localized temporal mechanisms in duration perception. We do not see concomitant changes in the time of onset or offset of the test patterns, demonstrating a direct local effect on duration perception rather than an indirect effect on the time course of neural processing. The effects of adaptation on duration perception can also be dissociated from motion or flicker perception per se. Although 20 Hz adaptation reduces both the apparent temporal frequency and duration of a 10 Hz test stimulus, 5 Hz adaptation increases apparent temporal frequency but has little effect on duration perception. We conclude that there is a peripheral, spatially localized, essentially visual component involved in sensing the duration of visual events.
Publisher: Springer Science and Business Media LLC
Date: 14-06-2022
DOI: 10.3758/S13414-022-02519-X
Abstract: Repeated events can seem shortened. It has been suggested that this results from an inverse relationship between predictability and perceived duration, with more predictable events seeming shorter. Some evidence disputes this generalisation, as there are cases where this relationship has been nullified, or even reversed. This study sought to combine different factors that encourage expectation into a single paradigm, to directly compare their effects. We find that when people are asked to declare a prediction (i.e., to predict which colour sequence will ensue), guess-confirming events can seem relatively protracted. This augmented a positive time-order error, with the first of two sequential presentations already seeming protracted. We did not observe a contraction of perceived duration for more probable or for repeated events. Overall, our results are inconsistent with a simple mapping between predictability and perceived duration. Whether the perceived duration of an expected event will seem relatively contracted or expanded seems to be contingent on the causal origin of expectation.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-03-2002
DOI: 10.1167/2.7.264
Publisher: Center for Open Science
Date: 19-10-2023
Publisher: The Royal Society
Date: 22-03-2002
Publisher: Public Library of Science (PLoS)
Date: 17-12-2009
Publisher: Springer Science and Business Media LLC
Date: 08-07-2020
Publisher: Elsevier BV
Date: 12-2011
DOI: 10.1016/J.CONCOG.2011.07.003
Abstract: In timing perception studies, the timing of one event is usually manipulated relative to another, and participants are asked to judge if the two events were synchronous, or to judge which of the two events occurred first. Responses are analyzed to determine a measure of central tendency, which is taken as an estimate of the timing at which the two events are perceptually synchronous. When these estimates do not coincide with physical synchrony, it is often assumed that the sensory signals are asynchronous, as though the transfer of information concerning one input has been accelerated or decelerated relative to the other. Here we show that, while this is a viable interpretation, it is equally plausible that such effects are driven by shifts in the criteria used to differentiate simultaneous from asynchronous inputs. Our analyses expose important ambiguities concerning the interpretation of simultaneity judgement data, which have hitherto been underappreciated.
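The central-tendency analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' analysis code; it assumes illustrative data (negative SOAs = audio leading, in ms) and uses a simple response-weighted centroid as a stand-in for the curve fitting typically applied to simultaneity-judgement data.

```python
# Hedged sketch: estimating the central tendency of simultaneity-judgement
# (SJ) responses across stimulus-onset asynchronies (SOAs).
# As the abstract notes, a non-zero estimate is ambiguous: it may reflect
# shifted sensory timing, or merely shifted decision criteria.

def pss_estimate(soas_ms, p_synchronous):
    """Centroid of the 'synchronous' response distribution over tested SOAs,
    taken as the point of subjective simultaneity (PSS)."""
    total = sum(p_synchronous)
    return sum(s * p for s, p in zip(soas_ms, p_synchronous)) / total

# Illustrative (made-up) data: proportion of 'synchronous' responses per SOA.
soas = [-150, -100, -50, 0, 50, 100, 150]
p_sync = [0.05, 0.20, 0.60, 0.90, 0.80, 0.40, 0.10]
print(round(pss_estimate(soas, p_sync), 1))  # positive: vision-leading bias
```

In practice a cumulative Gaussian or similar psychometric function would be fitted to the responses; the centroid above is merely the simplest estimator of the same central tendency.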
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 12-08-2010
DOI: 10.1167/10.10.8
Abstract: When discrepant images are shown to the two eyes, each can intermittently disappear. This is known as binocular rivalry (BR). The causes of BR are debated. One view is that BR is driven by a low-level visual process, characterized by competition between monocular channels. Another is that BR is driven by higher level processes involved in interpreting ambiguous input. This would link BR to other phenomena, wherein perception changes without input changes. We reasoned that if this were true, the timing of BR changes might be related to the timing of changes in other multi-stable stimuli. We tested this using combinations of simple (orthogonal gratings) and complex (pictures of houses and faces) stimuli. We also presented simple stimuli in conjunction with a stimulus that induced an ambiguous direction of rotation. We found that the timing of simple BR changes was unrelated to the timing of either complex BR changes or to direction changes within an ambiguous rotation. However, the timings of changes within proximate BR stimuli, both simple and complex, were related, but only when similar images were encoded in the same monocular channels. These observations emphasize the importance of monocular channel interactions in determining the timing of binocular rivalry changes.
Publisher: Elsevier BV
Date: 02-2009
DOI: 10.1016/J.CUB.2008.12.053
Abstract: In motion-induced blindness (MIB), persistent static targets intermittently disappear when presented near moving elements [1, 2]. There is currently no consensus regarding the cause or causes of MIB [3-7]. Here, we link the phenomenon to a mechanism that is integral for normal human vision, motion streak suppression [8]. The human visual system integrates information over time [9], resulting in streaks of activity across visual brain regions when objects move [10, 11]. These "motion streaks" are usually suppressed from awareness. Our results suggest that this process shapes MIB. We show that MIB is enhanced at the trailing edges of movement and that both MIB and motion streak suppression are impaired at equiluminance. These findings suggest that an apparent failure of human vision, MIB, is at least partially driven by a functional adaptation that facilitates clear perceptions of moving form.
Publisher: American Astronomical Society
Date: 08-02-2019
Publisher: Elsevier BV
Date: 02-2021
Publisher: Elsevier BV
Date: 11-2021
Publisher: Elsevier BV
Date: 05-2018
DOI: 10.1016/J.VISRES.2017.11.004
Abstract: When a moving surface alternates in colour and direction, perceptual couplings of colour and motion can differ from their physical correspondence. Periods of motion tend to be perceptually bound with physically delayed colours - a colour/motion perceptual asynchrony. This can be eliminated by motion transparency. Here we show that the colour/motion perceptual asynchrony is not invariably eliminated by motion transparency. Nor is it an inevitable consequence given a particular physical input. Instead, it can emerge when moving surfaces are perceived as alternating in direction, even if those surfaces seem transparent, and it is eliminated when surfaces are perceived as moving invariably. For a given observer either situation can result from exposure to a common input. Our findings suggest that neural events that promote the perception of motion reversals are causal of the colour/motion perceptual asynchrony. Moreover, they suggest that motion transparency and coherence can be signalled simultaneously by subpopulations of direction-selective neurons, with this conflict instantaneously resolved by a competitive winner-takes-all interaction, which can instantiate or eliminate colour/motion perceptual asynchrony.
Publisher: Springer Science and Business Media LLC
Date: 24-01-2022
DOI: 10.1038/S41598-022-05289-6
Abstract: One of the seminal findings of cognitive neuroscience is that the power of occipital alpha-band (~ 10 Hz) brain waves is increased when people’s eyes are closed, rather than open. This has encouraged the view that alpha oscillations are a default dynamic, to which the visual brain returns in the absence of input. Accordingly, we might be unable to increase the power of alpha oscillations when the eyes are closed, above the level that would normally ensue when people close their eyes. Here we report counter evidence. We used electroencephalography (EEG) to record brain activity when people had their eyes open and closed, both before and after they had adapted to radial motion. The increase in alpha power when people closed their eyes was increased by prior adaptation to a broad range of radial motion speeds. This effect was greatest for 10 Hz motion, but robust for other frequencies (and especially 7.5 Hz). This discredits a persistent entrainment of activity at the adaptation frequency as an explanation for our findings. Our data show that the power of occipital alpha-band brain waves can be increased by motion sensitive visual processes that persist when the eyes are closed. Consequently, we suggest that the power of these brain waves is, at least in part, an index of the degree to which visual brain activity is being subjected to inhibition. This is increased when people close their eyes, but can be even further increased by pre-adaptation to radial motion.
Publisher: Elsevier BV
Date: 10-2023
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-03-2001
DOI: 10.1167/1.3.365
Publisher: Frontiers Media SA
Date: 2012
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.CORTEX.2021.10.012
Abstract: When a visual event is unexpected, because it violates a train of repeated events, it excites a greater positive electrical potential at sensors positioned above occipital-parietal human brain regions (the P300). Such events can also seem to have an increased duration relative to repeated (implicitly expected) events. However, recent behavioural evidence suggests that when events are unexpected because they violate a declared prediction-a guess-there is an opposite impact on duration perception. The neural consequences of incorrect declared predictions have not been examined. We replicated the finding whereby repetition violating events elicit a larger P300 response. However, we found that events that violated a declared prediction entrained an opposite pattern of response-a smaller P300. These data suggest that the neural consequences of a violated prediction are not uniform but depend on how the prediction was formed.
Publisher: Brill
Date: 24-08-2020
DOI: 10.1163/22134468-BJA10013
Abstract: Items in working memory are typically defined by various attributes, such as colour (for visual objects) and pitch (for auditory objects). The attribute of duration can be signalled by multiple modalities, but has received relatively little attention from a working-memory perspective. While the existence of specialist stores (e.g., the phonological loop and visuospatial sketchpad) is often asserted in the wider working-memory literature, the interval-timing literature has more often implied a unitary (amodal) store. Here we combine two modelling frameworks to probe the basis of working memory for duration: a Bayesian-observer framework, previously used to explain behaviour in duration-reproduction tasks, and mixture models, describing distributions of continuous reports about items in working memory. We modelled different storage mechanisms, such as a limited number of fixed-resolution slots or a resource spread between items at a cost to resolution, in order to ask whether items from different sensory modalities are maintained in separate, independent stores. We initially analysed data from 32 participants, who memorised between one and eight items before reproducing the duration of a randomly selected target. In separate blocks, items could be all visual, all auditory, or an alternating mixture of both. A small control experiment included a further condition with precuing of target modality. Certain kinds of slot models, resource models, and combination models incorporating both mechanisms could account for the data. However, looking across all plausible models, the decline in performance with increasing memory load was most consistent with a single store for event durations, regardless of stimulus modality.
Publisher: Springer Science and Business Media LLC
Date: 26-11-2019
DOI: 10.3758/S13414-019-01899-X
Abstract: In the visual oddball paradigm, surprising inputs can seem expanded in time relative to unsurprising repeated events. A horizontal input embedded in a train of successive vertical inputs can, for instance, seem relatively protracted in time, even if all inputs are presented for an identical duration. It is unclear if this effect results from surprising events becoming apparently protracted, or from repeated events becoming apparently contracted in time. To disambiguate, we used a non-relative duration reproduction task, in which several standards preceded a test stimulus that had to be reproduced. We manipulated the predictability of test content over successive presentations. Overall, our data suggest that predictable stimuli induce a contraction of apparent duration (Experiments 1, 3, and 4). We also examine sensitivity to test content, and find that predictable stimuli elicit less uptake of visual information (Experiments 2 and 3). We discuss these findings in relation to the predictive coding framework.
Publisher: The Royal Society
Date: 08-09-2010
Abstract: Our sense of relative timing is malleable. For instance, visual signals can be made to seem synchronous with earlier sounds following prolonged exposure to an environment wherein auditory signals precede visual ones. Similarly, actions can be made to seem to precede their own consequences if an artificial delay is imposed for a period, and then removed. Here, we show that our sense of relative timing for combinations of visual changes is similarly pliant. We find that direction reversals can be made to seem synchronous with unusually early colour changes after prolonged exposure to a stimulus wherein colour changes precede direction changes. The opposite effect is induced by prolonged exposure to colour changes that lag direction changes. Our data are consistent with the proposal that our sense of timing for changes encoded by distinct sensory mechanisms can adjust, at least to some degree, to the prevailing environment. Moreover, they reveal that visual analyses of colour and motion are sufficiently independent for this to occur.
Publisher: Springer Science and Business Media LLC
Date: 26-03-2019
DOI: 10.1038/S41598-018-37888-7
Abstract: Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception.
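The precision-weighted summation benchmark mentioned in the abstract has a standard formalisation, which can be sketched as follows. This is a generic maximum-likelihood cue-combination model, not the authors' code, and the example values are illustrative, not data from the paper.

```python
# Hedged sketch: predictions of precision-weighted (inverse-variance)
# cue combination for two independent sensory estimates, the standard
# 'optimal integration' benchmark referenced in the abstract.

def combined_sigma(sigma_a, sigma_v):
    """Predicted standard deviation of the integrated bimodal estimate.
    Always smaller than either unimodal sigma alone."""
    return (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5

def cue_weights(sigma_a, sigma_v):
    """Relative weight given to each cue, in proportion to its precision
    (inverse variance)."""
    pa, pv = 1 / sigma_a**2, 1 / sigma_v**2
    total = pa + pv
    return pa / total, pv / total

# Example: auditory rate judgements twice as noisy as visual ones.
sigma_aud, sigma_vis = 2.0, 1.0
print(combined_sigma(sigma_aud, sigma_vis))  # below the better (visual) cue
print(cue_weights(sigma_aud, sigma_vis))     # vision weighted more heavily
```

The paper's finding is that measured bimodal performance fell short of the `combined_sigma` prediction, i.e. facilitation occurred but not at the level this model implies.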
Publisher: SAGE Publications
Date: 20-06-2011
Abstract: Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.
Publisher: Elsevier BV
Date: 11-2005
DOI: 10.1016/J.VISRES.2005.06.031
Abstract: Observers often pair colours with earlier periods of motion. This observation has prompted the proposal that changes in colour are processed faster and perceived as occurring before physically coincident changes in direction--a brain-time account. Alternatively, it has been proposed that the sudden onset of a surface, or a direction reversal within a persistent surface, can trigger an analysis that determines the perceptual properties of the surface. Hypothetically, this analysis persists for some period of time and the consequences are perceived as having occurred when the analysis commenced--a post-dictive account. Hypotheses based upon these alternate accounts are contrasted in a series of experiments. It is shown that the optimal conditions for pairing specific combinations of colour and motion arise when colour changes are delayed relative to direction changes. In these conditions observers can pair more rapid oscillations of colour and motion and perceptual pairings are more systematic relative to when the changes in colour and direction are physically synchronous. It is also shown that, when pairing colour and motion, the sudden onset of a moving surface does not have the same consequences as a direction reversal within a persistent surface. These findings are consistent with the brain-time, but are inconsistent with the post-dictive, account of perceptual asynchrony.
Publisher: Cold Spring Harbor Laboratory
Date: 14-10-2020
DOI: 10.1101/2020.10.13.338285
Abstract: Visual objects that extend across physiological blind spots seem to encapsulate the extent of blindness, due to a process commonly referred to as a perceptual filling-in of spatial vision. It is unclear if temporal perception is similar, so we examined temporal relationships governing causality perception across the blind spot. We found the human brain does not allow for the time an object should take to traverse the blind spot when engaging in a causal interaction. We also used electroencephalography (EEG) to examine temporal signatures of elements flickering on and off in tandem, or in counter-phase. At a control site, we found more brain activity was entrained at the duty cycle by flicker relative to counter-phase changes, whereas these conditions were indistinguishable about blind spots. Our data suggest a common pool of neurons might encode temporal properties on either side of physiological blind spots. This would explain the absence of any allowance for the extent of blindness in causality perception, and the weakened differences between temporal representations of flicker and counter-phased changes about the blind spot. Overall, our data suggest that, unlike spatial vision, there is no temporal filling-in for perceptual representations about physiological blind spots.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 10-07-2015
DOI: 10.1167/15.9.4
Abstract: Visual aftereffects are characterized by a changed perceptual experience after exposure to a visual input. For instance, exposure to rightward motion can make a static input seem to drift leftward-the motion aftereffect. Such aftereffects have been integral to building our understanding of the neural mechanisms and computational processes that underlie perception. Increasingly complex characteristics have been found to be susceptible to visual aftereffects, such as the appearance of human faces, the apparent number of visual elements, and the glossiness of a surface. Here we report that the apparent elasticity, or squishiness, of an object is also subject to a visual aftereffect. This relationship can explain data previously interpreted in terms of a causality aftereffect.
Publisher: Wiley
Date: 07-07-2022
DOI: 10.1002/HBM.25996
Abstract: The physiological blind spot is a naturally occurring scotoma corresponding with the optic disc in the retina of each eye. Even during monocular viewing, observers are usually oblivious to the scotoma, in part because the visual system extrapolates information from the surrounding area. Unfortunately, studying this visual field region with neuroimaging has proven difficult, as it occupies only a small part of retinotopic cortex. Here, we used functional magnetic resonance imaging and a novel data‐driven method for mapping the retinotopic organization in and around the blind spot representation in V1. Our approach allowed for highly accurate reconstructions of the extent of an observer’s blind spot, and out‐performed conventional model‐based analyses. This method opens exciting opportunities to study the plasticity of receptive fields after visual field loss, and our data add to evidence suggesting that the neural circuitry responsible for impressions of perceptual completion across the physiological blind spot most likely involves regions of extrastriate cortex—beyond V1.
Publisher: Elsevier BV
Date: 06-2023
Publisher: Elsevier BV
Date: 07-2011
Publisher: Elsevier BV
Date: 08-2015
Publisher: Elsevier BV
Date: 05-2012
DOI: 10.1016/J.VISRES.2012.03.010
Abstract: Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 26-02-2008
DOI: 10.1167/8.2.11
Publisher: Elsevier BV
Date: 05-2010
DOI: 10.1016/J.CUB.2010.02.068
Abstract: Retinal image size is not the sole determinant of the apparent size of objects. Rather, viewing distance is taken into account when determining apparent size, so images of the same physical dimensions can appear to represent different-sized objects. Here, we take advantage of this to examine the relationship between visual sensitivity and the scaling processes involved in determining apparent size. We assess the impact of illusory size changes, induced by apparent viewing distance changes, on judgments concerning clearly visible stimuli and on the ability to detect low-contrast inputs. We find that sensitivity to slight orientation changes between successive and clearly visible stimuli can scale with illusory size changes. However, illusory size changes have no discernable impact on the ability to detect low-contrast inputs. When considered in conjunction with recent brain imaging studies, our data suggest that visual sensitivity is linked to the spread of activity across primary visual cortex, which for clearly visible stimuli is shaped by the scaling processes involved in the determination of apparent size.
Publisher: SAGE Publications
Date: 19-03-2010
Abstract: In human vision, mechanisms specialized for encoding static form can signal the presence of blurred forms trailing behind moving objects. People are typically unaware of these motion-blur signals because other mechanisms signal sharply defined moving forms. When active, these mechanisms can suppress awareness of motion blur. Thus, although discrepant form signals can be produced, human vision usually settles on a single coherent perceptual outcome. Here we report a dramatic exception. We found that, in some circumstances, static motion-blur form signals and moving-form signals can engage in a dynamic competition for perceptual dominance. We refer to the phenomenon as spatiotemporal rivalry (STR). Our data confirm that moving- and static-form mechanisms can generate independent signals, each of which can intermittently dominate perception. STR could therefore be exploited to investigate how these mechanisms contribute to determining the content of visual awareness.
Start Date: 2014
End Date: 12-2017
Amount: $832,708.00
Funder: Australian Research Council
Start Date: 2011
End Date: 12-2014
Amount: $171,722.00
Funder: Australian Research Council
Start Date: 04-2006
End Date: 12-2009
Amount: $320,000.00
Funder: Australian Research Council
Start Date: 04-2020
End Date: 12-2024
Amount: $365,000.00
Funder: Australian Research Council
Start Date: 2014
End Date: 12-2017
Amount: $163,000.00
Funder: Australian Research Council
Start Date: 2009
End Date: 01-2014
Amount: $394,000.00
Funder: Australian Research Council
Start Date: 02-2018
End Date: 11-2021
Amount: $199,412.00
Funder: Australian Research Council
Start Date: 2008
End Date: 12-2011
Amount: $181,000.00
Funder: Australian Research Council