ORCID Profile
0000-0003-2216-7461
Current Organisation
National Institutes of Health
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Psychology
Publisher: Proceedings of the National Academy of Sciences
Date: 16-07-2018
Abstract: The primate brain is specialized for social interaction, and a complex network of brain regions supports this important function. Face perception is central to social development, and both humans and nonhuman primates exhibit a spontaneous viewing preference for faces. This shared involuntary response underscores the importance of faces in the earliest stages of cognitive development, yet its neural basis is not well understood. Here we report that bilateral amygdala lesions in rhesus monkeys eliminate the robust viewing preference for both real faces and illusory faces. This demonstrates a fundamental role for the amygdala in guiding eye movements toward face stimuli, a critical behavior for normal social development and social interaction.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-12-2010
DOI: 10.1167/10.14.38
Abstract: Visual overlay masking is typically studied with a mask and target located at the same depth plane. Masking is reduced when binocular disparity separates the target from the mask (G. Moraglia & B. Schneider, 1990). We replicate this finding for a broadband target masked by natural images and find the greatest masking (threshold elevation) when target and mask occupy the same depth plane. Masking was reduced equally whether the target appeared at a crossed or an uncrossed disparity. We measure the tuning of masking and determine the extent of the benefit afforded by disparity. Threshold elevation decreases monotonically with increasing disparity until ±8 arcmin. Two underlying components to the masking are evident: one accounts for around two-thirds of the masking and is independent of disparity. The second component is disparity-dependent and results in additional masking when there is zero disparity. Importantly, the reduction in masking with disparity cannot be explained by interocular decorrelation; we use a single-interval orientation discrimination task to exclude this possibility. We conclude that when the target and mask are presented at different depths they activate distinct populations of disparity-tuned neurons, resulting in less masking of the target.
Publisher: Elsevier BV
Date: 10-2017
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2017.02.013
Abstract: Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
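The time-series decoding approach reviewed in this abstract can be illustrated with a minimal sketch: a classifier is trained and cross-validated independently at every time point of the M/EEG epoch, yielding a decoding-accuracy time course. The data here are simulated and the shapes, labels, and use of scikit-learn are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of time-resolved M/EEG decoding (simulated data, not the
# reviewed studies' recordings; scikit-learn is an assumed tooling choice).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 100, 32, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 2, n_trials)                         # object-category labels
X[y == 1, :, 25:] += 0.3  # inject a weak category signal at later time points

# Train and evaluate a classifier independently at every time point
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# accuracy now holds one cross-validated decoding score per time point;
# it should rise above chance (0.5) only after the signal onset at t = 25
```

Plotting `accuracy` against time produces the familiar decoding time course from which onset latencies and temporal generalisation are derived.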
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-02-2013
DOI: 10.1167/13.2.16
Abstract: In binocular viewing of natural three-dimensional scenes, occlusion relationships between objects at different depths create regions of the background that are visible to only one eye. These monocular regions can support depth perception. There are two viewing conditions in which a monocular region can be on the nasal side of a binocular surface: (a) when a background surface is viewed through an aperture and (b) when a region is camouflaged against the background in one eye's view. We created stimuli with a monocular region using complex textures in which camouflage was not possible, and for which there was no physical aperture. For these stimuli, observers perceived a strong phantom contour in near depth at the edge of the monocular region, with the monocular texture perceived behind at the depth of the binocular surface. Depth-matching with a probe showed that the depth of the phantom occluding surface was as precise as for stimuli with regular binocular disparity. Monocular regions of texture on the opposite (temporal) side of the binocular surface were perceived behind, as predicted by occlusion geometry, and there was no phantom surface. We discuss the implications for models of da Vinci stereopsis and stereoscopic edge processing, and consider the involvement of a form of Panum's limiting case. We conclude that the visual system uses a combination of occlusion geometry and complex matching to precisely locate edges in depth that lack a luminance contour.
Publisher: Springer Science and Business Media LLC
Date: 25-03-2021
Publisher: Elsevier BV
Date: 2020
Publisher: Elsevier BV
Date: 07-2014
DOI: 10.1016/J.VISRES.2014.04.012
Abstract: Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes.
Publisher: Cold Spring Harbor Laboratory
Date: 13-12-2017
DOI: 10.1101/233387
Abstract: The neural mechanisms underlying face and object recognition are understood to originate in ventral occipital-temporal cortex. A key feature of the functional architecture of the visual ventral pathway is its category-selectivity, yet it is unclear how category-selective regions process ambiguous visual input which violates category boundaries. One example is the spontaneous misperception of faces in inanimate objects such as the Man in the Moon, in which an object belongs to more than one category and face perception is divorced from its usual diagnostic visual features. We used fMRI to investigate the representation of illusory faces in category-selective regions. The perception of illusory faces was decodable from activation patterns in the fusiform face area (FFA) and lateral occipital complex (LOC), but not from other visual areas. Further, activity in FFA was strongly modulated by the perception of illusory faces, such that even objects with vastly different visual features were represented similarly if all images contained an illusory face. The results show that the FFA is broadly-tuned for face detection, not finely-tuned to the homogenous visual properties that typically distinguish faces from other objects. A complete understanding of high-level vision will require explanation of the mechanisms underlying natural errors of face detection.
Publisher: F1000 Research Ltd
Date: 11-06-2020
DOI: 10.12688/F1000RESEARCH.22296.1
Abstract: Object recognition is the ability to identify an object or category based on the combination of visual features observed. It is a remarkable feat of the human brain, given that the patterns of light received by the eye associated with the properties of a given object vary widely with simple changes in viewing angle, ambient lighting, and distance. Furthermore, different exemplars of a specific object category can vary widely in visual appearance, such that successful categorization requires generalization across disparate visual features. In this review, we discuss recent advances in understanding the neural representations underlying object recognition in the human brain. We highlight three current trends in the approach towards this goal within the field of cognitive neuroscience. Firstly, we consider the influence of deep neural networks both as potential models of object vision and in how their representations relate to those in the human brain. Secondly, we review the contribution that time-series neuroimaging methods have made towards understanding the temporal dynamics of object representations beyond their spatial organization within different brain regions. Finally, we argue that an increasing emphasis on the context (both visual and task) within which object recognition occurs has led to a broader conceptualization of what constitutes an object representation for the brain. We conclude by identifying some current challenges facing the experimental pursuit of understanding object recognition and outline some emerging directions that are likely to yield new insight into this complex cognitive process.
Publisher: The Royal Society
Date: 07-07-2021
Abstract: Facial expressions are vital for social communication, yet the underlying mechanisms are still being discovered. Illusory faces perceived in objects (face pareidolia) are errors of face detection that share some neural mechanisms with human face processing. However, it is unknown whether expression in illusory faces engages the same mechanisms as human faces. Here, using a serial dependence paradigm, we investigated whether illusory and human faces share a common expression mechanism. First, we found that images of face pareidolia are reliably rated for expression, within and between observers, despite varying greatly in visual features. Second, they exhibit positive serial dependence for perceived facial expression, meaning an illusory face (happy or angry) is perceived as more similar in expression to the preceding one, just as seen for human faces. This suggests illusory and human faces engage similar mechanisms of temporal continuity. Third, we found robust cross-domain serial dependence of perceived expression between illusory and human faces when they were interleaved, with serial effects larger when illusory faces preceded human faces than the reverse. Together, the results support a shared mechanism for facial expression between human faces and illusory faces and suggest that expression processing is not tightly bound to human facial features.
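The serial-dependence logic of this study can be sketched in a few lines: the response error on each trial is regressed against the difference between the previous and current stimulus, and a positive slope indicates attraction toward the preceding trial. The simulated data below are purely illustrative and do not reproduce the study's face-rating task.

```python
# Hedged sketch of a serial-dependence analysis (simulated ratings, not the
# study's data; the 0.15 attraction strength is an arbitrary assumption).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
stim = rng.uniform(-1, 1, n)            # expression intensity per trial
resp = stim + 0.05 * rng.standard_normal(n)

delta = np.zeros(n)
delta[1:] = stim[:-1] - stim[1:]        # previous minus current stimulus
resp += 0.15 * delta                    # pull responses toward the previous trial

error = resp - stim
slope = np.polyfit(delta, error, 1)[0]  # positive slope = positive serial dependence
```

Recovering a slope near the simulated 0.15 confirms the analysis isolates the attraction effect; in the study, the same regression applied across illusory and human face trials yields the cross-domain effect.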
Publisher: Proceedings of the National Academy of Sciences
Date: 24-01-2022
Abstract: Face pareidolia is the phenomenon of perceiving illusory faces in inanimate objects. Here we show that illusory faces engage social perception beyond the detection of a face: they have a perceived age, gender, and emotional expression. Additionally, we report a striking bias in gender perception, with many more illusory faces perceived as male than female. As illusory faces do not have a biological sex, this bias is significant in revealing an asymmetry in our face evaluation system given minimal information. Our result demonstrates that the visual features that are sufficient for face detection are not generally sufficient for the perception of female. Instead, the perception of a nonhuman face as female requires additional features beyond that required for face detection.
Publisher: Elsevier BV
Date: 08-2017
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-08-2017
DOI: 10.1167/17.10.797
Publisher: Cold Spring Harbor Laboratory
Date: 21-07-2017
DOI: 10.1101/166660
Abstract: Animacy is a robust organizing principle amongst object category representations in the human brain. Using multivariate pattern analysis methods (MVPA), it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer reaction times for the same animacy categorization task (Carlson, Ritchie, Kriegeskorte, Durvasula, & Ma, 2014; Ritchie, Tovar, & Carlson, 2015). Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary, and increase reaction times. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the Linear Ballistic Accumulator (Brown & Heathcote, 2008). We found that distance to the classifier boundary correlated with reaction time, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained for longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted towards the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
Publisher: Elsevier BV
Date: 04-2015
DOI: 10.1016/J.NEUROIMAGE.2015.02.009
Abstract: Multivariate pattern analysis (MVPA) has become an increasingly popular approach to fMRI research because these methods offer the attractive possibility of "decoding" the content of brain representations. One weakness of MVPA is that the source of decodable information is not always apparent, as evidenced by the ongoing debate about orientation decoding in human visual cortex. In a recent study (Carlson, 2014), we used an unbiased model of visual cortex to reveal a new source of decodable information that may account for orientation decoding. Clifford and Mannion (2015) take issue with the model's capacity to decode spiral sense. Here, we discuss their findings in the context of the ongoing debate on orientation decoding and further highlight the limitations of using MVPA to infer the content of brain representations.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-08-2017
DOI: 10.1167/17.10.294
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 03-12-2010
DOI: 10.1167/10.14.1
Abstract: To study the effect of blur adaptation on accommodative variability, accommodative responses and pupil diameters in myopes (n = 22) and emmetropes (n = 19) were continuously measured before, during, and after exposure to defocus blur. Accommodative and pupillary response measurements were made by an autorefractor during a monocular reading exercise. The text was presented on a computer screen at 33 cm viewing distance with a rapid serial visual presentation paradigm. After baseline testing and a 5-min rest, blur was induced by wearing either an optimally refractive lens, or a +1.0 DS or a +3.0 DS defocus lens. Responses were continuously measured during a 5-min period of adaptation. The lens was then removed, and measurements were again made during a 5-min post-adaptation period. After a second 5-min rest, a final post-adaptation period was measured. No significant change of baseline accommodative responses was found after the 5-min period of adaptation to the blurring lenses (p > 0.05). Compared to the pre-adaptation level, both refractive groups had similar and significant increases in accommodative variability right after blur adaptation to both defocus lenses. After the second rest period, the accommodative variability in both groups returned to the pre-adaptation level. The results indicate that blur adaptation has a short-term effect on the accommodative system to elevate instability of the accommodative response. Mechanisms underlying the increase in accommodative variability by blur adaptation and possible influences of the accommodation stability on myopia development were discussed.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 21-03-2016
DOI: 10.1167/16.5.16
Abstract: Perceived stereoscopic slant around a vertical axis is strongly underestimated for isolated surfaces, suggesting that neither uniocular image compression nor linear gradients of absolute disparity are very effective cues. However, slant increases to a level close to geometric prediction if gradients of relative disparity are introduced, for example by placing flanking frontal-parallel surfaces at the horizontal boundaries of the slanted surface. Here we examine the mechanisms underlying this slant enhancement by manipulating properties of the slanted surface or the flanking surfaces. Perceived slant was measured using a probe bias method. In Experiment 1, an outlined surface and a randomly textured surface showed similar slant underestimation when presented in isolation, but the enhancement in slant produced by flankers was significantly greater for the textured surface. In Experiment 2, we degraded the relative disparity gradient by (a) reducing overall texture density, (b) reducing flanker width, or (c) adding disparity noise to the flankers. Density had no effect while adding noise to the flankers, or reducing their width significantly decreased perceived slant of the central surface. These results support the view that the enhancement of slant produced by adding flanking surfaces is attributable to the presence of a relative disparity gradient and that the flanker effect can spread to regions of the surface not directly above or below the gradient.
Publisher: Oxford University Press (OUP)
Date: 21-04-2022
DOI: 10.1093/SCAN/NSAC031
Abstract: Face detection is a foundational social skill for primates. This vital function is thought to be supported by specialized neural mechanisms; however, although several face-selective regions have been identified in both humans and nonhuman primates, there is no consensus about which region(s) are involved in face detection. Here, we used naturally occurring errors of face detection (i.e. objects with illusory facial features, referred to as examples of ‘face pareidolia’) to identify regions of the macaque brain implicated in face detection. Using whole-brain functional magnetic resonance imaging to test awake rhesus macaques, we discovered that a subset of face-selective patches in the inferior temporal cortex, on the lower lateral edge of the superior temporal sulcus, and the amygdala respond more to objects with illusory facial features than matched non-face objects. Multivariate analyses of the data revealed differences in the representation of illusory faces across the functionally defined regions of interest. These differences suggest that the cortical and subcortical face-selective regions contribute uniquely to the detection of facial features. We conclude that face detection is supported by a multiplexed system in the primate brain.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2015
DOI: 10.1167/15.12.993
Publisher: Society for Neuroscience
Date: 21-12-2016
DOI: 10.1523/JNEUROSCI.2690-16.2016
Abstract: Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently, Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether “edge-related activity” underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. SIGNIFICANCE STATEMENT A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers.
For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.
Publisher: Elsevier BV
Date: 12-2020
Publisher: Society for Neuroscience
Date: 22-07-2022
Publisher: Frontiers Media SA
Date: 2013
Publisher: SAGE Publications
Date: 2014
DOI: 10.1068/P7619
Abstract: Subjective contours are widely considered to be an aspect of the perception of occlusion, but considerations of occlusion do not always drive predictions of their strength. Occluding surfaces have no necessary relationship to the contours they occlude, yet it is commonly predicted that subjective contours will be strongest for inducer alignments that are orthogonal to inducer orientations. In several papers we have proposed that a lack of relationship between inducers and their alignment promotes seeing subjective contours. We explore this further here using horizontal or near-horizontal thin-line inducers arranged vertically with linearly aligned terminations along central gaps. Subjective contour strength was measured using the method of paired comparison in two experiments. The weakest subjective contours were found when the gap was orthogonal to the inducers and parallel to the outer edges of the line set. Subjective contours were strengthened by orientation contrast, defined either as a nonorthogonal relationship between the gap and the inducers or as nonparallelism between the gap and the outer alignments of the inducers. The effect was replicated at both high and low line densities. We also confirmed a strong effect of high inducer entropy (variations in inducer orientation and separation) with thin-line inducers. The results support the view that the lack of a relationship of alignments to what is aligned is a major determinant of subjective contour strength.
Publisher: American Psychological Association (APA)
Date: 2010
DOI: 10.1037/A0018433
Abstract: In 3 experiments, we examined Perruchet, Cleeremans, and Destrebecqz's (2006) double dissociation of cued reaction time (RT) and target expectancy. In this design, participants receive a tone on every trial and are required to respond as quickly as possible to a square presented on 50% of those trials (a partial reinforcement schedule). Participants are faster to respond to the square following many recent tone-square pairings and slower to respond following many tone-alone presentations. Of importance, expectancy of the square is highest when performance on the RT task is poorest-following many tone-alone trials. This finding suggests that RT performance is determined by the strength of a tone-square link and that this link is the product of a non-expectancy-based learning mechanism. The present experiments, however, provide evidence that the speeded RTs are not the consequence of the strengthening and weakening of a tone-square link. Thus, the RT Perruchet effect does not provide evidence for a non-expectancy-based link-formation mechanism.
Publisher: Springer Science and Business Media LLC
Date: 09-09-2020
DOI: 10.1038/S41467-020-18325-8
Abstract: The human brain is specialized for face processing, yet we sometimes perceive illusory faces in objects. It is unknown whether these natural errors of face detection originate from a rapid process based on visual features or from a slower, cognitive re-interpretation. Here we use a multifaceted approach to understand both the spatial distribution and temporal dynamics of illusory face representation in the brain by combining functional magnetic resonance imaging and magnetoencephalography neuroimaging data with model-based analysis. We find that the representation of illusory faces is confined to occipital-temporal face-selective visual cortex. The temporal dynamics reveal a striking evolution in how illusory faces are represented relative to human faces and matched objects. Illusory faces are initially represented more similarly to real faces than matched objects are, but within ~250 ms, the representation transforms, and they become equivalent to ordinary objects. This is consistent with the initial recruitment of a broadly-tuned face detection mechanism which privileges sensitivity over selectivity.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 31-08-2017
DOI: 10.1167/17.10.845
Publisher: Center for Open Science
Date: 25-03-2022
Abstract: Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age we found that like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 09-2015
DOI: 10.1167/15.12.1252
Publisher: American Psychological Association (APA)
Date: 2013
DOI: 10.1037/A0032315
Abstract: Monocular regions that occur with binocular viewing of natural scenes can produce a strong perception of depth: "da Vinci stereopsis." They occur either when part of the background is occluded in one eye, or when a nearer object is camouflaged against a background surface in one eye's view. There has been some controversy over whether da Vinci depth is constrained by geometric or ecological factors. Here we show that the color of the monocular region constrains the depth perceived from camouflage, but not occlusion, as predicted by ecological considerations. Quantitative depth was found in both cases, but for camouflage only when the color of the monocular region matched the binocular background. Unlike previous reports, depth failed even when nonmatching colors satisfied conditions for perceptual transparency. We show that placing a colored line at the boundary between the binocular and monocular regions is sufficient to eliminate depth from camouflage. When both the background and the monocular region contained vertical contours that could be fused, some observers appeared to use fusion, and others da Vinci constraints, supporting the existence of a separate da Vinci mechanism. The results show that da Vinci stereopsis incorporates color constraints and is more complex than previously assumed.
Publisher: Springer Science and Business Media LLC
Date: 18-04-2018
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 08-06-2012
DOI: 10.1167/12.6.12
Publisher: MIT Press
Date: 12-2017
DOI: 10.1162/JOCN_A_01177
Abstract: Animacy is a robust organizing principle among object category representations in the human brain. Using multivariate pattern analysis methods, it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer RTs for the same animacy categorization task [Ritchie, J. B., Tovar, D. A., & Carlson, T. A. Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316, 2015; Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132–142, 2014]. Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary and increase RTs. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the linear ballistic accumulator [Brown, S. D., & Heathcote, A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178, 2008]. We found that distance to the classifier boundary correlated with RT, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted toward the classifier decision boundary as predicted.
Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
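The distance-to-boundary measure used in this abstract can be illustrated with a short sketch. This is a toy example, not the authors' pipeline: the "activation patterns" and RTs are simulated, and scikit-learn's LinearSVC stands in for the MEG decoder. The key idea is that the signed classifier output, scaled by the weight norm, gives a geometric distance from the decision boundary that can be correlated with behavior.

```python
import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated "activation patterns": two classes (animate vs. inanimate),
# 100 trials each, 20 features, class means separated along every axis.
n_per_class, n_feat = 100, 20
animate = rng.normal(loc=1.0, scale=1.0, size=(n_per_class, n_feat))
inanimate = rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, n_feat))
X = np.vstack([animate, inanimate])
y = np.array([1] * n_per_class + [0] * n_per_class)

clf = LinearSVC(C=1.0).fit(X, y)

# decision_function returns w.x + b; dividing by ||w|| converts it to a
# geometric (signed) distance from the decision boundary.
dist = clf.decision_function(X) / np.linalg.norm(clf.coef_)

# Toy RTs: responses are faster for patterns far from the boundary,
# plus noise -- the qualitative relationship the study tests.
rt = 600 - 40 * np.abs(dist) + rng.normal(0, 20, size=len(dist))

r, p = pearsonr(np.abs(dist), rt)
print(f"correlation between |distance| and RT: r = {r:.2f}")
```

With simulated data constructed this way the correlation is strongly negative (larger distance, faster RT); in the study the analogous relationship is tested against observed RTs, accuracy, and drift rates.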
Publisher: Pion Ltd
Date: 04-2015
DOI: 10.1068/i0723sas
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 10-01-2013
DOI: 10.1167/13.1.17
Abstract: Motion in depth can be perceived from binocular cues alone, yet it is unclear whether these cues support speed sensitivity in the absence of the monocular cues that normally co-occur in natural viewing. We measure threshold contours in space-time for the discrimination of three-dimensional (3D) motion to determine whether observers use speed to discriminate a test 3D motion from two identical standards. We compare thresholds for random-dot stereograms (RDS), which contain both binocular cues to 3D motion (interocular velocity difference and changing disparity over time), with performance for dynamic random-dot stereograms (DRDS), which contain only the second cue. Threshold contours are tilted along the axis of constant velocity in space-time for RDS stimuli at slow speeds (0.5 m/s), evidence for speed sensitivity. However, for higher speeds (1.5 m/s) and DRDS stimuli, observers rely on the component cues of duration and disparity. In a second experiment, noise of constant velocity is added to the standards to degrade the reliability of these separate components. Again there is evidence for speed tuning for RDS, but not for DRDS. Considerable variation is observed in the ability of individual observers to use the different cues in both experiments; however, in general the results emphasize the importance of interocular velocity difference as a critical cue for speed sensitivity to motion in depth, and suggest that speed sensitivity to stereomotion from binocular cues is restricted to relatively slow speeds.
Publisher: Society for Neuroscience
Date: 05-11-2014
Publisher: Springer Science and Business Media LLC
Date: 19-06-2019
DOI: 10.3758/s13414-019-01782-9
Abstract: The human visual system is capable of processing an enormous amount of information in a short time. Although rapid target detection has been explored extensively, less is known about target localization. Here we used natural scenes and explored the relationship between being able to detect a target (present vs. absent) and being able to localize it. Across four presentation durations (~ 33-199 ms), participants viewed scenes taken from two superordinate categories (natural and manmade), each containing exemplars from four basic scene categories. In a two-interval forced choice task, observers were asked to detect a Gabor target inserted in one of the two scenes. This was followed by one of two different localization tasks. Participants were asked either to discriminate whether the target was on the left or the right side of the display or to click on the exact location where they had seen the target. Targets could be detected and localized at our shortest exposure duration (~ 33 ms), with a predictable improvement in performance with increasing exposure duration. We saw some evidence at this shortest duration of detection without localization, but further analyses demonstrated that these trials typically reflected coarse or imprecise localization information, rather than its complete absence. Experiment 2 replicated our main findings while exploring the effect of the level of "openness" in the scene. Our results are consistent with the notion that when we are able to extract what objects are present in a scene, we also have information about where each object is, which provides crucial guidance for our goal-directed actions.
Publisher: Elsevier BV
Date: 05-2016
DOI: 10.1016/j.neuroimage.2016.02.019
Abstract: Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG); however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50 ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80 ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall, the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity.
Start Date: 06-2018
End Date: 06-2018
Amount: $393,996.00
Funder: Australian Research Council
View Funded Activity