ORCID Profile
0000-0002-6408-0359
Current Organisation
University of Queensland
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Psychology | Sensory Systems
Publisher: Elsevier BV
Date: 05-2013
DOI: 10.1016/J.CUB.2013.03.050
Abstract: When we move our eyes, images of objects are displaced on the retina, yet the visual world appears stable. Oculomotor activity just prior to an eye movement contributes to perceptual stability by providing information about the predicted location of a relevant object on the retina following a saccade. It remains unclear, however, whether an object's features are represented at the remapped location. Here, we exploited the phenomenon of visual crowding to show that presaccadic remapping preserves the elementary features of objects at their predicted postsaccadic locations. Observers executed an eye movement and identified a letter probe flashed just before the saccade. Flanking stimuli were flashed around the location that would be occupied by the probe immediately following the saccade. Despite being positioned in the opposite visual field to the probe, these flankers disrupted observers' ability to identify the probe. Crucially, this "remapped crowding" interference was stronger when the flankers were visually similar to the probe than when the flanker and probe stimuli were distinct. Our findings suggest that visual processing at remapped locations is featurally dependent, providing a mechanism for achieving perceptual continuity of objects across saccades.
Publisher: Cold Spring Harbor Laboratory
Date: 18-09-2023
Publisher: SAGE Publications
Date: 16-03-2022
DOI: 10.1177/03010066221083397
Abstract: The THINGS database is a freely available stimulus set that has the potential to facilitate the generation of theory that bridges multiple areas within cognitive neuroscience. The database consists of 26,107 high quality digital photos that are sorted into 1,854 concepts. While a valuable resource, relatively few technical details relevant to the design of studies in cognitive neuroscience have been described. We present an analysis of two key low-level properties of THINGS images, luminance and luminance contrast. These image statistics are known to influence common physiological and neural correlates of perceptual and cognitive processes. In general, we found that the distributions of luminance and contrast are in close agreement with the statistics of natural images reported previously. However, we found that image concepts are separable in their luminance and contrast: we show that luminance and contrast alone are sufficient to classify images into their concepts with above chance accuracy. We describe how these factors may confound studies using the THINGS images, and suggest simple controls that can be implemented a priori or post-hoc. We discuss the importance of using such natural images as stimuli in psychological research.
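The two image statistics analysed above can be computed directly from pixel values. A minimal sketch, assuming greyscale images as NumPy arrays; the function name and the RMS definition of contrast are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def luminance_stats(image):
    """Mean luminance and RMS contrast of a greyscale image.

    Mean luminance: average pixel intensity. RMS contrast: standard
    deviation of intensities divided by the mean (one common definition
    for natural images; other normalisations exist).
    """
    pixels = np.asarray(image, dtype=float).ravel()
    mean_lum = pixels.mean()
    rms_contrast = pixels.std() / mean_lum if mean_lum > 0 else 0.0
    return mean_lum, rms_contrast

# Illustrative use: a mid-grey image with mild Gaussian noise.
rng = np.random.default_rng(0)
img = 128 + rng.normal(0, 10, size=(64, 64))
mean_lum, rms_contrast = luminance_stats(img)
```

Statistics like these, computed per image, are what make concepts separable and hence classifiable above chance.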
Publisher: Springer Science and Business Media LLC
Date: 05-04-2017
DOI: 10.1038/SREP45551
Abstract: Although we perceive a richly detailed visual world, our ability to identify individual objects is severely limited in clutter, particularly in peripheral vision. Models of such “crowding” have generally been driven by the phenomenological misidentifications of crowded targets: using stimuli that do not easily combine to form a unique symbol (e.g. letters or objects), observers typically confuse the source of objects and report either the target or a distractor, but when continuous features are used (e.g. orientated gratings or line positions) observers report a feature somewhere between the target and distractor. To reconcile these accounts, we develop a hybrid method of adjustment that allows detailed analysis of these multiple error categories. Observers reported the orientation of a target, under several distractor conditions, by adjusting an identical foveal target. We apply new modelling to quantify whether perceptual reports show evidence of positional uncertainty, source confusion, and featural averaging on a trial-by-trial basis. Our results show that observers make a large proportion of source-confusion errors. However, our study also reveals the distribution of perceptual reports that underlie performance in this crowding task more generally: aggregate errors cannot be neatly labelled because they are heterogeneous and their structure depends on target-distractor distance.
Publisher: Springer Science and Business Media LLC
Date: 18-02-2019
DOI: 10.1038/S41598-018-37084-7
Abstract: The visual system is required to compute objects from partial image structure so that figures can be segmented from their backgrounds. Although early clinical, behavioral, and modeling data suggested that such computations are performed pre-attentively, recent neurophysiological evidence suggests that surface filling-in is influenced by attention. In the present study we developed a variant of the classical Kanizsa illusory triangle to investigate whether voluntary attention modulates perceptual filling-in. Our figure consists of “pacmen” positioned at the tips of an illusory 6-point star and alternating in polarity such that two illusory triangles are implied to compete with one another within the figure. On each trial, observers were cued to attend to only one triangle, and then compared its lightness with a matching texture-defined triangle. We found that perceived lightness of the illusory shape depended on the polarity of pacmen framing the attended triangle. Our findings thus reveal that, for overlapping illusory surfaces, lightness judgements can depend on voluntary attention. Our novel stimulus may prove useful in future attempts to link neurophysiological effects to phenomenology.
Publisher: Elsevier BV
Date: 2022
DOI: 10.1016/J.CORTEX.2021.10.008
Abstract: A person's ability to recognise familiar faces is critical to their participation in many aspects of society. Following an acquired brain injury or retinal disease, however, faces can appear distorted, a phenomenon known as prosopometamorphopsia. Although case reports have described a variety of changes in the appearance of faces during prosopometamorphopsia, the influence of the disorder on face recognition has not been rigorously investigated. In the present report, we quantify how well healthy observers can recognise familiar faces that have been distorted using a parametric model of prosopometamorphopsia. Our results reveal that face recognition varies systematically with the parameters of visual distortion, which, importantly, interact with the size of the face in a nonlinear but highly predictable manner. Our findings demonstrate that prosopometamorphopsia can lead to a surprising range of changes in the appearance of faces. The impact of visual distortion on face recognition thus depends critically on the distance at which the face is viewed, which is likely to change across social and clinical contexts.
Publisher: Cold Spring Harbor Laboratory
Date: 08-11-2018
DOI: 10.1101/216341
Abstract: The sensory recruitment hypothesis states that visual short term memory is maintained in the same visual cortical areas that initially encode a stimulus’ features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short term memory is similarly constrained by the cortical spacing of memory items. Here we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, sequentially presenting memoranda in peripheral vision along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we sequentially presented memoranda either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behaviour of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas, such as posterior parietal or prefrontal regions, or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short term memory remains contentious. 
A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others, while preventing perceptual interference between stimuli. We found clear evidence that short term memory is independent of the intra-cortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception.
Publisher: Springer Science and Business Media LLC
Date: 18-08-2016
DOI: 10.1038/SREP31861
Abstract: Most eye movements in the real world redirect the foveae to objects at a new depth and thus require the co-ordination of monocular saccade amplitudes and binocular vergence eye movements. Additionally, to maintain the accuracy of these oculomotor control processes across the lifespan, ongoing calibration is required to compensate for errors in foveal landing positions. Such oculomotor plasticity has generally been studied under conditions in which both eyes receive a common error signal, which cannot resolve the long-standing debate regarding whether both eyes are innervated by a common cortical signal or by a separate signal for each eye. Here we examine oculomotor plasticity when error signals are independently manipulated in each eye, which can occur naturally owing to aging changes in each eye’s orbit and extra-ocular muscles, or in oculomotor dysfunctions. We find that both rapid saccades and slow vergence eye movements are continuously recalibrated independently of one another and corrections can occur in opposite directions in each eye. Whereas existing models assume a single cortical representation of space employed for the control of both eyes, our findings provide evidence for independent monoculomotor and binoculomotor plasticities and dissociable spatial mapping for each eye.
Publisher: Springer Science and Business Media LLC
Date: 09-2023
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 22-01-2014
DOI: 10.1167/14.1.21
Abstract: Humans make smooth pursuit eye movements to foveate moving objects of interest. It is known that smooth pursuit alters visual processing, but there is currently no consensus on whether changes in vision are contingent on the direction the eyes are moving. We recently showed that visual crowding can be used as a sensitive measure of changes in visual processing, resulting from involvement of the saccadic eye movement system. The present paper extends these results by examining the effect of smooth pursuit eye movements on the spatial extent of visual crowding: the area over which visual stimuli are integrated. We found systematic changes in crowding that depended on the direction of pursuit and the distance of stimuli from the pursuit target. Relative to when no eye movement was made, the spatial extent of crowding increased for objects located contraversive to the direction of pursuit at an eccentricity of approximately 3°. By contrast, crowding for objects located ipsiversive to the direction of pursuit remained unchanged. There was no change in crowding during smooth pursuit for objects located approximately 7° from the fovea. The increased size of the crowding zone for the contraversive direction may be related to the distance that the fovea lags behind the pursuit target during smooth eye movements. Overall, our results reveal that visual perception is altered dynamically according to the intended destination of oculomotor commands.
Publisher: Springer Science and Business Media LLC
Date: 16-04-2021
DOI: 10.3758/S13414-021-02245-W
Abstract: Spatial location is believed to have a privileged role in binding features held in visual working memory. Supporting this view, Pertzov and Husain ( Attention, Perception, & Psychophysics , 76 (7), 1914–1924, 2014) reported that recall of bindings between visual features was selectively impaired when items were presented sequentially at the same location compared to sequentially at different locations. We replicated their experiment, but additionally tested whether the observed impairment could be explained by perceptual interference during encoding. Participants viewed four oriented bars in highly discriminable colors presented sequentially either at the same or different locations, and after a brief delay were cued with one color to reproduce the associated orientation. When we used the same timing as the original study, we reproduced its key finding of impaired binding memory in the same-location condition. Critically, however, this effect was significantly modulated by the duration of the inter-stimulus interval, and disappeared if memoranda were presented with longer delays between them. In a second experiment, we tested whether the effect generalized to other visual features, namely reporting of colors cued by stimulus shape. While we found performance deficits in the same-location condition, these did not selectively affect binding memory. We argue that the observed effects are best explained by encoding interference, and that memory for feature binding is not necessarily impaired when memoranda share the same location.
Publisher: Cold Spring Harbor Laboratory
Date: 17-06-2021
DOI: 10.1101/2021.06.16.448761
Abstract: The sensitivity of the human visual system is thought to be shaped by environmental statistics. A major endeavour in vision science, therefore, is to uncover the image statistics that predict perceptual and cognitive function. When searching for targets in natural images, for example, it has recently been proposed that target detection is inversely related to the spatial similarity of the target to its local background. We tested this hypothesis by measuring observers’ sensitivity to targets that were blended with natural image backgrounds. Targets were designed to have a spatial structure that was either similar or dissimilar to the background. Contrary to masking from similarity, we found that observers were most sensitive to targets that were most similar to their backgrounds. We hypothesised that a coincidence of phase-alignment between target and background results in a local contrast signal that facilitates detection when target-background similarity is high. We confirmed this prediction in a second experiment. Indeed, we show that, by solely manipulating the phase of a target relative to its background, the target can be rendered easily visible or undetectable. Our study thus reveals that, in addition to its structural similarity, the phase of the target relative to the background must be considered when predicting detection sensitivity in natural images.
Publisher: Society for Neuroscience
Date: 13-02-2013
DOI: 10.1523/JNEUROSCI.4172-12.2013
Abstract: Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account that suggests crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ∼50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments.
Publisher: Public Library of Science (PLoS)
Date: 21-09-2012
Publisher: SAGE Publications
Date: 03-2019
Abstract: I analyse the visibility of “groomed” ski runs under different lighting conditions. A model of human contrast sensitivity predicts that the spatial period of groomed snow may render it invisible in the shade or on overcast days. I confirm this prediction with visual demonstrations and make a suggestion to improve visibility.
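The key quantity in the contrast-sensitivity argument above is the spatial frequency of the grooming pattern at the eye. A minimal sketch using the small-angle approximation; the function name and the example numbers are hypothetical, not taken from the paper:

```python
import math

def cycles_per_degree(period_m, distance_m):
    """Spatial frequency (cycles per degree of visual angle) of a
    repeating surface pattern with spatial period `period_m` (metres),
    viewed from `distance_m` metres, via the small-angle approximation."""
    degrees_per_cycle = math.degrees(period_m / distance_m)
    return 1.0 / degrees_per_cycle

# Hypothetical example: grooming ridges 3 cm apart viewed from 20 m
# land in the double-digit cycles-per-degree range, where contrast
# sensitivity is much lower than at its peak of a few cycles per degree.
freq = cycles_per_degree(0.03, 20.0)
```

The same pattern viewed from further away maps to a still higher spatial frequency, which is why a low-contrast groomed surface can drop below threshold in flat light.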
Publisher: Elsevier BV
Date: 05-2016
Publisher: Society for Neuroscience
Date: 21-05-2014
DOI: 10.1523/JNEUROSCI.5252-13.2014
Abstract: The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object's constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has been hotly debated (Melcher and Morrone, 2003; Melcher, 2005, 2010; Knapen et al., 2009; Cavanagh et al., 2010a,b; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feedforward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011) that begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify and its link with trans-saccadic object perception instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).
Publisher: Springer Science and Business Media LLC
Date: 13-02-2019
DOI: 10.3758/S13414-019-01678-8
Abstract: The extent to which visual inference is shaped by attentional goals is unclear. Voluntary attention may simply modulate the priority with which information is accessed by the higher cognitive functions involved in perceptual decision making. Alternatively, voluntary attention may influence fundamental visual processes, such as those involved in segmenting an incoming retinal signal into a structured scene of coherent objects, thereby determining perceptual organization. Here we tested whether the segmentation and integration of visual form can be determined by an observer's goals, by exploiting a novel variant of the classical Kanizsa figure. We generated predictions about the influence of attention with a machine classifier and tested these predictions with a psychophysical response classification technique. Despite seeing the same image on each trial, observers' perception of illusory spatial structure depended on their attentional goals. These attention-contingent illusory contours directly conflicted with other, equally plausible visual forms implied by the geometry of the stimulus, revealing that attentional selection can determine the perceived layout of a fragmented scene. Attentional goals, therefore, not only select precomputed features or regions of space for prioritized processing, but under certain conditions also greatly influence perceptual organization, and thus visual appearance.
Publisher: American Physiological Society
Date: 05-2019
Abstract: Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviors is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border ownership cells (Zhou H, Friedman HS, von der Heydt R. J Neurosci 20: 6594–6611, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioral correlate supporting the existence of these cells in humans was demonstrated with two-dimensional luminance-defined objects (von der Heydt R, Macuda T, Qiu FT. J Opt Soc Am A Opt Image Sci Vis 22: 2222–2229, 2005). However, objects in our natural visual environments are often signaled by complex cues, such as motion and binocular disparity. Thus for border ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measured in humans (of both sexes) border ownership-dependent tilt aftereffects after adaptation to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Furthermore, we find that the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff HH, Mallot HA. J Opt Soc Am A 5: 1749–1758, 1988). These results suggest that border ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments. NEW & NOTEWORTHY Figure-ground segmentation is a critical function that may be supported by “border ownership” neural systems that conditionally respond to object borders. 
We measured border ownership-dependent tilt aftereffects to figures defined by motion parallax or binocular disparity and found aftereffects for both cues. These effects were transferable between cues but selective for figure-ground depth order, suggesting that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.
Publisher: Elsevier BV
Date: 10-2015
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2015.08.001
Abstract: Frontal dynamic aphasia is characterised by a profound reduction in spontaneous speech despite well-preserved naming, repetition and comprehension. Since Luria (1966, 1970) designated this term, two main forms of dynamic aphasia have been identified: one, a language-specific selection deficit at the level of word/sentence generation, associated with left inferior frontal lesions and two, a domain-general impairment in generating multiple responses or connected speech, associated with more extensive bilateral frontal and/or frontostriatal damage. Both forms of dynamic aphasia have been interpreted as arising due to disturbances in early prelinguistic conceptual preparation mechanisms that are critical for language production. We investigate language-specific and domain-general accounts of dynamic aphasia and address two issues: one, whether deficits in multiple conceptual preparation mechanisms can co-occur and two, the contribution of broader cognitive processes such as energization, the ability to initiate and sustain response generation over time, to language generation failure. Thus, we report patient WAL who presented with frontal dynamic aphasia in the context of progressive supranuclear palsy (PSP). WAL was given a series of experimental tests that showed that his dynamic aphasia was not underpinned by a language-specific deficit in selection or in microplanning. By contrast, WAL presented with a domain-general deficit in fluent sequencing of novel thoughts. The latter replicated the pattern documented in a previous PSP patient (Robinson et al., 2006); however, unique to WAL, generating novel thoughts was impaired but there was no evidence of a sequencing deficit because perseveration was absent. Thus, WAL is the first unequivocal case to show a distinction between novel thought generation and subsequent fluent sequencing.
Moreover, WAL's generation deficit encompassed verbal and non-verbal responses, showing a similar (but more profoundly reduced) pattern of performance to frontal patients with an energization deficit. In addition to impaired generation of novel thoughts, WAL presented with a concurrent strategy generation deficit, both falling within the second form of dynamic aphasia comprised of domain-general conceptual preparation mechanisms. Thus, within this second form of dynamic aphasia, concurrent deficits can co-occur. Overall, WAL presented with the second form of dynamic aphasia and was impaired in the generation of novel thoughts and internally-generated strategies, in the context of PSP and bilateral frontostriatal damage.
Publisher: Cold Spring Harbor Laboratory
Date: 17-09-2022
DOI: 10.1101/2022.09.15.507892
Abstract: Humans have well-documented priors for many features present in nature that guide visual perception. Despite being putatively grounded in the statistical regularities of the environment, scene priors are frequently violated due to the inherent variability of visual features from one scene to the next. However, these repeated violations do not appreciably challenge visuo-cognitive function, necessitating the broad use of priors in conjunction with context-specific information. We investigated the trade-off between participants’ internal expectations formed from both longer-term priors and those formed from immediate contextual information using a perceptual inference task and naturalistic stimuli. Notably, our task required participants to make perceptual inferences about naturalistic images using their own internal criteria, rather than making comparative judgements. Nonetheless, we show that observers’ performance is well approximated by a model that makes inferences using a prior for low-level image statistics, aggregated over many images. We further show that the dependence on this prior is rapidly re-weighted against contextual information, whether relevant or irrelevant. Our results therefore provide insight into how apparent high-level interpretations of scene appearances follow from the most basic of perceptual processes, which are grounded in the statistics of natural images.
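The trade-off between a long-term prior and immediate contextual evidence described above is often modelled as precision-weighted Gaussian cue combination. A minimal sketch of that standard rule, offered as a generic illustration rather than as the paper's actual model:

```python
def combine_gaussian(prior_mean, prior_var, context_mean, context_var):
    """Precision-weighted combination of a Gaussian prior with Gaussian
    contextual evidence. The more reliable (lower-variance) source gets
    the larger weight, so reliable context 're-weights' the prior's
    influence on the inferred value."""
    prior_precision = 1.0 / prior_var
    context_precision = 1.0 / context_var
    total_precision = prior_precision + context_precision
    mean = (prior_precision * prior_mean
            + context_precision * context_mean) / total_precision
    return mean, 1.0 / total_precision

# Equal reliability: the estimate lands halfway between prior and context,
# and the combined variance is lower than either source alone.
mean, var = combine_gaussian(0.0, 1.0, 10.0, 1.0)
```

Lowering `context_var` relative to `prior_var` pulls the estimate toward the contextual evidence, the qualitative re-weighting the study reports.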
Publisher: Society for Neuroscience
Date: 19-02-2018
DOI: 10.1523/JNEUROSCI.2645-17.2017
Abstract: The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. 
A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception.
Publisher: Elsevier BV
Date: 12-2015
Publisher: SAGE Publications
Date: 02-2010
Abstract: Objective: The aim of this study was to assess how background visual motion and the relative movement of sound affect a head-mounted display (HMD) wearer’s performance at a task requiring integration of auditory and visual information. Background: HMD users are often mobile. A commercially available speaker in a fixed location delivers auditory information affordably to the HMD user. However, previous research has shown that mobile HMD users perform poorly at tasks that require integration of visual and auditory information when sound comes from a free-field speaker. The specific cause of the poor task performance is unknown. Method: Participants counted audiovisual events that required integration of sounds delivered via a free-field speaker and vision on an HMD. Participants completed the task while either walking around a room, sitting in the room, or sitting inside a mobile room that allowed separate manipulation of background visual motion and speaker motion. Results: Participants’ accuracy at counting target audiovisual events was worse when participants were walking than when sitting at a desk, p = .032. Compared with when they were sitting at a desk, participants’ accuracy at counting target audiovisual events showed a trend to be worse when they experienced a combination of background visual motion and the relative movement of sound, p = .058. Conclusion: Multisensory integration performance is least effective when HMD users experience a combination of background visual motion and relative movement of sound. Eye reflexes may play an important role. Application: Results apply to situations in which HMD wearers are mobile when receiving multimodal information, as in health care and military contexts.
Publisher: Cold Spring Harbor Laboratory
Date: 09-07-2021
DOI: 10.1101/2021.07.08.451706
Abstract: The THINGS database is a freely available stimulus set that has the potential to facilitate the generation of theory that bridges multiple areas within cognitive neuroscience. The database consists of 26,107 high quality digital photos that are sorted into 1,854 concepts. While a valuable resource, relatively few technical details relevant to the design of studies in cognitive neuroscience have been described. We present an analysis of two key low-level properties of THINGS images, luminance and luminance contrast. These image statistics are known to influence common physiological and neural correlates of perceptual and cognitive processes. In general, we found that the distributions of luminance and contrast are in close agreement with the statistics of natural images reported previously. However, we found that image concepts are separable in their luminance and contrast: we show that luminance and contrast alone are sufficient to classify images into their concepts with above chance accuracy. We describe how these factors may confound studies using the THINGS images, and suggest simple controls that can be implemented a priori or post-hoc. We discuss the importance of using such natural images as stimuli in psychological research.
Publisher: Elsevier BV
Date: 12-2019
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 2015
End Date: 2019
Funder: National Health and Medical Research Council
View Funded Activity
Start Date: 2019
End Date: 2022
Funder: Australian Research Council
View Funded Activity
Start Date: 07-2019
End Date: 12-2024
Amount: $385,288.00
Funder: Australian Research Council
View Funded Activity