ORCID Profile
0000-0001-9573-8654
Current Organisation
The University of Auckland
Publisher: Cold Spring Harbor Laboratory
Date: 03-01-2023
DOI: 10.1101/2023.01.02.522484
Abstract: Shape perception is essential for numerous everyday behaviors from object recognition to grasping and handling objects. Yet how the brain encodes shape remains poorly understood. Here, we probed shape representations using visual aftereffects—perceptual distortions that occur following extended exposure to a stimulus—to resolve a long-standing debate about shape encoding. We implemented contrasting low-level and high-level computational models of neural adaptation, which made precise and distinct predictions about the illusory shape distortions the observers experience following adaptation. Directly pitting the predictions of the two models against one another revealed that the perceptual distortions are driven by high-level shape attributes derived from the statistics of natural shapes. Our findings suggest that the diverse shape attributes thought to underlie shape encoding (e.g., curvature distributions, ‘skeletons’, aspect ratio) are the result of a visual system that learns to encode natural shape geometries based on observing many objects.
Publisher: Cold Spring Harbor Laboratory
Date: 09-12-2022
DOI: 10.1101/2022.12.09.519756
Abstract: When we look at an object, we simultaneously see how glossy or matte it is, how light or dark, and what color. Yet, at each point on the object’s surface, both diffuse and specular reflections are mixed in different proportions, resulting in substantial spatial chromatic and luminance variations. To further complicate matters, this pattern changes radically when the object is viewed under different lighting conditions. The purpose of this study was to simultaneously measure our ability to judge color and gloss using an image set capturing diverse object and illuminant properties. Participants adjusted the hue, lightness, chroma, and specular reflectance of a reference object so that it appeared to be made of the same material as a test object. Critically, the two objects were presented under different lighting environments. We found that hue matches were highly accurate, except under a chromatically atypical illuminant. Chroma and lightness constancy were generally poor, but these failures correlated well with simple image statistics. Gloss constancy was particularly poor, and these failures were only partially explained by reflection contrast. Importantly, across all measures, participants were highly consistent with one another in their deviations from constancy. Although color and gloss constancy hold well in simple conditions, the variety of lighting and shape in the real world presents significant challenges to our visual system’s ability to judge intrinsic material properties.
Publisher: No publisher found
Date: 2016
Publisher: Elsevier BV
Date: 05-2023
Publisher: SAGE Publications
Date: 17-03-2021
Abstract: One of the deepest insights in neuroscience is that sensory encoding should take advantage of statistical regularities. Humans’ visual experience contains many redundancies: Scenes mostly stay the same from moment to moment, and nearby image locations usually have similar colors. A visual system that knows which regularities shape natural images can exploit them to encode scenes compactly or guess what will happen next. Although these principles have been appreciated for more than 60 years, until recently it has been possible to convert them into explicit models only for the earliest stages of visual processing. But recent advances in unsupervised deep learning have changed that. Neural networks can be taught to compress images or make predictions in space or time. In the process, they learn the statistical regularities that structure images, which in turn often reflect physical objects and processes in the outside world. The astonishing accomplishments of unsupervised deep learning reaffirm the importance of learning statistical regularities for sensory coding and provide a coherent framework for how knowledge of the outside world gets into visual cortex.
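As a concrete illustration of the compression idea sketched in this abstract, the toy example below trains an autoencoder to reconstruct images through a low-dimensional bottleneck, so the only way it can succeed is by learning regularities shared across its inputs. This is a minimal sketch assuming PyTorch; the architecture, sizes, and random stand-in images are illustrative choices, not taken from the paper.

```python
# Toy illustration (not from the paper): a network trained only to
# reconstruct images through a bottleneck must learn regularities that
# are shared across its inputs. Assumes PyTorch; sizes are arbitrary.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, bottleneck=32):
        super().__init__()
        # Encoder: compress a 1x28x28 image into a short code.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, bottleneck),
        )
        # Decoder: reconstruct the image from the code alone.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(64, 1, 28, 28)  # stand-in for natural image patches

for step in range(100):
    loss = nn.functional.mse_loss(model(images), images)  # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```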
Publisher: Elsevier BV
Date: 11-2022
DOI: 10.1016/J.CUB.2022.09.036
Abstract: The discovery of mental rotation was one of the most significant landmarks in experimental psychology, leading to the ongoing assumption that to visually compare objects from different three-dimensional viewpoints, we use explicit internal simulations of object rotations, to 'mentally adjust' one object until it matches the other.
Publisher: Cold Spring Harbor Laboratory
Date: 08-05-2020
DOI: 10.1101/2020.05.07.082743
Abstract: Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual areas in the brain. What remains unclear is how strongly network design choices, such as architecture, task training, and subsequent fitting to brain data contribute to the observed similarities. Here we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 isolated object images in human inferior temporal (hIT) cortex, as measured with functional magnetic resonance imaging. We compare untrained networks to their task-trained counterparts, and assess the effect of fitting them to hIT using a cross-validation procedure. To best explain hIT, we fit a weighted combination of the principal components of the features within each layer, and subsequently a weighted combination of layers. We test all models across all stages of training and fitting for their correlation with the hIT representational dissimilarity matrix (RDM) using an independent set of images and subjects. We find that trained models significantly outperform untrained models (accounting for 57% more of the explainable variance), suggesting that features representing natural images are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the particular ImageNet object-recognition task used to train the networks. Finally, all DNN architectures tested achieved equivalent high performance once trained and fitted. Similar ability to explain hIT representations appears to be shared among deep feedforward hierarchies of nonlinear features with spatially restricted receptive fields.
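The model-brain comparison this abstract describes centers on correlating representational dissimilarity matrices. The sketch below shows that core step with random stand-ins for the DNN features and hIT data (NumPy/SciPy assumed); it is illustrative only and omits the paper's PCA-weighted layer fitting and cross-validation.

```python
# Schematic sketch (not the authors' pipeline): correlate a model RDM,
# built from network features, with a brain RDM. Random stand-in data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_images = 62
features = np.random.randn(n_images, 4096)     # stand-in for one DNN layer
hit_patterns = np.random.randn(n_images, 100)  # stand-in for hIT voxel patterns

# RDMs: condensed vectors of pairwise correlation distances.
model_rdm = pdist(features, metric="correlation")
hit_rdm = pdist(hit_patterns, metric="correlation")

# Compare the two RDMs with a rank correlation.
rho, _ = spearmanr(model_rdm, hit_rdm)
print(f"model-brain RDM correlation: {rho:.3f}")
```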
Publisher: Springer Science and Business Media LLC
Date: 25-02-2021
DOI: 10.1038/S41597-021-00851-9
Abstract: A Correction to this paper has been published: 10.1038/s41597-021-00851-9.
Publisher: American Psychological Association (APA)
Date: 06-2013
DOI: 10.1037/A0032240
Abstract: One of the oldest known visual aftereffects is the shape aftereffect, wherein looking at a particular shape can make subsequent shapes seem distorted in the opposite direction. After viewing a narrow ellipse, for example, a perfect circle can look like a broad ellipse. It is thought that shape aftereffects are determined by the dimensions of successive retinal images. However, perceived shape is invariant for large retinal image changes resulting from different viewing angles, and current understanding suggests that shape aftereffects should not be impacted by the operations responsible for this viewpoint invariance. By viewing adaptors from an angle, with subsequent frontoparallel tests, we establish that shape aftereffects are not solely determined by the dimensions of successive retinal images. Moreover, by comparing performance with and without stereo surface slant cues, we show that shape aftereffects reflect a weighted function of retinal image shape and surface slant information, a hallmark of shape constancy operations. Thus, our data establish that shape aftereffects can be influenced by perceived shape, as determined by constancy operations, and must therefore involve higher-level neural substrates than previously thought.
Publisher: Cold Spring Harbor Laboratory
Date: 07-04-2020
DOI: 10.1101/2020.04.07.026120
Abstract: Reflectance, lighting, and geometry combine in complex ways to create images. How do we disentangle these to perceive individual properties, like surface glossiness? We suggest that brains disentangle properties by learning to model statistical structure in proximal images. To test this, we trained unsupervised generative neural networks on renderings of glossy surfaces and compared their representations with human gloss judgments. The networks spontaneously cluster images according to distal properties such as reflectance and illumination, despite receiving no explicit information about them. Intriguingly, the resulting representations also predict the specific patterns of ‘successes’ and ‘errors’ in human perception. Linearly decoding specular reflectance from the model’s internal code predicts human gloss perception better than ground truth, supervised networks, or control models, and predicts, on an image-by-image basis, illusions of gloss perception caused by interactions between material, shape, and lighting. Unsupervised learning may underlie many perceptual dimensions in vision, and beyond.
Publisher: Cold Spring Harbor Laboratory
Date: 25-03-2020
DOI: 10.1101/2020.03.23.003046
Abstract: An error was made in including noise ceilings for human data in Khaligh-Razavi and Kriegeskorte (2014). For comparability with the macaque data, human data were averaged across participants before analysis. Therefore the noise ceilings indicating variability across human participants do not accurately depict the upper bounds of possible model performance and should not have been shown. Creating noise ceilings appropriate for the fitted models is not trivial. Below we present a method for doing this, and the results obtained with this new method. The corrected results differ from the original results in that the best-performing model (weighted combination of AlexNet layers and category readouts) does not reach the lower bound of the noise ceiling. However, the best-performing model is not significantly below the lower bound of the noise ceiling. The claim that the model “fully explains” the human IT data appears overstated. All other claims of the paper are unaffected.
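For orientation, a common leave-one-subject-out construction of RDM noise ceilings is sketched below with stand-in data. This is the generic version; the erratum's point is precisely that a variant appropriate for fitted models is needed, which this sketch does not reproduce.

```python
# Generic leave-one-subject-out noise ceiling for RDM correlations
# (illustrative; NOT the erratum's fitted-model-appropriate variant).
import numpy as np
from scipy.stats import spearmanr

def noise_ceiling(subject_rdms):
    """subject_rdms: (n_subjects, n_pairs) array of vectorized RDMs."""
    n_subjects = subject_rdms.shape[0]
    grand_mean = subject_rdms.mean(axis=0)
    lower, upper = [], []
    for i in range(n_subjects):
        others_mean = np.delete(subject_rdms, i, axis=0).mean(axis=0)
        # Lower bound: each subject against the mean of the other subjects.
        lower.append(spearmanr(subject_rdms[i], others_mean)[0])
        # Upper bound: each subject against the overall mean (includes self).
        upper.append(spearmanr(subject_rdms[i], grand_mean)[0])
    return np.mean(lower), np.mean(upper)

rdms = np.random.rand(12, 1891)  # e.g., 12 subjects, 62*61/2 stimulus pairs
print(noise_ceiling(rdms))
```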
Publisher: American Psychological Association (APA)
Date: 2017
DOI: 10.1037/XHP0000292
Abstract: Adaptation to different visual properties can produce distinct patterns of perceptual aftereffect. Some, such as those following adaptation to color, seem to arise from recalibrative processes. These are associated with a reappraisal of which physical input constitutes a normative value in the environment: in this case, what appears "colorless," and what "colorful." Recalibrative aftereffects can arise from coding schemes in which inputs are referenced against malleable norm values. Other aftereffects seem to arise from contrastive processes. These exaggerate differences between the adaptor and other inputs without changing the adaptor's appearance. There has been conjecture over which process best describes adaptation-induced distortions of spatial vision, such as of apparent shape or facial identity. In 3 experiments, we determined whether recalibrative or contrastive processes underlie the shape aspect ratio aftereffect. We found that adapting to a moderately elongated shape compressed the appearance of narrower shapes and further elongated the appearance of more-elongated shapes (Experiment 1). Adaptation did not change the perceived aspect ratio of the adaptor itself (Experiment 2), and adapting to a circle induced similar bidirectional aftereffects on shapes narrower or wider than circular (Experiment 3). Results could not be explained by adaptation to retinotopically local edge orientation or single linear dimensions of shapes. We conclude that aspect ratio aftereffects are determined by contrastive processes that can exaggerate differences between successive inputs, inconsistent with a norm-referenced representation of aspect ratio. Adaptation might enhance the salience of novel stimuli rather than recalibrate one's sense of what constitutes a "normal" shape.
Publisher: Society for Neuroscience
Date: 15-01-2021
DOI: 10.1523/JNEUROSCI.1449-20.2020
Abstract: Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, social traits, gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information. SIGNIFICANCE STATEMENT Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
Publisher: Wiley
Date: 24-09-2021
DOI: 10.1111/PHC3.12777
Abstract: Common everyday materials such as textiles, foodstuffs, soil or skin can have complex, mutable and varied appearances. Under typical viewing conditions, most observers can visually recognize materials effortlessly, and determine many of their properties without touching them. Visual material perception raises many fascinating questions for vision researchers, neuroscientists and philosophers, yet has received little attention compared to the perception of color or shape. Here we discuss some of the challenges that material perception raises and argue that further philosophical thought should be directed to how we see materials.
Publisher: Society for Neuroscience
Date: 09-09-2020
Publisher: Proceedings of the National Academy of Sciences
Date: 29-06-2022
Abstract: Human vision is attuned to the subtle differences between individual faces. Yet we lack a quantitative way of predicting how similar two face images look and whether they appear to show the same person. Principal component–based three-dimensional (3D) morphable models are widely used to generate stimuli in face perception research. These models capture the distribution of real human faces in terms of dimensions of physical shape and texture. How well does a “face space” based on these dimensions capture the similarity relationships humans perceive among faces? To answer this, we designed a behavioral task to collect dissimilarity and same/different identity judgments for 232 pairs of realistic faces. Stimuli sampled geometric relationships in a face space derived from principal components of 3D shape and texture (Basel face model [BFM]). We then compared a wide range of models in their ability to predict the data, including the BFM from which faces were generated, an active appearance model derived from face photographs, and image-computable models of visual perception. Euclidean distance in the BFM explained both dissimilarity and identity judgments surprisingly well. In a comparison against 16 diverse models, BFM distance was competitive with representational distances in state-of-the-art deep neural networks (DNNs), including novel DNNs trained on BFM synthetic identities or BFM latents. Models capturing the distribution of face shape and texture across individuals are not only useful tools for stimulus generation. They also capture important information about how faces are perceived, suggesting that human face representations are tuned to the statistical distribution of faces.
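The study's headline test reduces to a simple computation: measure the straight-line distance between two faces' coordinates in the model's latent space and ask whether it predicts judged dissimilarity. The sketch below uses random stand-in data; the latent dimensionality and variable names are assumptions, not the study's.

```python
# Sketch with stand-in data (dimensions and names are assumptions):
# does Euclidean distance in a face model's latent space predict
# human dissimilarity judgments for face pairs?
import numpy as np
from scipy.stats import spearmanr

n_pairs, latent_dim = 232, 199
face_a = np.random.randn(n_pairs, latent_dim)   # latent coords of first face
face_b = np.random.randn(n_pairs, latent_dim)   # latent coords of second face
judged_dissimilarity = np.random.rand(n_pairs)  # stand-in behavioral data

# Model prediction: straight-line distance between latent coordinates.
model_distance = np.linalg.norm(face_a - face_b, axis=1)

rho, _ = spearmanr(model_distance, judged_dissimilarity)
print(f"rank correlation with judgments: {rho:.3f}")
```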
Publisher: Proceedings of the National Academy of Sciences
Date: 04-08-2021
Abstract: We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., with solving the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.
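The reliability-based cue weighting the network recapitulates is conventionally written as follows (a textbook cue-combination formulation, not quoted from the paper): when the cues are attributed to a common cause, each estimate is weighted by its reliability, the inverse of its variance.

```latex
\hat{s} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
w_{\mathrm{vest}} = 1 - w_{\mathrm{vis}}
```

The less reliable cue is thereby down-weighted, which is the behavioral signature the network reproduces.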
Publisher: Elsevier BV
Date: 09-2012
DOI: 10.1016/J.BIOPSYCHO.2012.05.008
Abstract: Immersive virtual environment technology is increasingly used by psychologists as a tool for researching social influence in realistic, yet experimentally controllable, settings. The present study demonstrates the validity and reliability of facial electromyography as a marker of affect in immersive virtual environments and further shows that the mere presence of virtual humans is enough to elicit sociality effects on facial expressiveness. Participants viewed pleasant and unpleasant images in a virtual room either alone or with two virtual humans present. The patterns of smiling and frowning activity elicited by positive and negative stimuli in the virtual environment were the same as those found in laboratory settings. Moreover, when viewing positive stimuli, smiling activity was greater when two agents were present than in the alone condition. The results provide new psychophysiological evidence for the potency of social agents in immersive virtual environments.
Publisher: Society for Neuroscience
Date: 15-06-2016
Publisher: Frontiers Media SA
Date: 09-10-2017
Publisher: Elsevier BV
Date: 10-2016
DOI: 10.1016/J.NEURON.2016.10.006
Abstract: "Grid cells" encode an animal's location and direction of movement in 2D physical environments via regularly repeating receptive fields. Constantinescu et al. (2016) report the first evidence of grid cells for 2D conceptual spaces. The work has exciting implications for mental representation and shows how detailed neural-coding hypotheses can be tested with bulk population-activity measures.
Publisher: Frontiers Media SA
Date: 19-02-2015
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 06-2015
DOI: 10.1167/15.8.1
Abstract: After looking at a photograph of someone for a protracted period (adaptation), a previously neutral-looking face can take on an opposite appearance in terms of gender, identity, and other attributes, but what happens to the appearance of other faces? Face aftereffects have repeatedly been ascribed to perceptual renormalization. Renormalization predicts that the adapting face and more extreme versions of it should appear more neutral after adaptation (e.g., if the adaptor was male, it and hyper-masculine faces should look more feminine). Other aftereffects, such as tilt and spatial frequency, are locally repulsive, exaggerating differences between adapting and test stimuli. This predicts that the adapting face should be little changed in appearance after adaptation, while more extreme versions of it should look even more extreme (e.g., if the adaptor was male, it should look unchanged, while hyper-masculine faces should look even more masculine). Existing reports do not provide clear evidence for either pattern. We overcame this by using a spatial comparison task to measure the appearance of stimuli presented in differently adapted retinal locations. In behaviorally matched experiments we compared aftereffect patterns after adapting to tilt, facial identity, and facial gender. In all three experiments data matched the predictions of a locally repulsive, but not a renormalizing, aftereffect. These data are consistent with the existence of similar encoding strategies for tilt, facial identity, and facial gender.
Publisher: Cold Spring Harbor Laboratory
Date: 14-05-2020
DOI: 10.1101/2020.05.12.090878
Abstract: Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
Publisher: Springer Science and Business Media LLC
Date: 17-01-2017
Publisher: Elsevier BV
Date: 07-2012
DOI: 10.1016/J.VISRES.2012.04.020
Abstract: After prolonged exposure to a female face, faces that had previously seemed androgynous are more likely to be judged as male. Similarly, after prolonged exposure to a face with expanded features, faces that had previously seemed normal are more likely to be judged as having contracted features. These facial aftereffects have both been attributed to the impact of adaptation upon a norm-based opponent code, akin to low-level analyses of colour. While a good deal of evidence is consistent with this, some recent data is contradictory, motivating a more rigorous test. In behaviourally matched tasks we compared the characteristics of aftereffects generated by adapting to colour, to expanded or contracted faces, and to male or female faces. In our experiments opponent coding predicted that the appearance of the adapting image should change and that adaptation should induce symmetrical shifts of two category boundaries. This combination of predictions was firmly supported for colour adaptation, somewhat supported for facial distortion aftereffects, but not supported for facial gender aftereffects. Interestingly, the two face aftereffects we tested generated discrepant patterns of response shifts. Our data suggest that superficially similar aftereffects can ensue from mechanisms that differ qualitatively, and therefore that not all high-level categorical face aftereffects can be attributed to a common coding strategy.
Publisher: IEEE
Date: 06-2018
Publisher: Pion Ltd
Date: 04-2015
DOI: 10.1068/I0725JC
Publisher: Cold Spring Harbor Laboratory
Date: 10-04-2021
DOI: 10.1101/2021.04.09.438859
Abstract: Human vision is attuned to the subtle differences between individual faces. Yet we lack a quantitative way of predicting how similar two face images look, or whether they appear to show the same person. Principal-components-based 3D morphable models are widely used to generate stimuli in face perception research. These models capture the distribution of real human faces in terms of dimensions of physical shape and texture. How well does a “face space” defined to model the distribution of faces as an isotropic Gaussian explain human face perception? We designed a behavioural task to collect dissimilarity and same/different identity judgements for 232 pairs of realistic faces. The stimuli densely sampled geometric relationships in a face space derived from principal components of 3D shape and texture (Basel Face Model, BFM). We then compared a wide range of models in their ability to predict the data, including the BFM from which faces were generated, a 2D morphable model derived from face photographs, and image-computable models of visual perception. Euclidean distance in the BFM explained both similarity and identity judgements surprisingly well. In a comparison against 14 alternative models, we found that BFM distance was competitive with representational distances in state-of-the-art image-computable deep neural networks (DNNs), including a novel DNN trained on BFM identities. Models describing the distribution of facial features across individuals are not only useful tools for stimulus generation. They also capture important information about how faces are perceived, suggesting that human face representations are tuned to the statistical distribution of faces.
Publisher: Elsevier BV
Date: 2015
Publisher: MIT Press - Journals
Date: 19-08-2021
DOI: 10.1162/JOCN_A_01755
Abstract: Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual cortex. What remains unclear is how strongly experimental choices, such as network architecture, training, and fitting to brain data, contribute to the observed similarities. Here, we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 object images in human inferior temporal cortex (hIT), as measured with fMRI. We compare untrained networks to their task-trained counterparts and assess the effect of cross-validated fitting to hIT, by taking a weighted combination of the principal components of features within each layer and, subsequently, a weighted combination of layers. For each combination of training and fitting, we test all models for their correlation with the hIT representational dissimilarity matrix, using independent images and subjects. Trained models outperform untrained models (accounting for 57% more of the explainable variance), suggesting that structured visual features are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the ImageNet object-recognition task used to train the networks. The same models can also explain the disparate representations in primary visual cortex (V1), where stronger weights are given to earlier layers. In each region, all architectures achieved equivalently high performance once trained and fitted. The models' shared properties—deep feedforward hierarchies of spatially restricted nonlinear filters—seem more important than their differences, when modeling human visual representations.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 29-07-2014
DOI: 10.1167/14.8.25
Abstract: Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis, a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.
Publisher: Elsevier BV
Date: 12-2019
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 26-01-2015
DOI: 10.1167/15.1.26
Abstract: Some data have been taken as evidence that after prolonged viewing, near-vertical orientations "normalize" to appear more vertical than they did previously. After almost a century of research, the existence of tilt normalization remains controversial. The most recent evidence for tilt normalization comes from data suggesting a measurable "perceptual drift" of near-vertical adaptors toward vertical, which can be nulled by a slight physical rotation away from vertical (Müller, Schillinger, Do, & Leopold, 2009). We argue that biases in estimates of perceptual stasis could, however, result from the anisotropic organization of orientation-selective neurons in V1, with vertically-selective cells being more narrowly tuned than obliquely-selective cells. We describe a neurophysiologically plausible model that predicts greater sensitivity to orientation displacements toward than away from vertical. We demonstrate the predicted asymmetric pattern of sensitivity in human observers by determining threshold speeds for detecting rotation direction (Experiment 1), and by determining orientation discrimination thresholds for brief static stimuli (Experiment 2). Results imply that data suggesting a perceptual drift toward vertical instead result from greater discrimination sensitivity around cardinal than oblique orientations (the oblique effect), and thus do not constitute evidence for tilt normalization.
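The link between tuning width and sensitivity that this model exploits can be stated with the standard population-coding relation (textbook form, not quoted from the article): for independent Poisson neurons with tuning curves f_i(θ), the Fisher information and the resulting discrimination threshold are

```latex
I(\theta) = \sum_i \frac{f_i'(\theta)^{2}}{f_i(\theta)},
\qquad
\Delta\theta \propto \frac{1}{\sqrt{I(\theta)}}
```

Narrower tuning around vertical means steeper tuning-curve flanks f_i'(θ) there, hence higher I(θ) and lower thresholds near vertical, producing the asymmetric sensitivity the experiments demonstrate.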
Publisher: Springer Science and Business Media LLC
Date: 06-05-2021
DOI: 10.1038/S41562-021-01097-6
Abstract: Reflectance, lighting and geometry combine in complex ways to create images. How do we disentangle these to perceive individual properties, such as surface glossiness? We suggest that brains disentangle properties by learning to model statistical structure in proximal images. To test this hypothesis, we trained unsupervised generative neural networks on renderings of glossy surfaces and compared their representations with human gloss judgements. The networks spontaneously cluster images according to distal properties such as reflectance and illumination, despite receiving no explicit information about these properties. Intriguingly, the resulting representations also predict the specific patterns of ‘successes’ and ‘errors’ in human perception. Linearly decoding specular reflectance from the model’s internal code predicts human gloss perception better than ground truth, supervised networks or control models, and it predicts, on an image-by-image basis, illusions of gloss perception caused by interactions between material, shape and lighting. Unsupervised learning may underlie many perceptual dimensions in vision and beyond.
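The linear-decoding analysis described here can be sketched with stand-in data as below (scikit-learn assumed; in the paper the readout would be evaluated on held-out images rather than the training set, which this sketch omits).

```python
# Illustrative sketch, not the authors' code: fit a linear readout of
# specular reflectance from an unsupervised model's latent code, then
# compare its predictions with human gloss judgements image by image.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

n_images, code_dim = 500, 10
latent_codes = np.random.randn(n_images, code_dim)  # model's internal code
true_reflectance = np.random.rand(n_images)         # ground-truth specular level
human_gloss = np.random.rand(n_images)              # stand-in human ratings

# Linear decoder from latent code to reflectance.
decoder = LinearRegression().fit(latent_codes, true_reflectance)
decoded_gloss = decoder.predict(latent_codes)

# Key comparison: decoded gloss vs. human judgements (including their errors).
print(spearmanr(decoded_gloss, human_gloss)[0])
```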
Location: United Kingdom of Great Britain and Northern Ireland
Location: Germany
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 2022
End Date: 2025
Funder: Marsden Fund