ORCID Profile
0000-0002-2035-2084
Current Organisations
Bond University, University of York
Publisher: Informa UK Limited
Date: 02-06-2017
Publisher: Wiley
Date: 14-09-2020
DOI: 10.1002/ACP.3739
Publisher: Elsevier BV
Date: 07-2013
DOI: 10.1016/J.COGNITION.2013.03.006
Abstract: When viewers are shown sets of similar objects (for example, circles), they may extract summary information (e.g., average size) while retaining almost no information about the individual items. A similar observation can be made when using sets of unfamiliar faces: Viewers tend to merge identity or expression information from the set exemplars into a single abstract representation, the set average. Here, across four experiments, sets of well-known, famous faces were presented. In response to a subsequent probe, viewers recognized the individual faces very accurately. However, they also reported having seen a merged 'average' of these faces. These findings suggest abstraction of set characteristics even in circumstances which favor individuation of the items. Moreover, the present data suggest that, although seemingly incompatible, exemplar and average representations co-exist for sets consisting of famous faces. This result suggests that representations are simultaneously formed at multiple levels of abstraction.
Publisher: Elsevier BV
Date: 09-2000
DOI: 10.1016/S1364-6613(00)01519-9
Abstract: People are excellent at identifying faces familiar to them, even from very low quality images, but are bad at recognizing, or even matching, unfamiliar faces. In this review we shall consider some of the factors that affect our abilities to match unfamiliar faces. Major differences in orientation (e.g. inversion) or greyscale information (e.g. negation) affect face processing dramatically, and such effects suggest that representations derived from unfamiliar faces are based on relatively low-level image descriptions. Consistent with this, even relatively minor differences in lighting and viewpoint create problems for human face matching, leading to potentially important problems for the use of images from security videos. The relationships between different parts of the face (its 'configuration') are as important to the impression created of an upright face as the local features themselves, suggesting further constraints on the representations derived from faces. We go on to consider the contribution of computer face-recognition systems to the understanding of the theory and the practical problems of face identification. Finally, we look to the future of research in this area that will incorporate motion and 3-D shape information.
Publisher: Informa UK Limited
Date: 08-2003
Publisher: Elsevier BV
Date: 09-1992
Publisher: Springer Science and Business Media LLC
Date: 28-03-2022
Publisher: SAGE Publications
Date: 13-08-2019
Abstract: First impressions formed after seeing someone’s face or hearing their voice can affect many social decisions, including voting in political elections. Despite the many studies investigating the independent contribution of face and voice cues to electoral success, their integration is still not well understood. Here, we examine a novel electoral context, student representative ballots, allowing us to test the generalizability of previous studies. We also examine the independent contributions of visual, auditory, and audiovisual information to social judgments of the candidates, and their relationship to election outcomes. Results showed that perceived trustworthiness was the only trait significantly related to election success. These findings contrast with previous reports on the importance of perceived competence using audio or visual cues only in the context of national political elections. The present study highlights the role of real-world context and emphasizes the importance of using ecologically valid stimulus presentation in understanding real-life social judgment.
Publisher: The Royal Society
Date: 10-10-2018
Abstract: Over our species history, humans have typically lived in small groups of under a hundred individuals. However, our face recognition abilities appear to equip us to recognize very many individuals, perhaps thousands. Modern society provides access to huge numbers of faces, but no one has established how many faces people actually know. Here, we describe a method for estimating this number. By combining separate measures of recall and recognition, we show that people know about 5000 faces on average and that individual differences are large. Our findings offer a possible explanation for large variation in identification performance. They also provide constraints on understanding the qualitative differences between perception of familiar and unfamiliar faces—a distinction that underlies all current theories of face recognition.
Publisher: Elsevier BV
Date: 11-2005
DOI: 10.1016/J.COGPSYCH.2005.06.003
Abstract: We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis, we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, over which we had no control of superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information, while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.
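The image-averaging and PCA approach summarised in this abstract can be sketched in a few lines. This is a minimal illustration only, assuming aligned same-size grayscale images; the function names and the synthetic data below are hypothetical, not taken from the paper:

```python
import numpy as np

def average_face(images):
    # Pixelwise mean of a list of aligned grayscale face images.
    return np.stack([img.astype(float) for img in images]).mean(axis=0)

def pca_basis(faces, n_components):
    # Flatten each face, centre the data, and take the top principal
    # directions via SVD (equivalent to PCA on the image set).
    X = np.stack([f.ravel() for f in faces])
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, components):
    # Coordinates of one face in the PCA space.
    return components @ (face.ravel() - mean)
```

In this scheme a new image would be identified by comparing its PCA coordinates against stored per-identity averages, the idea being that averaging discards pictorial variation while preserving identity-specific structure.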
Publisher: American Association for the Advancement of Science (AAAS)
Date: 25-01-2008
Abstract: Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.
Publisher: John Benjamins Publishing Company
Date: 17-05-2000
DOI: 10.1075/PC.8.1.03CAR
Abstract: It is well established that retrieval of names is harder than the retrieval of other identity specific information. This paper offers a review of the more influential accounts put forward as explanations of why names are so difficult to retrieve. A series of five experiments tests a number of these accounts. Experiments One to Three examine the claims that names are hard to recall because they are typically meaningless (Cohen 1990), or unique (Burton and Bruce 1992; Brédart, Valentine, Calder, and Gassi 1995). Participants are shown photographs of unfamiliar people (Experiments One and Two) or familiar people (Experiment Three) and given three pieces of information about each: a name, a unique piece of information, and a shared piece of information. Learning follows an incidental procedure, and participants are given a surprise recall test. In each experiment shared information is recalled most often, followed by unique information, followed by the name. Experiment Four tests both the ‘uniqueness’ account and an account based on the specificity of the naming response (Brédart 1993). Participants are presented with famous faces and asked to categorise them by semantic group (occupation). Results indicate that less time is needed to perform this task when the group is a subset of a larger semantic category. A final experiment examines the claim that names might take longer to access because they are less often retrieved than other classes of information. Latencies show that participants remain more efficient when categorising faces by their occupation than by their name even when they have received extra practice of naming the faces. We conclude that the explanation best able to account for the data is that names are stored separately from other semantic information and can only be accessed after other identity specific information has been retrieved.
However, we also argue that the demands we make of these explanations make it likely that no single theory will be able to account for all existing data.
Publisher: Center for Open Science
Date: 30-08-2023
Abstract: Provoked overt recognition refers to the fact that patients with acquired prosopagnosia can sometimes recognise faces when presented with arrays of individuals from the same category (e.g. actors or politicians). Here we ask whether a prosopagnosic patient might experience recognition when presented with multiple different images of the same famous face simultaneously. Over two testing sessions, patient Herschel, a 66-year-old British man with acquired prosopagnosia, viewed face images individually or in arrays. On several occasions he failed to recognise single photos of an individual but successfully identified that person when the same photos were presented together. For example, Herschel failed to recognise any individual images of King Charles III, four days after he had acceded to the throne (i.e. at peak media exposure) but nevertheless recognised him in an array of these same pictures. Like prior reports of provoked recognition based on category membership, overt recognition here was transient and inconsistent over individual faces. These findings are discussed in terms of models of covert recognition, alongside more recent research on the importance of within-person variability for face perception.
Publisher: Elsevier BV
Date: 2008
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2008.01.001
Abstract: The bilateral advantage for the perception of famous faces was investigated using a redundant target procedure. In experiment 1 we compared simultaneous presentation of stimuli (a) bilaterally and (b) one above the other in the central field. Results showed a redundancy advantage, but only when faces were presented bilaterally. This result lends support to the notion of interhemispheric communication using cross-hemisphere representations. Experiment 2 examined the nature of such communication by comparing bilateral presentation of identical face images, with bilateral presentation of different images of the same person. When asked to make a familiar/unfamiliar face judgement, participants showed evidence for a redundancy advantage under both bilateral conditions. This suggests that the nature of the information shared in interhemispheric communication is abstract, rather than being tied to superficial stimulus properties.
Publisher: American Psychological Association (APA)
Date: 04-2016
DOI: 10.1037/XHP0000174
Abstract: Familiar faces are remembered better than unfamiliar faces. Furthermore, it is much easier to match images of familiar than unfamiliar faces. These findings could be accounted for by quantitative differences in the ease with which faces are encoded. However, it has been argued that there are also some qualitative differences in familiar and unfamiliar face processing. Unfamiliar faces are held to rely on superficial, pictorial representations, whereas familiar faces invoke more abstract representations. Here we present 2 studies that show, for 1 task, an advantage for unfamiliar faces. In recognition memory, viewers are better able to reject a new picture, if it depicts an unfamiliar face. This rare advantage for unfamiliar faces supports the notion that familiarity brings about some representational changes, and further emphasizes the idea that theoretical accounts of face processing should incorporate familiarity.
Publisher: SAGE Publications
Date: 13-05-2019
Abstract: Models of social evaluation aim to capture the information people use to form first impressions of unfamiliar others. However, little is currently known about the relationship between perceived traits across gender. In Study 1, we asked viewers to provide ratings of key social dimensions (dominance, trustworthiness, etc.) for multiple images of 40 unfamiliar identities. We observed clear sex differences in the perception of dominance—with negative evaluations of high dominance in unfamiliar females but not males. In Study 2, we used the social evaluation context to investigate the key predictions about the importance of pictorial information in familiar and unfamiliar face processing. We compared the consistency of ratings attributed to different images of the same identities and demonstrated that ratings of images depicting the same familiar identity are more tightly clustered than those of unfamiliar identities. Such results imply a shift from image rating to person rating with increased familiarity, a finding which generalises results previously observed in studies of identification.
Publisher: SAGE Publications
Date: 08-2013
DOI: 10.1080/17470218.2013.800125
Abstract: Despite many years of research, there has been surprisingly little progress in our understanding of how faces are identified. Here I argue that there are two contributory factors: (a) Our methods have obscured a critical aspect of the problem, within-person variability and (b) research has tended to conflate familiar and unfamiliar face processing. Examples of procedures for studying variability are given, and a case is made for studying real faces, of the type people recognize every day. I argue that face recognition (specifically identification) may only be understood by adopting new techniques that acknowledge statistical patterns in the visual environment. As a consequence, some of our current methods will need to be abandoned.
Publisher: IEEE
Date: 2006
DOI: 10.1109/FGR.2006.45
Publisher: Wiley
Date: 2001
DOI: 10.1002/ACP.718
Publisher: SAGE Publications
Date: 03-02-2020
Abstract: Hyper-realistic face masks have been used as disguises in at least one border crossing and in numerous criminal cases. Experimental tests using these masks have shown that viewers accept them as real faces under a range of conditions. Here, we tested mask detection in a live identity verification task. Fifty-four visitors at the London Science Museum viewed a mask wearer at close range (2 m) as part of a mock passport check. They then answered a series of questions designed to assess mask detection, while the masked traveller was still in view. In the identity matching task, 8% of viewers accepted the mask as matching a real photo of someone else, and 82% accepted the match between masked person and masked photo. When asked if there was any reason to detain the traveller, only 13% of viewers mentioned a mask. A further 11% picked disguise from a list of suggested reasons. Even after reading about mask-related fraud, 10% of viewers judged that the traveller was not wearing a mask. Overall, mask detection was poor and was not predicted by unfamiliar face matching performance. We conclude that hyper-realistic face masks could go undetected during live identity checks.
Publisher: SAGE Publications
Date: 08-2017
DOI: 10.1080/17470218.2016.1195851
Abstract: Natural variability between instances of unfamiliar faces can make it difficult to reconcile two images as the same person. Yet for familiar faces, effortless recognition occurs even with considerable variability between images. To explore how stable face representations develop, we employed incidental learning in the form of a face sorting task. In each trial, multiple images of two facial identities were sorted into two corresponding piles. Following the sort, participants showed evidence of having learnt the faces, performing more accurately on a matching task with seen than with unseen identities. Furthermore, ventral temporal event-related potentials were more negative in the N250 time range for previously seen than for previously unseen identities. These effects appear to demonstrate some degree of abstraction, rather than simple picture learning, as the neurophysiological and behavioural effects were observed with novel images of the previously seen identities. The results provide evidence of the development of facial representations, allowing a window onto natural mechanisms of face learning.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 15-03-2002
DOI: 10.1167/2.7.610
Publisher: Elsevier BV
Date: 08-2015
DOI: 10.1016/J.COGNITION.2015.05.002
Abstract: Matching two different images of a face is a very easy task for familiar viewers, but much harder for unfamiliar viewers. Despite this, use of photo-ID is widespread, and people appear not to know how unreliable it is. We present a series of experiments investigating bias both when performing a matching task and when predicting other people's performance. Participants saw pairs of faces and were asked to make a same/different judgement, after which they were asked to predict how well other people, unfamiliar with these faces, would perform. In four experiments we show different groups of participants familiar and unfamiliar faces, manipulating this in different ways: celebrities in experiments 1-3 and personally familiar faces in experiment 4. The results consistently show that people match images of familiar faces more accurately than unfamiliar faces. However, people also reliably predict that the faces they themselves know will be more accurately matched by different viewers. This bias is discussed in the context of current theoretical debates about face recognition, and we suggest that it may underlie the continued use of photo-ID, despite the availability of evidence about its unreliability.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 28-06-2004
DOI: 10.1097/01.WNR.0000131675.00319.42
Abstract: We investigated event-related brain potentials elicited by repetitions of cars, ape faces, and upright and inverted human faces. A face-selective N250r response to repetitions emerged over right temporal regions, consistent with a source in the fusiform gyrus. N250r was largest for human faces, clear for ape faces, non-significant for inverted faces, and completely absent for cars. Our results suggest that face-selective neural activity starting at 200 ms and peaking at 250-300 ms is sensitive to repetition and relates to individual recognition.
Publisher: Elsevier BV
Date: 04-2008
DOI: 10.1016/J.COGNITION.2007.07.012
Abstract: We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array. This irrelevant face effect was eliminated when the items in the arrays were inverted in Experiment 1b, ruling out an explanation based on some low-level image-based properties of the faces. Experiment 2 replicated and extended the results of Experiment 1a. Irrelevant faces once again interfered with search for butterflies but, when the roles of faces and butterflies were reversed, irrelevant butterflies no longer interfered with search for faces. This suggests that the irrelevant face effect is unlikely to have been caused by the relative novelty of the faces, or to have arisen because butterflies and faces were the only animate items in the arrays. We conclude that these experiments offer evidence of a stimulus-driven capture of attention by faces.
Publisher: Elsevier BV
Date: 09-1994
Publisher: Wiley
Date: 08-1990
DOI: 10.1111/J.2044-8295.1990.TB02367.X
Abstract: In this paper we describe how the microstructure of the Bruce & Young (1986) functional model of face recognition may be explored and extended using an interactive activation implementation. A simulation of the recognition of familiarity of individuals is developed which accounts for a range of published findings on the effects of semantic priming, repetition priming and distinctiveness. Finally, we offer some speculative predictions made by the model, and point to an empirical programme of research which it suggests.
Publisher: IEEE Comput. Soc
Date: 1998
Publisher: Elsevier BV
Date: 12-2016
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2016.10.004
Abstract: Familiar face recognition is remarkably invariant across huge image differences, yet little is understood concerning how image-invariant recognition is achieved. To investigate the neural correlates of invariance, we localized the core face-responsive regions and then compared the pattern of fMR-adaptation to different stimulus transformations in each region to behavioural data demonstrating the impact of the same transformations on familiar face recognition. In Experiment 1, we compared linear transformations of size and aspect ratio to a non-linear transformation affecting only part of the face. We found that adaptation to facial identity in face-selective regions showed invariance to linear changes, but there was no invariance to non-linear changes. In Experiment 2, we measured the sensitivity to non-linear changes that fell within the normal range of variation across face images. We found no adaptation to facial identity for any of the non-linear changes in the image, including to faces that varied in different levels of caricature. These results show a compelling difference in the sensitivity to linear compared to non-linear image changes in face-selective regions of the human brain that is only partially consistent with their effect on behavioural judgements of identity. We conclude that while regions such as the FFA may well be involved in the recognition of face identity, they are more likely to contribute to some form of normalisation that underpins subsequent recognition than to form the neural substrate of recognition per se.
Publisher: Public Library of Science (PLoS)
Date: 22-03-2017
Publisher: Elsevier BV
Date: 2001
Publisher: Wiley
Date: 17-08-2018
DOI: 10.1002/ACP.3449
Publisher: Elsevier BV
Date: 12-2010
Publisher: Wiley
Date: 11-10-2011
Publisher: SAGE Publications
Date: 08-2011
DOI: 10.1080/17470218.2011.575228
Abstract: Viewers are typically better at remembering faces from their own race than from other races; however, it is not yet established whether this effect is due to memorial or perceptual processes. In this study, UK and Egyptian viewers were given a simultaneous face-matching task, in which the target faces were presented upright or upside down. As with previous research using face memory tasks, participants were worse at matching other-race faces than own-race faces and showed a stronger face inversion effect for own-race faces. However, subjects' performance on own- and other-race faces was highly correlated. These data provide strong evidence that difficulty in perceptual encoding of unfamiliar faces contributes substantially to the other-race effect and that accounts based entirely on memory cannot capture the full data. Implications for forensic settings are also discussed.
Publisher: Hogrefe Publishing Group
Date: 2007
DOI: 10.1027/1618-3169.54.3.192
Abstract: There is evidence that face processing is capacity-limited in distractor interference tasks and in tasks requiring overt recognition memory. We examined whether capacity limits for faces can be observed with a more sensitive measure of visual processing, by measuring repetition priming of flanker faces that were presented alongside a face or a nonface target. In Experiment 1, we found identity priming for face flankers, by measuring repetition priming across a change in image, during task-relevant nonface processing, but not during the processing of a concurrently presented face target. Experiment 2 showed perceptual priming of the flanker faces, across identical images at prime and test, when they were presented alongside a face target. In a third experiment, all of these effects were replicated by measuring identity priming and perceptual priming within the same task. Overall, these results imply that face processing is capacity limited, such that only a single face can be identified at one time. Merely attending to a target face appears sufficient to trigger these capacity limits, thereby extinguishing identification of a second face in the display, although our results demonstrate that the additional face remains at least subject to superficial image processing.
Publisher: Wiley
Date: 11-2006
Publisher: Wiley
Date: 08-2003
DOI: 10.1348/000712603767876271
Abstract: In the Serial Reaction Time (SRT) task, participants respond to a set of stimuli the order of which is apparently random, but which consists of repeating sub-sequences. Participants can become sensitive to this regularity, as measured by an indirect test of reaction time, but can remain apparently unaware of the sequence, as measured by direct tests of prediction or recognition. Some researchers have claimed that this learning may take place by observation alone. We suggest that observational learning may be due to explicit acquired knowledge of the sequence, and is not mediated by the same processes which give rise to learning by action. In Expt 1, we show that it is very difficult to acquire explicit sequence knowledge under dual task conditions, even when participants are told that a regular sequence exists. In Expt 2, we use the same conditions to compare actors, who respond to the sequence during learning, and observers, who merely watch the stimuli. Furthermore, we manipulate the salience of the sequence, in order to encourage learning. There is no evidence of observational learning in these conditions, despite the usual effects of learning being demonstrated by actors. In Expt 3, we show that observational learning does occur, but only when observers have no secondary task and even then only reliably for a sequence which has been made salient by chunking subcomponents. We conclude that sequence learning by observation is mediated by explicit processes, and is eliminated under conditions which support learning by action, but make it difficult to acquire explicit knowledge.
Publisher: Elsevier BV
Date: 07-2003
DOI: 10.1016/S0926-6410(03)00131-9
Abstract: In face identification, it has been controversial whether or not access to biographical information and to a person's name are mediated by qualitatively different loci. We recorded ERPs while participants saw two successive faces and performed a matching task that either required retrieval of semantic information ("same or different profession?"), or retrieval of the person's name ("same or different number of forename syllables?"). For both tasks, slow ERP activity between the first and the second face was characterized by a prominent right posterior negativity, with the asymmetry being larger for the name than the semantic matching task. ERPs to the second face showed a difference between congruent (matching) and incongruent (mismatching) trials, with more negative ERPs for incongruent trials. In the semantic matching task, these differences were significant between 450 and 550 ms, and resembled an N400, with a maximum negativity over the vertex. In the name matching task, the topography of this congruency effect was qualitatively different from that seen in semantic matching. These findings suggest that different brain substrates mediate the access to semantic and name information.
Publisher: American Psychological Association (APA)
Date: 09-2021
DOI: 10.1037/XGE0001019
Publisher: Wiley
Date: 22-05-2008
DOI: 10.1111/J.1469-8986.2008.00663.X
Abstract: The N250r is an event-related potential that has been related to activation of image-independent representations of familiar faces during recognition. However, N250r also shows a degree of image specificity, with reduced activation across repetitions of different images of the same face compared to repetitions across the same image, suggesting a component that codes the visual overlap between two face images. This study investigated whether N250r is equally attenuated when horizontally or vertically stretched faces prime an unstretched image of the same face. The results confirm that N250r is larger across repetitions of the same face image than across different images of the same face. Despite this, N250r was equivalent for priming by the same face image and priming from stretched onto unstretched faces. This finding demonstrates that N250r does not simply reflect the superficial visual overlap between two face images and supports the notion that it is related to person recognition.
Publisher: SAGE Publications
Date: 10-2011
DOI: 10.1080/17470218.2011.603052
Abstract: Empirical data regarding the extent of face recognition abnormalities in autism spectrum disorder (ASD) is inconsistent. Here, 27 ASD and 47 typically developing (TD) children completed an immediate two-alternative forced-choice identity matching task. We contrasted recognition of own- and other-race faces, and, counter to prediction, we found a typical advantage for recognizing own- over other-race faces in both the ASD and TD groups. In addition, ASD and TD groups responded similarly to stimulus manipulations (use of identical or different photographs for identity matching and cropping stimuli to remove hair information). However, age-standardized scores varied widely within the ASD sample, and a subgroup of ASD participants with impaired face recognition did not exhibit a significant own-race recognition advantage. An explanation regarding early experience with faces is considered, and implications for research of individual variation within ASD are discussed.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 27-07-2007
DOI: 10.1167/7.10.15
Publisher: Elsevier BV
Date: 12-2007
DOI: 10.1016/J.BRAINRES.2007.09.079
Abstract: Recent studies have identified a prominent face-selective ERP response to immediate repetitions of faces at approximately 250 ms (N250r) which was strongly attenuated or eliminated for control stimuli (Schweinberger, Huddy, and Burton 2004, NeuroReport, 15, 1501-1505). In the present study we used a 148-channel whole head neuromagnetometer to investigate event-related magnetic fields (ERMFs) elicited by repetitions of exemplars of human faces, inverted human faces, primate faces, and car fronts. Participants counted rare pictures of butterflies interspersed in a series of pairs of one of these categories. The second stimulus of each pair could either be a repetition or a non-repetition of the first stimulus. We observed prominent M100 (90-140 ms) and M170 (140-220 ms) responses. Both M100 and M170 were insensitive to repetition and showed little differences between stimulus categories, except for a slight increase and delay of M170 to inverted faces. By contrast, we observed a repetition-sensitive M250r response (220-330 ms). This M250r was larger for upright human and primate faces when compared to both inverted human faces and cars, a finding that was specific for right hemispheric sensors. Source localization suggested different generators for M170 and M250r in occipitotemporal and fusiform areas, respectively. These findings suggest that repetition-sensitive brain activity at approximately 250 ms reflects the transient activation of object representations, with largest responses for upright faces, in the right hemisphere.
Publisher: SAGE Publications
Date: 2010
DOI: 10.1068/P6727
Abstract: As faces become familiar, recognition becomes easier but the style of processing also changes. Here, twenty-one typically developing (TD) children and twenty-one children with autism spectrum disorder (ASD) were familiarised with 6 identities over 3 days. Next, they completed a 4-alternative forced-choice matching test in which targets were the 6 familiarised faces and 6 unfamiliar faces. The TD group showed a significant advantage for familiarised faces when matching whole faces and both internal and external facial regions. The ASD group showed similar familiarisation effects for whole and external faces, but not for internal regions. The ASD group was also impaired at matching eyes and mouths of familiarised faces. Results suggest the process of acquiring familiarity with faces differs between ASD and TD children.
Publisher: Elsevier BV
Date: 03-2017
DOI: 10.1016/J.NEUROIMAGE.2017.01.043
Abstract: There is growing evidence that the occipital face area (OFA), originally thought to be involved in the construction of a low-level representation of the physical features of a face, is also taking part in higher-level face processing. To test whether the OFA is causally involved in the learning of novel face identities, we have used transcranial magnetic stimulation (TMS) together with a sequential sorting - face matching paradigm (Andrews et al. 2015). First, participants sorted images of two unknown persons during the initial learning phase while either their right OFA or the vertex was stimulated using TMS. In the subsequent test phase, we measured the participants' face matching performance for novel images of the previously trained identities and for two novel identities. We found that face-matching accuracy was higher for the trained as compared to the novel identities in the vertex control group, suggesting that the sorting task led to incidental learning of the identities involved. However, no such difference was observed between trained and novel identities in the rOFA stimulation group. Our results support the hypothesis that the role of the rOFA is not limited to the processing of low-level physical features, but it has a significant causal role in face identity encoding and in the formation of identity-specific memory-traces.
Publisher: Public Library of Science (PLoS)
Date: 18-08-2014
Publisher: SAGE Publications
Date: 06-2004
DOI: 10.1068/P3458
Abstract: Two experiments are reported in which subjects made judgments about the sex or the familiarity of a voice. In experiment 1, subjects were fans of the BBC-radio soap opera, The Archers, and familiar voice clips were taken from this programme. Subjects showed a large reduction in response times when making sex judgments to familiar voices, despite the fact that sex judgments are generally much faster than familiarity judgments. In experiment 2, the same familiar clips were played to subjects unfamiliar with the soap opera, and no difference was observed in times to make sex judgments to Archers or non- Archers voices. We conclude that, unlike the case of face recognition, sex and identity processing of voices are not independent. The findings constrain models of person recognition across multiple modalities.
Publisher: SAGE Publications
Date: 11-05-2022
DOI: 10.1177/03010066221098728
Abstract: Making new acquaintances requires learning to recognise previously unfamiliar faces. In the current study, we investigated this process by staging real-world social interactions between actors and the participants. Participants completed a face-matching behavioural task in which they matched photographs of the actors (whom they had yet to meet), or faces similar to the actors (henceforth called foils). Participants were then scanned using functional magnetic resonance imaging (fMRI) while viewing photographs of actors and foils. Immediately after exiting the scanner, participants met the actors for the first time and interacted with them for 10 min. On subsequent days, participants completed a second behavioural experiment and then a second fMRI scan. Prior to each session, actors again interacted with the participants for 10 min. Behavioural results showed that social interactions improved performance accuracy when matching actor photographs, but not foil photographs. The fMRI analysis revealed a difference in the neural response to actor photographs and foil photographs across all regions of interest (ROIs) only after social interactions had occurred. Our results demonstrate that short social interactions were sufficient to learn and discriminate previously unfamiliar individuals. Moreover, these learning effects were present in brain areas involved in face processing and memory.
Publisher: Elsevier BV
Date: 05-1995
Publisher: The Royal Society
Date: 12-06-2011
Abstract: Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research.
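The image-averaging idea in the abstract above can be sketched in a few lines. This is a minimal illustration only, assuming pre-aligned, equal-sized grayscale arrays; the published work relied on dedicated landmark alignment and morphing tools, which this sketch omits entirely:

```python
import numpy as np

def average_face(images):
    """Pixel-wise average of pre-aligned photos of one person.

    Averaging dilutes image-specific variation (lighting, pose,
    expression) while retaining what is stable across snapshots,
    which is the intuition behind the average outperforming any
    single photograph.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)

# Toy usage: two "photos" of the same person under different lighting.
bright = np.full((4, 4), 200.0)
dark = np.full((4, 4), 100.0)
avg = average_face([bright, dark])  # lighting difference is diluted
```

The function names and the toy data here are illustrative, not taken from the paper.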
Publisher: Elsevier BV
Date: 12-2005
DOI: 10.1016/J.COGNITION.2004.11.004
Abstract: We present three experiments in which subjects were asked to make speeded sex judgements (Experiment 1) or semantic judgements (Experiments 2 and 3) to face targets and nonface items, while ignoring a solitary flanking distractor face or a nonface stimulus. Distractors could be either congruent (same response category) or incongruent (different response category) with the target. Distractor congruency effects were consistently observed in all combinations of target-distractor stimulus pairs, except when a distractor face flanked a target face. The failure to find congruency effects in this condition was explored further in a fourth experiment, in which four task-irrelevant flankers were simultaneously presented. Once again, no face-face congruency effects were found, even though comparison distractors interfered with face and nonface targets alike. However, four simultaneously presented distractor faces did not interfere with nonface targets either. We suggest that these experiments demonstrate a capacity limit for visual processing in these conditions, such that no more than one face is processed at a time.
Publisher: Oxford University Press
Date: 28-07-2011
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 27-08-2007
Publisher: Springer Science and Business Media LLC
Date: 26-02-2001
Abstract: Two experiments examined performance in a sequence learning task. Participants were trained on a repeating sequence which was presented as a visual display and learning was measured via the increase in reaction time to respond to a new sequence. Some participants made a response to each stimulus while others merely observed the sequence. In Experiment 1 participants responding to the display via a keypress showed learning, but those merely observing did not. Five possible reasons for the failure to find observational learning were considered and Experiment 2 attempted to resolve these. This second experiment confirmed the findings of Experiment 1 in a non-spatial sequence display using a cover story which encouraged attention to the display but not rule-search strategies. The results are discussed in relation to applied and theoretical aspects of implicit learning.
Publisher: Center for Open Science
Date: 10-06-2020
Abstract: The human face and voice are rich sources of information that can vary in many different ways. Most of the literature on face/voice perception has focussed on understanding how people look and sound different to each other (between-person variability). However, recent studies highlight the ways in which the same person can look and sound different on different occasions (within-person variability). Here, in a series of three experiments, we aimed to establish how within- and between-person variability relate to one another in the context of social trait impressions by collecting social trait ratings attributed to multiple different face images and voice recordings of the same people. We find that within-person variability in social trait evaluations is at least as great as between-person variability. Using different stimulus sets in each experiment, we consistently find that trait impressions of voices are more variable within people than between people – a pattern that is only evident occasionally when judging faces. Our findings highlight the importance of understanding within-person variability, showing how judgements of the same person can vary widely on different encounters, and quantifying how this pattern differs for the perception of voices and faces.
Publisher: American Psychological Association (APA)
Date: 2018
DOI: 10.1037/XHP0000439
Abstract: Our social evaluation of other people is influenced by their faces and their voices. However, rather little is known about how these channels combine in forming "first impressions." Over 5 experiments, we investigate the relative contributions of facial and vocal information for social judgments: dominance and trustworthiness. The experiments manipulate each of these sources of information within-person, combining faces and voices giving rise to different social attributions. We report that vocal pitch is a reliable source of information for judgments of dominance (Study 1), but not trustworthiness (Study 4). Faces and voices make reliable, but independent, contributions to social evaluation. However, voices have the larger influence in judgments of dominance (Study 2), whereas faces have the larger influence in judgments of trustworthiness (Study 5). The independent contribution of the 2 sources appears to be mandatory, as instructions to ignore 1 channel do not eliminate its influence (Study 3). Our results show that information contained in both the face and the voice contributes to first impression formation. This combination is, to some degree, outside conscious control, and the weighting of channel contribution varies according to the trait being perceived.
Publisher: Springer Science and Business Media LLC
Date: 30-06-2005
DOI: 10.1007/S00426-005-0226-9
Abstract: Using a serial reaction time task, this study examines whether learning of auditory sequences is possible without a corresponding motor response, i.e., by listening alone. The dual sequence paradigm used by Mayr (in Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 350-354, 1996, Experiment 1) was adapted to the auditory domain. Four different actors spoke the same four colour words. These were presented such that speaker identity followed one sequence, and the word spoken followed a different sequence. Subjects were asked to respond (with a key press) to one of these dimensions (identity or word), and ignore the other. Results showed learning for either type of stimulus, but only when it was responded to. No learning of either type of auditory sequence by listening alone was found. The results add evidence to visual implicit learning studies that have failed to find learning of event sequences when spatial or response selection was not an important factor in processing. The findings are discussed in the context of implicit learning as a general and fundamental cognitive process.
Publisher: SAGE Publications
Date: 02-1994
DOI: 10.1080/14640749408401144
Abstract: Four experiments are reported which examine the nature of representations underlying an implicit learning task. When shown a series of clock faces, each bearing a time between 6 and 12 o'clock, subjects subsequently show a selection preference for novel clock faces between these times. Furthermore, they show no signs of being aware of the underlying rule governing this preference. This effect is also present when the representation of time of day is changed from analogue to digital between learning and test. In the final experiment subjects show no preference for seen over unseen clocks between these critical times. These data suggest that, for this particular task, implicit learning involves abstract representations.
Publisher: American Psychological Association (APA)
Date: 06-2014
DOI: 10.1037/XAP0000009
Abstract: Viewers find it difficult to match photos of unfamiliar faces for identity. Despite this, the use of photographic ID is widespread. In this study we ask whether it is possible to improve face matching performance by replacing single photographs on ID documents with multiple photos or an average image of the bearer. In 3 experiments we compare photo-to-photo matching with photo-to-average matching (where the average is formed from multiple photos of the same person) and photo-to-array matching (where the array comprises separate photos of the same person). We consistently find an accuracy advantage for average images and photo arrays over single photos, and show that this improvement is driven by performance in match trials. In the final experiment, we find a benefit of 4-image arrays relative to average images for unfamiliar faces, but not for familiar faces. We propose that conventional photo-ID format can be improved, and discuss this finding in the context of face recognition more generally.
Publisher: Springer Science and Business Media LLC
Date: 09-07-2014
DOI: 10.3758/S13423-013-0475-3
Abstract: People are typically poor at matching the identity of unfamiliar faces from photographs. This observation has broad implications for face matching in operational settings (e.g., border control). Here, we report significant improvements in face matching ability following feedback training. In Experiment 1, we show cumulative improvement in performance on a standard test of face matching ability when participants were provided with trial-by-trial feedback. More important, Experiment 2 shows that training benefits can generalize to novel, widely varying, unfamiliar face images for which no feedback is provided. The transfer effect specifically benefited participants who had performed poorly on an initial screening test. These findings are discussed in the context of existing literature on unfamiliar face matching and perceptual training. Given the reliability of the performance enhancement and its generalization to diverse image sets, we suggest that feedback training may be useful for face matching in occupational settings.
Publisher: American Psychological Association (APA)
Date: 2008
DOI: 10.1037/A0013464
Abstract: Eyewitness memory is known to be fallible. We describe 3 experiments that aim to establish baseline performance for recognition of unfamiliar faces. In Experiment 1, viewers were shown live actors or photos (targets), and then immediately presented with arrays of 10 faces (test items). Asked whether the target was present among the test items, and if so to identify the person, participants showed poor performance levels (roughly 70% accurate). Furthermore, there was no difference between immediate memory for a live person and photograph. In Experiment 2, the same targets and test items were presented simultaneously, and participants were asked to perform a matching task. Again, performance was poor (roughly 68% accurate), with no difference between matching photos and live people. In the final experiment, viewers were asked to match a live person to a single photograph. Even under these conditions, performance was poor (c. 85%), with no advantage over matching 2 photographs. We suggest that problems of eyewitness identification may involve difficulties in initial encoding of unfamiliar faces, in addition to problems of memory for an event.
Publisher: Public Library of Science (PLoS)
Date: 25-03-2015
Publisher: Center for Open Science
Date: 28-04-2021
Abstract: We present an expanded version of a widely used measure of unfamiliar face matching ability, the Glasgow Face Matching Test (GFMT). The GFMT2 is created using the same source database as the original test but makes five key improvements. First, the test items include variation in head angle, pose, expression and subject-to-camera distance, making the new test more difficult and more representative of challenges in everyday face identification tasks. Second, short and long versions of the test each contain two forms that are calibrated to be of equal difficulty, allowing repeat tests to be performed to examine effects of training interventions. Third, the short form tests contain no repeating face identities, thereby removing any confounding effects of familiarity that may have been present in the original test. Fourth, separate short versions are created to target exceptionally high performing or exceptionally low performing individuals using established psychometric principles. Fifth, all tests are implemented in an executable program, allowing them to be administered automatically. All tests are available free for scientific use via www.gfmt2.org.
Publisher: Elsevier BV
Date: 06-2018
Publisher: American Psychological Association (APA)
Date: 2001
Publisher: SAGE Publications
Date: 02-1994
DOI: 10.1080/14640749408401146
Abstract: In this study we examine the relationship between objective aspects of facial appearance and facial “distinctiveness”. Specifically, we examine whether the extent to which a face deviates from “average” correlates with rated distinctiveness and measures of memorability. We find that, provided the faces are rated with hair concealed, reasonable correlations can be achieved between their physical deviation and their rated distinctiveness. More modest correlations are obtained between physical deviation and the extent to which faces are remembered, either correctly or falsely, after previous study. Furthermore, memory ratings obtained to “target” faces when they have been previously seen (i.e. “hits”) do not show the expected negative correlation with the scores obtained to the same faces when acting as distractors (i.e. “false positives”), though each correlates with rated distinctiveness. This confirms the theory of Vokey and Read (1992) that the typicality/distinctiveness dimension can be broken down into two orthogonal components: “memorability” and “context-free familiarity”.
Publisher: Elsevier BV
Date: 09-2006
DOI: 10.1016/J.BRAINRES.2006.06.066
Abstract: We investigated immediate repetition effects of sequentially presented famous face pairs. The first face (F1) was presented masked or unmasked and preceded the second face (F2) with different SOAs (84 ms vs. 500 ms). Participants judged F2 with regard to either semantic category (actor vs. singer; indirect task) or perceptual match with F1 (same vs. different; direct task). Repetition shortened RT for unmasked but not for masked F1 conditions. In event-related brain potentials (ERPs), unmasked repetition effects were influenced by task and SOA and consisted of a modulation of an occipitotemporal N170, an inferior temporal N250r (200-300 ms), a central-parietal N400 (300-500 ms), and a parietal P600 (500-800 ms). An early occipital negativity (onset approximately 100 ms) was present at the 84-ms SOA but diminished in the 500-ms SOA condition, probably reflecting a fast decaying iconic memory trace. Masked repetition effects in the indirect task were limited to a significant early (100-150 ms) prefrontal/lateral frontal and central-parietal modulation, and a strong trend for a reduced N170 amplitude. This suggests that masked repetition modulated early visual processing but did not influence processes beyond approximately 200 ms that reflect the access to facial representations and semantic information for people.
Publisher: Elsevier BV
Date: 11-2001
DOI: 10.1016/S0042-6989(01)00186-9
Abstract: Human subjects perform poorly at matching different images of unfamiliar faces. When images are taken by different capture devices (cameras), matching is difficult for human perceivers and also for automatic systems. We test an automatic face recognition system based on principal components analysis (PCA) and compare its performance with that of human subjects tested on the same set of images. A number of variants of the PCA system are compared, using different matching metrics and different numbers of components. PCA performance critically depends on the choice of distance metric, with a Mahalanobis metric consistently outperforming a Euclidean metric. Under optimal conditions, the automatic PCA system exceeds human performance on the same images. We hypothesise that unfamiliar face recognition may be mediated by processes corresponding to rather simple functions of the inputs.
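The PCA comparison in the abstract above can be illustrated with a minimal sketch. All names and the toy setup below are my own; the study's actual system, image preprocessing, and evaluation protocol are not reproduced. The key contrast is between an unweighted Euclidean distance in component space and a variance-whitened (Mahalanobis-style) distance, which downweights the high-variance leading components:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vectorized face images X (n_images, n_pixels).

    Returns the mean image, the top-k principal axes, and the
    variance of the data along each retained axis.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal axes are the rows of Vt from the SVD of centred data.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # (k, n_pixels)
    var = (S[:k] ** 2) / (len(X) - 1)      # variance per component
    return mu, components, var

def project(X, mu, components):
    """Project images into the k-dimensional PCA space."""
    return (X - mu) @ components.T

def euclidean(a, b):
    return np.linalg.norm(a - b)

def mahalanobis(a, b, var):
    # Whiten each dimension by its variance before measuring distance,
    # so no single high-variance component dominates the match score.
    return np.linalg.norm((a - b) / np.sqrt(var))
```

A matching decision would then compare the projected probe image against each projected gallery image and pick the smallest distance; the paper's finding is that the whitened metric consistently outperforms the plain Euclidean one.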
Publisher: Elsevier BV
Date: 12-2002
DOI: 10.1016/S0010-0277(02)00172-5
Abstract: Covert face recognition has previously been thought to produce only very short-lasting effects. In this study we demonstrate that manipulating subjects' attentional load affects explicit, but not implicit memory for faces, and that implicit effects can persist over much longer intervals than is normally reported. Subjects performed letter-string tasks of high vs. low perceptual load (Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21, 451-468.), while ignoring task-irrelevant celebrity faces. Memory for the faces was then assessed using (a) a surprise recognition test for the celebrities' names, and (b) repetition priming in a face familiarity task. The load manipulation strongly influenced explicit recognition memory, but had no effect on repetition priming from the same items. Moreover, faces from the high load condition produced the same amount of priming whether they were explicitly remembered or not. This result resolves a long-standing anomaly in the face recognition literature, and is discussed in relation to covert processing in prosopagnosia.
Publisher: Public Library of Science (PLoS)
Date: 13-02-2019
Publisher: Public Library of Science (PLoS)
Date: 22-09-2010
Publisher: The Royal Society
Date: 2019
DOI: 10.1098/RSOS.180772
Publisher: American Association for the Advancement of Science (AAAS)
Date: 15-08-2008
Abstract: Contrary to the suggestion of Deng et al., image registration reduced face-recognition accuracy when divorced from the averaging procedure. Average-to-photo mapping generalizes beyond specific photographs, and averaging either gallery images or probe images can improve the match. The alternative protocol suggested by the authors is unsuitable because it evaluates face-matching algorithms, not face representations, and relies on standard image sets.
Publisher: SAGE Publications
Date: 07-2015
Abstract: Face recognition is a remarkable human ability, which underlies a great deal of people’s social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.
Publisher: SAGE Publications
Date: 05-2017
DOI: 10.1080/17470218.2015.1136656
Abstract: Research on face learning has tended to use sets of images that vary systematically on dimensions such as pose and illumination. In contrast, we have proposed that exposure to naturally varying images of a person may be a critical part of the familiarization process. Here, we present two experiments investigating face learning with “ambient images”—relatively unconstrained photos taken from internet searches. Participants learned name and face associations for unfamiliar identities presented in high or low within-person variability—that is, images of the same person returned by internet search on their name (high variability) versus different images of the same person taken from the same event (low variability). In Experiment 1 we show more accurate performance on a speeded name verification task for identities learned in high than in low variability, when the test images are completely novel photos. In Experiment 2 we show more accurate performance on a face matching task for identities previously learned in high than in low variability. The results show that exposure to a large range of within-person variability leads to enhanced learning of new identities.
Publisher: Center for Open Science
Date: 06-2017
Abstract: Human voices are extremely variable: The same person can sound very different depending on whether they are speaking, laughing, shouting or whispering. In order to successfully recognise someone from their voice, a listener needs to be able to generalise across these different vocal signals ('telling people together'). However, in most studies of voice identity processing to date, the substantial within-person variability has been eliminated through the use of highly controlled stimuli, thus focussing on how we tell people apart. We argue that this obscures our understanding of voice identity processing by controlling away an essential feature of vocal stimuli that may include diagnostic information. In this paper, we propose that we need to extend the focus of voice identity research to account for both 'telling people together' as well as 'telling people apart'. That is, we must account for whether, and to what extent, listeners can overcome within-person variability to obtain a stable percept of person identity from vocal cues. To do this, our theoretical and methodological frameworks need to be adjusted to explicitly include the study of within-person variability.
Publisher: SAGE Publications
Date: 12-05-2021
DOI: 10.1177/17470218211017902
Abstract: Matching unfamiliar faces is a well-studied task, apparently capturing important everyday decisions such as ID checks. In typical laboratory studies, participants make same/different judgements to pairs of faces, presented in isolation and without context. However, it has recently become clear that matching faces embedded in documents (e.g., passports and driving licences) induces a bias, resulting in elevated levels of “same person” responses. While practically important, it remains unclear whether this bias arises due to expectations induced by the ID cards or interference between textual information and faces. Here, we observe the same bias when faces are embedded in blank (i.e., non-authoritative) cards carrying basic personal information, but not when the same information is presented alongside a face without the card (Experiments 1 and 2). Cards bearing unreadable text (blurred or in an unfamiliar alphabet) do not induce the bias, but those bearing arbitrary (non-biographical) words do (Experiments 3 and 4). The results suggest a complex basis for the effect, relying on multiple factors which happen to converge in photo-ID.
Publisher: Springer Science and Business Media LLC
Date: 26-12-2015
Publisher: Informa UK Limited
Date: 04-1994
Publisher: Wiley
Date: 13-11-2014
DOI: 10.1111/BJOP.12103
Abstract: Matching unfamiliar faces is known to be difficult. Here, we ask whether performance can be improved by asking viewers to work in pairs, a manipulation known to increase accuracy for low-level visual discrimination tasks. Across four experiments we consistently find that face matching accuracy is higher for pairs of viewers than for individuals. This 'pairs advantage' is generally driven by adopting the response of the higher scoring partner. However, when the task becomes difficult, both partners' performance is improved by working in a pair. In two experiments, we find evidence that working in a pair can lead to subsequent improvements in individual performance, specifically for viewers whose accuracy is initially low. The pairs' technique therefore offers the opportunity for substantial improvements in face matching performance, along with an added training benefit.
Publisher: Elsevier BV
Date: 1994
DOI: 10.1016/0028-3932(94)90074-4
Abstract: In a dual task experiment, subjects were instructed to perform a single finger tapping task whilst either concurrently memorizing words appearing randomly on a screen in front of them, or concurrently memorizing the positions of the words. Analysis of percentage change scores revealed a significant tapping hand by task interaction: subjects showed a significantly larger left- than right-hand decrement when the task required memorizing the positions of the words and a non-significantly larger right- than left-hand decrement when the task required memorizing the words. These results suggest that the nature of the task demands determined the pattern of interference observed.
Publisher: SAGE Publications
Date: 10-2015
DOI: 10.1080/17470218.2014.1003949
Abstract: We are usually able to recognize novel instances of familiar faces with little difficulty, yet recognition of unfamiliar faces can be dramatically impaired by natural within-person variability in appearance. In a card-sorting task for facial identity, different photos of the same unfamiliar face are often seen as different people. Here we report two card-sorting experiments in which we manipulate whether participants know the number of identities present. Without constraints, participants sort faces into many identities. However, when told the number of identities present, they are highly accurate. This minimal contextual information appears to support viewers in “telling faces together”. In Experiment 2 we show that exposure to within-person variability in the sorting task improves performance in a subsequent face-matching task. This appears to offer a fast route to learning generalizable representations of new faces.
Publisher: Informa UK Limited
Date: 12-1993
DOI: 10.1080/09658219308258248
Abstract: In this paper we present an interactive activation and competition (IAC) model of name recognition. This is an extension of a previous account of name retrieval (Burton & Bruce, 1992) and is based on a functional model due to Valentine, Bredart, Lawson, and Ward (1991). Several empirical effects of name recognition are simulated: (1) names that are known are read faster than names that are unknown; (2) common names are read faster than rare names; and (3) rare names are recognised as familiar faster than common names. The simulations demonstrate that these complex effects can arise as a natural consequence of the architecture of the IAC model. Finally, we explore a modification of the Valentine et al. functional model, and conclude that the model as originally proposed is best able to account for the available data.
Publisher: SAGE Publications
Date: 10-05-2023
DOI: 10.1177/17470218231169952
Abstract: What constitutes a “threatening tone of voice”? There is currently little research exploring how listeners infer threat, or the intention to cause harm, from speakers’ voices. Here, we investigated the influence of key linguistic variables on these evaluations (Study 1). Results showed a trend for voices perceived to be lower in pitch, particularly those of male speakers, to be evaluated as sounding more threatening and conveying greater intent to harm. We next investigated the evaluation of multimodal stimuli comprising voices and faces varying in perceived dominance (Study 2). Visual information about the speaker’s face had a significant effect on threat and intent ratings. In both experiments, we observed a relatively low level of agreement among individual listeners’ evaluations, emphasising idiosyncrasy in the ways in which threat and intent-to-harm are perceived. This research provides a basis for the perceptual experience of a “threatening tone of voice,” along with an exploration of vocal and facial cue integration in social evaluation.
Publisher: American Psychological Association (APA)
Date: 2012
DOI: 10.1037/A0025782
Abstract: Four experiments investigated the role of verbal processing in the recognition of pictures of faces and objects. We used (a) a stimulus-encoding task where participants learned sequentially presented pictures in control, articulatory suppression, and describe conditions and then engaged in an old-new picture recognition test and (b) a poststimulus-encoding task where participants learned the stimuli without any secondary task and then either described or not a single item from memory before the recognition test. The main findings were as follows: First, verbalization influenced picture recognition. Second, there were contrasting influences of verbalization on the recognition of faces, compared with objects, that were driven by (a) the stage of processing during which verbalization took place (as assessed by the stimulus-encoding and poststimulus-encoding tasks), (b) whether verbalization was subvocal (whereby one goes through the motions of speaking but without making any sound) or overt, and (c) stimulus familiarity. During stimulus encoding there was a double dissociation whereby subvocal verbalization interfered with the recognition of faces but not objects, while overt verbalization benefited the recognition of objects but not faces. In addition, stimulus familiarity provided an independent and beneficial influence on performance. Poststimulus encoding, overt verbalization interfered with the recognition of both faces and objects, and this interference was apparent for unfamiliar but not familiar stimuli. Together these findings extend work on verbalization to picture recognition and place important parameters on stimulus and task constraints that contribute to contrasting beneficial and detrimental effects of verbalization on recognition memory.
Publisher: Elsevier BV
Date: 03-2018
Publisher: Wiley
Date: 23-07-2009
DOI: 10.1111/J.1551-6709.2009.01035.X
Abstract: Significant advances have been made in understanding human face recognition. However, a fundamental aspect of this process, how faces are located in our visual environment, is poorly understood and little studied. Here we examine the role of color in human face detection. We demonstrate that detection performance declines when color information is removed from faces, regardless of whether the surrounding scene context is rendered in color. Furthermore, faces rendered in unnatural colors are hard to detect, suggesting a role beyond simple segmentation. When faces are presented such that half the surface is colored appropriately, and half unnaturally, performance declines. This suggests that observers are not simply using the presence of skin color "patches" to detect faces. Rather, our data suggest that detection operates via a face template combining diagnostic color and face-shape information. These findings are consistent with color-template approaches used in some computer-based face detection systems.
Publisher: Springer Science and Business Media LLC
Date: 02-2010
DOI: 10.3758/BRM.42.1.286
Publisher: Elsevier BV
Date: 11-2008
DOI: 10.1016/J.VISRES.2008.09.001
Abstract: When faces are turned upside-down, many aspects of face processing are severely disrupted. Here we report an instance where this face inversion effect is not found. In a visual cueing paradigm an inverted face was paired with an inverted object in a cue display, followed by a target in one of the cue locations (Experiment 1). Responses were faster to face-cued targets, indicating an attention bias for inverted faces. When upright and inverted face cues were paired in Experiment 2, no attention bias for either cue type was found, suggesting that attention was drawn equally to both types of stimuli. Despite this, attention could be biased selectively toward upright or inverted faces in Experiment 3, by manipulating the predictiveness of either type of cue, which shows that observers can distinguish upright and inverted faces under these conditions. A fourth experiment provided a replication of Experiment 2 with an extended stimulus set and increased task demands. These findings suggest that visual attributes that can influence the allocation of an observer's attention to faces are available in both upright and inverted orientations.
Publisher: Elsevier BV
Date: 10-2016
DOI: 10.1016/J.CORTEX.2016.08.008
Abstract: A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity.
Publisher: Elsevier BV
Date: 12-1999
DOI: 10.1016/S0042-6989(99)00109-1
Abstract: Face recognition in photographic positive and negative was examined in a same/different matching task in five lighting direction conditions using untextured 3-D laser-scanned faces. The lighting directions were +60, +30, 0, -30 and -60 degrees, where negative values represent bottom lighting and positive values represent top lighting. Recognition performance was better for faces in positive than in negative when lighting directions were at +60 degrees. In one experiment, the same effect was also found at +30 degrees. However, faces in negative were recognized better than positive when the direction was -60 degrees. There was no difference in recognition performance when the lighting direction was 0 and -30 degrees. These results confirm that the effect of lighting direction can be a determinant of the photographic negative effect. Positive faces, which normally appear to be top-lit, may be difficult to recognize in negative partly because of the accompanying change in apparent lighting direction to bottom-lit.
Publisher: Springer Science and Business Media LLC
Date: 27-06-2018
Publisher: SAGE Publications
Date: 08-1995
DOI: 10.1080/14640749508401415
Abstract: Following exposure to 30 four-digit numbers containing an invariant “3”, subjects are found falsely to recognize novel four-digit numbers containing this invariant (positives) in preference to novel numbers that do not contain the invariant (negatives). Despite this false recognition, they are generally unable to report the rule relating test positives to the positives seen during the learning phase. This finding has been taken to show implicit learning of a rule. Two experiments are reported here which show that it is not necessary to learn this rule in order to perform at above-chance levels on this test. Most of the effect can be explained in terms of the rejection of particularly distinctive test items that are more prevalent in the test negatives. This rejection appears to be mediated by knowledge that is potentially explicit as opposed to implicit, and we present tentative evidence that it is rule-based as opposed to analogic.
Publisher: SAGE Publications
Date: 14-11-2018
Abstract: Forgetting someone’s name is a common failure of memory, and often occurs despite being able to recognise that person’s face. This gives rise to the widespread view that memory for names is generally worse than memory for faces. However, this everyday error confounds stimulus class (faces vs. names) with memory task: recognition versus recall. Here we compare memory for faces and names when both are tested in the same recognition memory framework. Contrary to the common view, we find a clear advantage for names over faces. Across three experiments, we show that recognition of previously unfamiliar names exceeds recognition of previously unfamiliar faces. This advantage persists, even when the same face pictures are repeated at learning and test—a picture-memory task known to produce high levels of performance. Differential performance between names and faces disappears in recognition memory for familiar people. The results are discussed with reference to representational complexity and everyday memory errors.
Publisher: Elsevier BV
Date: 02-1991
DOI: 10.1016/0010-0277(91)90049-A
Abstract: Eight experiments are reported showing that subjects can remember rather subtle aspects of the configuration of facial features to which they have earlier been exposed. Subjects saw several slightly different configurations (formed by altering the relative placement of internal features of the face) of each of ten different faces, and they were asked to rate the apparent age and masculinity-femininity of each. Afterwards, subjects were asked to select from pairs of faces the configuration which was identical to one previously rated. Subjects responded strongly to the central or "prototypical" configuration of each studied face where this was included as one member of each test pair, whether or not it had been studied (Experiments 1, 2 and 4). Subjects were also quite accurate at recognizing one of the previously encountered extremes of the series of configurations that had been rated (Experiment 3), but when unseen prototypes were paired with seen exemplars subjects' performance was at chance (Experiment 5). Prototype learning of face patterns was shown to be stronger than that for house patterns, though both classes of patterns were affected equally by inversion (Experiment 6). The final two experiments demonstrated that preferences for the prototype could be affected by instructions at study and by whether different exemplars of the same face were shown consecutively or distributed through the study series. The discussion examines the implications of these results for theories of the representation of faces and for instance-based models of memory.
Publisher: SAGE Publications
Date: 13-05-2021
DOI: 10.1177/03010066211014016
Abstract: One of the best-known phenomena in face recognition is the other-race effect, the observation that own-race faces are better remembered than other-race faces. However, previous studies have not put the magnitude of the other-race effect in the context of other influences on face recognition. Here, we compared the effects of (a) a race manipulation (own-race/other-race face) and (b) a familiarity manipulation (familiar/unfamiliar face) in a 2 × 2 factorial design. We found that the familiarity effect was several times larger than the race effect in all performance measures. However, participants expected race to have a larger effect on others than it actually did. Face recognition accuracy depends much more on whether you know the person’s face than whether you share the same race.
Publisher: SAGE Publications
Date: 06-2017
Abstract: The idea that most of us are good at recognizing faces permeates everyday thinking and is widely used in the research literature. However, it is a correct characterization only of familiar-face recognition. In contrast, the perception and recognition of unfamiliar faces can be surprisingly error-prone, and this has important consequences in many real-life settings. We emphasize the variability in views of faces encountered in everyday life and point out how neglect of this important property has generated some misleading conclusions. Many approaches have treated image variability as unwanted noise, whereas we show how studies that use and explore the implications of image variability can drive substantial theoretical advances.
Publisher: Wiley
Date: 08-1994
Publisher: Springer Science and Business Media LLC
Date: 23-09-2019
DOI: 10.1186/S41235-019-0193-0
Abstract: We present a series of experiments on visual search in a highly complex environment, security closed-circuit television (CCTV). Using real surveillance footage from a large city transport hub, we ask viewers to search for target individuals. Search targets are presented in a number of ways, using naturally occurring images including their passports and photo ID, social media and custody images/videos. Our aim is to establish general principles for search efficiency within this realistic context. Across four studies we find that providing multiple photos of the search target consistently improves performance. Three different photos of the target, taken at different times, give substantial performance improvements by comparison to a single target. By contrast, providing targets in moving videos or with biographical context does not lead to improvements in search accuracy. We discuss the multiple-image advantage in relation to a growing understanding of the importance of within-person variability in face recognition.
Publisher: Elsevier BV
Date: 1990
DOI: 10.1016/0028-3932(90)90013-E
Abstract: In a study of selective hemisphere activation, the performance of 30 subjects given either a "local" or "global" priming activity before and after a monoaurally presented chord analysis task was compared with an unprimed control group. The hypothesis was that ear advantage scores on the chords task would show increased left hemisphere involvement following local priming and increased right hemisphere involvement following global priming. The results failed to support this hypothesis. All subjects, however, regardless of priming condition, showed a very strong practice effect represented by a shift from a weak initial left ear (right hemisphere) advantage towards a significant right ear (left hemisphere) advantage (P less than 0.00006). The findings suggest that although the local/global priming activity did not lead to selective hemisphere activation, repeated exposure to the chords task resulted in increased use of analytic left hemisphere processing strategies as subjects became familiar with its processing requirements.
Publisher: Wiley
Date: 25-05-2007
DOI: 10.1111/J.1556-4029.2007.00458.X
Abstract: Anthropometry can be used in certain circumstances to facilitate comparison of a photograph of a suspect with that of the potential offender from surveillance footage. Experimental research was conducted to determine whether anthropometry has a place in forensic practice in confirming the identity of a suspect from a surveillance video. We examined an existing database of photographic lineups, where one video image was compared against 10 photographs, which has previously been used in psychological research. Target (1) and test (10) photos were of high quality, although taken with a different camera. The anthropometric landmarks of right and left ectocanthions, nasion, and stomion were chosen, and proportions and angle values between these landmarks were measured to compare target with test photos. Results indicate that these measurements failed to accurately identify targets. There was also no indication that any of the landmarks made a better comparison than another. It was concluded that, for these landmarks, this method does not generate the consistent results necessary for use as evidence in a court of law.
Publisher: Springer Science and Business Media LLC
Date: 07-12-2016
DOI: 10.3758/S13428-016-0837-7
Abstract: We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
Publisher: SAGE Publications
Date: 17-12-2018
Abstract: Humans are remarkably accurate at recognizing familiar faces, whereas their ability to recognize, or even match, unfamiliar faces is much poorer. However, previous research has failed to identify neural correlates of this striking behavioral difference. Here, we found a clear difference in brain potentials elicited by highly familiar faces versus unfamiliar faces. This effect starts 200 ms after stimulus onset and reaches its maximum at 400 to 600 ms. This sustained-familiarity effect was substantially larger than previous candidates for a neural familiarity marker and was detected in almost all participants, representing a reliable index of high familiarity. Whereas its scalp distribution was consistent with a generator in the ventral visual pathway, its modulation by repetition and degree of familiarity suggests an integration of affective and visual information.
Publisher: Elsevier BV
Date: 11-2019
DOI: 10.1016/J.CORTEX.2019.06.004
Abstract: In everyday life we usually recognise personally familiar faces efficiently and without apparent effort. This study examined to which extent the neural processes involved in recognising personally familiar faces depend on attentional resources by analysing event-related brain potentials. In two experiments, participants were presented with multiple ambient images of highly personally familiar and unfamiliar faces and pictures of butterflies, with a letter string superimposed on each image. Their task was either to indicate when a butterfly occurred (effectively ignoring the letter strings) or to indicate whether each letter string contained the letter X or N. Attentional resource load was manipulated in the letter task by presenting the target among different distractor letters (high load, Experiment 1) or by using only a single repeated letter in each string (low load, Experiment 2). ERPs revealed more negative amplitudes for familiar relative to unfamiliar faces under both high and low load conditions, both in the N250, reflecting the activation of perceptual face representations, and in the subsequent Sustained Familiarity Effect (SFE). Nonetheless, while the magnitude of the N250 effect was not substantially affected by attentional load, the SFE was still present but reduced in the high relative to the low load experiment. These findings suggest that perceptual face representations are activated independent of the demands of a competing task. However, the subsequent SFE, presumably reflecting more sustained activation needed to access identity-specific knowledge that can guide potential interactions, strongly relies on the availability of attentional resources.
Publisher: Wiley
Date: 30-10-2013
DOI: 10.1002/ACP.2965
Publisher: Wiley
Date: 05-2005
Publisher: SAGE Publications
Date: 18-06-2013
Abstract: We investigated whether acutely induced anxiety modifies the ability to match photographed faces. Establishing the extent to which anxiety affects face-matching accuracy is important because of the relevance of face-matching performance to critical security-related applications. Participants (N = 28) completed the Glasgow Face Matching Test twice, once during a 20-min inhalation of medical air and once during a similar inhalation of air enriched with 7.5% CO₂, which is a validated method for inducing acute anxiety. Anxiety degraded performance, but only with respect to hits, not false alarms. This finding provides further support for the dissociation between the ability to accurately identify a genuine match between faces and the ability to identify the lack of a match. Problems with the accuracy of facial identification are not resolved even when viewers are presented with a good photographic image of a face, and identification inaccuracy may be heightened when viewers are experiencing acute anxiety.
Publisher: Wiley
Date: 2006
DOI: 10.1002/ACP.1243
Publisher: No publisher found
Date: 2001
Publisher: Elsevier BV
Date: 07-2009
DOI: 10.1016/J.VISRES.2009.05.012
Abstract: The ability to detect faces in visual scenes is little understood. Across three experiments we examined whether particular facial views (for example those revealing a pair of eyes) facilitate detection while observers are searching for faces in complex visual scenes. Viewers' performance was equivalent for faces shown in frontal and mid-profile pose, but declined in profile (Experiment 1). These differences persisted when only half the face was shown, so that one eye was visible in frontal and profile view but both eyes were preserved in mid-frontal faces (Experiment 2). The same pattern was found when only the upper region of a face appeared in visual scenes, but the presentation of lower half faces eliminated all differences (Experiment 3). These findings demonstrate that the upper face mediates detection across different views, but 'a pair of eyes' cannot explain differences in detectability.
Publisher: Cambridge University Press (CUP)
Date: 03-1994
Publisher: SAGE Publications
Date: 13-08-2017
Abstract: As faces become familiar, we come to rely more on their internal features for recognition and matching tasks. Here, we assess whether this same pattern is also observed for a card sorting task. Participants sorted photos showing either the full face, only the internal features, or only the external features into multiple piles, one pile per identity. In Experiments 1 and 2, we showed the standard advantage for familiar faces—sorting was more accurate and showed very few errors in comparison with unfamiliar faces. However, for both familiar and unfamiliar faces, sorting was less accurate for external features and equivalent for internal and full faces. In Experiment 3, we asked whether external features can ever be used to make an accurate sort. Using familiar faces and instructions on the number of identities present, we nevertheless found worse performance for the external in comparison with the internal features, suggesting that less identity information was available in the former. Taken together, we show that full faces and internal features are similarly informative with regard to identity. In comparison, external features contain less identity information and produce worse card sorting performance. This research extends current thinking on the shift in focus, both in attention and importance, toward the internal features and away from the external features as familiarity with a face increases.
Publisher: Elsevier BV
Date: 1996
DOI: 10.1016/S0028-3932(96)00064-4
Abstract: An experiment is reported which examines same-different comparison of dichotically presented, two-tone chords to a probe. A prediction of a fast 'same' response indicative of holistic processing was tested. Stimuli and probes were systematically related by similarity relationships established in a previous experiment. No evidence was found for fast 'same' responding overall or on either ear, but a right ear advantage for the making of difficult 'different' decisions was found for accuracy. It is argued that the concepts of analytic and holistic processing may require some redefinition and a preliminary account is offered in terms of Krueger's [23] noisy operator model.
Publisher: Wiley
Date: 30-03-2015
DOI: 10.1111/COGS.12231
Abstract: Research in face recognition has tended to focus on discriminating between individuals, or "telling people apart." It has recently become clear that it is also necessary to understand how images of the same person can vary, or "telling people together." Learning a new face, and tracking its representation as it changes from unfamiliar to familiar, involves an abstraction of the variability in different images of that person's face. Here, we present an application of principal components analysis computed across different photos of the same person. We demonstrate that people vary in systematic ways, and that this variability is idiosyncratic: the dimensions of variability in one face do not generalize well to another. Learning a new face therefore entails learning how that face varies. We present evidence for this proposal and suggest that it provides an explanation for various effects in face recognition. We conclude by making a number of testable predictions derived from this framework.
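The core claim of this abstract, that principal components computed from one person's photos do not generalize well to another person's, can be illustrated with a minimal numpy sketch. The data here are synthetic (random pixel vectors standing in for photographs, with each "person" varying along their own hidden axes); it is an illustration of the idea, not the authors' actual stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = 64  # each "photo" is a flattened 8x8 pixel vector (hypothetical)

def person_photos(n_photos, axes, mean):
    # A person's photos = their mean face + idiosyncratic variation
    # along their own axes, plus a little pixel noise.
    coeffs = rng.normal(0.0, 3.0, (n_photos, axes.shape[0]))
    return mean + coeffs @ axes + rng.normal(0.0, 0.1, (n_photos, pixels))

# Each person gets 3 random orthonormal directions of variability.
axes_a = np.linalg.qr(rng.normal(size=(pixels, 3)))[0].T
axes_b = np.linalg.qr(rng.normal(size=(pixels, 3)))[0].T
mean_a = rng.normal(size=pixels)
mean_b = rng.normal(size=pixels)

train_a = person_photos(30, axes_a, mean_a)
test_a = person_photos(10, axes_a, mean_a)
test_b = person_photos(10, axes_b, mean_b)

# PCA over person A's photos: top right singular vectors of the centred data.
centred = train_a - train_a.mean(axis=0)
components = np.linalg.svd(centred, full_matrices=False)[2][:3]

def residual(photos):
    # Mean squared error after projecting onto A's components.
    d = photos - train_a.mean(axis=0)
    recon = d @ components.T @ components
    return np.mean((d - recon) ** 2)

# A's components reconstruct new photos of A far better than photos of B.
print(residual(test_a) < residual(test_b))  # expect True
```

The reconstruction error for held-out photos of person A stays near the noise floor, while person B's photos are poorly captured, mirroring the paper's point that learning a face means learning how that particular face varies.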
Publisher: Springer Science and Business Media LLC
Date: 11-1989
DOI: 10.3758/BF03208149
Abstract: Mark and Todd (1983) reported an experiment in which the cardioidal strain transformation was extended to three dimensions and applied to a three-dimensional (3-D) representation of the head of a 15-year-old girl in a direction that made the transformed head appear younger to the vast majority of their subjects. The experiments reported here extend this research in order to examine whether subjects are indeed detecting cardioidal strain in three dimensions, rather than detecting changes in head slant or making 2-D comparisons of the shape of the occluding contour. Three-dimensional surfaces were obtained by measuring a real head manually (Experiment 1) and with a laser scanner (Experiment 2), and transformed to different age levels using the 3-D strain transformation described by Mark and Todd (1983). There were no statistically significant differences in the accuracy with which relative age judgments could be made in response to pairs of profiles, pairs of 3/4 views, or pairs of mixed views (profile plus 3/4 view), suggesting that subjects can indeed extract the cardioidal strain level of the head in three dimensions. However, an additional effect that emerged in these studies was that judgments were crucially affected by the instructions given to subjects, which suggests that factors other than cardioidal strain are important in making judgments about rich data structures.
Publisher: MIT Press - Journals
Date: 04-2009
Abstract: We used ERPs to investigate neural correlates of face learning. At learning, participants viewed video clips of unfamiliar people, which were presented either with or without voices providing semantic information. In a subsequent face-recognition task (four trial blocks), learned faces were repeated once per block and presented interspersed with novel faces. To disentangle face from image learning, we used different images for face repetitions. Block effects demonstrated that engaging in the face-recognition task modulated ERPs between 170 and 900 msec poststimulus onset for learned and novel faces. In addition, multiple repetitions of different exemplars of learned faces elicited an increased bilateral N250. Source localizations of this N250 for learned faces suggested activity in fusiform gyrus, similar to that found previously for N250r in repetition priming paradigms [Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409, 2002]. Multiple repetitions of learned faces also elicited increased central–parietal positivity between 400 and 600 msec and caused a bilateral increase of inferior–temporal negativity compared with novel faces. Semantic information at learning enhanced recognition rates. Faces that had been learned with semantic information elicited somewhat less negative amplitudes between 700 and 900 msec over left inferior–temporal sites. Overall, the findings demonstrate a role of the temporal N250 ERP in the acquisition of new face representations across different images. They also suggest that, compared with visual presentation alone, additional semantic information at learning facilitates postperceptual processing in recognition but does not facilitate perceptual analysis of learned faces.
Publisher: Elsevier BV
Date: 05-1991
DOI: 10.1016/0010-0277(91)90041-2
Abstract: An implementation of Bruce and Young's (1986) functional model of face recognition is used to examine patterns of covert face recognition previously reported in a prosopagnosic patient, PH. Although PH is unable to recognize overtly the faces of people known to him, he shows normal patterns of face processing when tested indirectly. A simple manipulation of one set of connections in the implemented model induces behaviour consistent with patterns of results from PH obtained in semantic priming and interference tasks. We compare this account with previous explanations of covert recognition and demonstrate that the implemented model provides the most natural and parsimonious account available. Two further patients are discussed who show deficits in person perception. The first (MS) is prosopagnosic but shows no covert recognition. The second (ME) is not prosopagnosic, but cannot access semantic information relating to familiar people. The model provides an account of recognition impairments which is sufficiently general also to be useful in describing these patients.
Publisher: Elsevier BV
Date: 06-2021
Publisher: Elsevier BV
Date: 04-2001
DOI: 10.1016/S0042-6989(01)00002-5
Abstract: Pictures of facial expressions from the Ekman and Friesen set (Ekman, P., Friesen, W. V., (1976). Pictures of facial affect. Palo Alto, California: Consulting Psychologists Press) were submitted to a principal component analysis (PCA) of their pixel intensities. The output of the PCA was submitted to a series of linear discriminant analyses which revealed three principal findings: (1) a PCA-based system can support facial expression recognition, (2) continuous two-dimensional models of emotion (e.g. Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178) are reflected in the statistical structure of the Ekman and Friesen facial expressions, and (3) components for coding facial expression information are largely different to components for facial identity information. The implications for models of face processing are discussed.
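The pipeline this abstract describes, PCA on pixel intensities followed by a linear discriminant analysis, can be sketched in a few lines of numpy. The data below are synthetic two-class "images" (random pixel vectors, not the Ekman and Friesen photographs), and the discriminant step is a minimal projection onto the line between class means rather than a full LDA, so this is an illustration of the approach under stated assumptions, not a reproduction of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus set: 40 flattened greyscale "images" (8x8 = 64 pixels),
# 20 per expression class, with the classes well separated in pixel space.
class_a = rng.normal(0.0, 1.0, (20, 64)) + 2.0  # e.g. "happy" (placeholder)
class_b = rng.normal(0.0, 1.0, (20, 64)) - 2.0  # e.g. "sad" (placeholder)
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

# PCA of pixel intensities: centre the data, then take the leading
# right singular vectors as components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 5
scores = Xc @ Vt[:n_components].T  # each image as a point in component space

# Simple linear discriminant on the component scores: project onto the
# direction joining the two class means and threshold at the midpoint.
m0 = scores[y == 0].mean(axis=0)
m1 = scores[y == 1].mean(axis=0)
w = m1 - m0
projected = scores @ w
threshold = (m0 @ w + m1 @ w) / 2
predictions = (projected > threshold).astype(int)
accuracy = (predictions == y).mean()
print(f"classification accuracy on the training set: {accuracy:.2f}")
```

With synthetic classes this cleanly separated, the discriminant on the PCA scores classifies every training image correctly; the interesting finding in the paper is that the same kind of decomposition supports expression discrimination on real face images, with largely different components carrying expression and identity information.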
Publisher: Wiley
Date: 11-1989
Publisher: Springer Science and Business Media LLC
Date: 25-12-2011
Publisher: SAGE Publications
Date: 2016
DOI: 10.1080/17470218.2015.1017513
Abstract: Matching unfamiliar faces is a difficult task. Here we ask whether it is possible to improve performance by providing multiple images to support matching. In two experiments we observe that accuracy improves as viewers are provided with additional images on which to base their match. This technique leads to fast learning of an individual, but the effect is identity-specific: Despite large improvements in viewers’ ability to match a particular person's face, these improvements do not generalize to other faces. Experiment 2 demonstrated that trial-by-trial feedback provided no additional benefits over the provision of multiple images. We discuss these results in terms of familiar and unfamiliar face processing and draw out some implications for training regimes.
Publisher: SAGE Publications
Date: 10-2002
DOI: 10.1080/02724980244000189
Abstract: Semantic priming in person recognition has been studied extensively. In a typical experiment, participants are asked to make a familiarity decision to target items that have been immediately preceded by related or unrelated primes. Facilitation is usually observed from related primes, and this priming is equivalent across stimulus domains (i.e., faces and names prime one another equally). Structural models of face recognition (e.g., IAC: Burton, Bruce, & Johnston, 1990) accommodate these effects by proposing a level of person identity nodes (PINs) at which recognition routes converge, and which allow access to a common pool of semantics. We present three experiments that examine semantic priming for different decisions. Priming for a semantic decision (e.g., British/American?) shows exactly the same pattern that is normally observed for a familiarity decision. The pattern is equivalent for name and face recognition. However, no semantic priming is observed when participants are asked to make a sex decision. These results constrain future models of face processing and are discussed with reference to current theories of semantic priming.
Publisher: No publisher found
Date: 2002
DOI: 10.1016/S0028-3932(02)00050-7
Abstract: We investigated repetition priming in the recognition of famous people by recording event-related brain potentials (ERPs) and reaction times (RTs). Participants performed speeded two-choice responses depending on whether or not a stimulus showed a famous person. In Experiment 1, a facilitation was found in RTs to famous (but not to unfamiliar) faces when primed by the same face shown in an earlier priming phase of the experiment. In ERPs, an influence of repetition priming was observed neither for the N170 nor for a temporal N250 component which in previous studies had been shown to be sensitive to immediate face repetitions. ERPs to primed unfamiliar faces were more negative over right occipitotemporal areas than those to unprimed faces, but this effect was specific for repetitions of the same image, consistent with recent findings. In contrast, ERPs to primed familiar faces were more positive than those to unprimed faces at parietal sites from 500-600 ms after face onset, and these priming effects were comparable regardless of whether the same or a different image of the celebrity had served as prime. In Experiment 2, similar results were found for name recognition-a facilitation in RTs to primed familiar but not unfamiliar names, and a parietal positivity to primed names around 500-600 ms. ERP repetition effects showed comparable topographies for faces and names, consistent with the idea of a common underlying source. With reference to current models of face recognition, we suggest that these ERP repetition effects for familiar stimuli reflect a change in post-perceptual representations for people, rather than a neural correlate of recognition at a perceptual level.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 10-04-2015
DOI: 10.1167/15.4.1
Abstract: Research on ensemble encoding has found that viewers extract summary information from sets of similar items. When shown a set of four faces of different people, viewers merge identity information from the exemplars into a representation of the set average. Here, we presented sets containing unconstrained images of the same identity. In response to a subsequent probe, viewers recognized the exemplars accurately. However, they also reported having seen a merged average of these images. Importantly, viewers reported seeing the matching average of the set (the average of the four presented images) more often than a nonmatching average (an average of four other images of the same identity). These results were consistent for both simultaneous and sequential presentation of the sets. Our findings support previous research suggesting that viewers form representations of both the exemplars and the set average. Given the unconstrained nature of the photographs, we also provide further evidence that the average representation is invariant to several high-level characteristics.
Publisher: Springer Science and Business Media LLC
Date: 25-06-2018
Publisher: Informa UK Limited
Date: 15-10-2023
Publisher: Cambridge University Press (CUP)
Date: 08-2000
DOI: 10.1017/S0140525X00273354
Abstract: Distributed representations can be distributed in very many ways. The specific choice of representation for a specific model is based on considerations unique to the area of study. General statements about the effectiveness of distributed models are therefore of little value. The popularity of these models is discussed, particularly with respect to reporting conventions.
Publisher: SAGE Publications
Date: 02-2017
DOI: 10.1080/17470218.2016.1173076
Abstract: Developmental prosopagnosia (DP) is commonly referred to as ‘face blindness’, a term that implies a perceptual basis to the condition. However, DP presents as a deficit in face recognition and is diagnosed using memory-based tasks. Here, we test face identification ability in six people with DP, who are severely impaired on face memory tasks, using tasks that do not rely on memory. First, we compared DP to control participants on a standardized test of unfamiliar face matching using facial images taken on the same day and under standardized studio conditions (Glasgow Face Matching Test; GFMT). Scores for DP participants did not differ from normative accuracy scores on the GFMT. Second, we tested face matching performance on a test created using images that were sourced from the Internet and so varied substantially due to changes in viewing conditions and in a person's appearance (Local Heroes Test; LHT). DP participants showed significantly poorer matching accuracy on the LHT than control participants, for both unfamiliar and familiar face matching. Interestingly, this deficit is specific to ‘match’ trials, suggesting that people with DP may have particular difficulty in matching images of the same person that contain natural day-to-day variations in appearance. We discuss these results in the broader context of individual differences in face matching ability.
Publisher: American Psychological Association (APA)
Date: 08-2022
DOI: 10.1037/XLM0001063
Abstract: Humans excel in familiar face recognition, but often find it hard to make identity judgements of unfamiliar faces. Understanding of the factors underlying the substantial benefits of familiarity is at present limited, but the effect is sometimes qualified by the way in which a face is known: for example, personal acquaintance sometimes gives rise to stronger familiarity effects than exposure through the media. Given the different quality of personal versus media knowledge, for example in one's emotional response or level of interaction, some have suggested qualitative differences between representations of people known personally or from media exposure. Alternatively, observed differences could reflect quantitative differences in the level of familiarity. We present 4 experiments investigating potential contributory influences to face familiarity effects in which observers view pictures showing their friends, favorite celebrities, celebrities they dislike, celebrities about whom they have expressed no opinion, and their own face. Using event-related potential indices with high temporal resolution and multiple highly varied everyday ambient images as a strong test of face recognition, we focus on the N250 and the later Sustained Familiarity Effect (SFE). All known faces show qualitatively similar responses relative to unfamiliar faces. Regardless of personal- or media-based familiarity, N250 reflects robust visual representations, successively refined over increasing exposure, while SFE appears to reflect the amount of identity-specific semantic information known about a person. These modulations of visual and semantic representations are consistent with face recognition models which emphasize the degree of familiarity but do not distinguish between different types of familiarity. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: Center for Open Science
Date: 12-09-2019
Abstract: Making new acquaintances necessitates learning to recognise previously unfamiliar faces. In the current study, we investigated this process by staging real-world social interactions between actors and the participants. Participants (N=22) completed a face-matching behavioural task in which they matched photographs of the actors (whom they had yet to meet), or faces similar to the actors (henceforth called foils). Participants were then scanned using functional magnetic resonance imaging (fMRI) while viewing photographs of actors and foils. Immediately after exiting the scanner, participants met the actors for the first time and interacted with them for ten minutes. On subsequent days, participants completed a second behavioural experiment and then a second fMRI scan. Prior to each session, actors again interacted with the participants for ten minutes. Behavioural results showed that social interactions improved performance accuracy when matching actor photographs, but not foil photographs. The fMRI analysis focused on face-selective areas in the right hemisphere, including the fusiform face area (FFA), occipital face area (OFA), posterior superior temporal sulcus (pSTS) and amygdala, as well as the right hippocampus. Results showed a greater response to actor photographs than foil photographs across all regions of interest after social interactions had occurred. Our results demonstrate that short social interactions were sufficient to learn and discriminate previously unfamiliar individuals. Moreover, these learning effects were present in brain areas involved in face processing and memory.
Publisher: Elsevier BV
Date: 08-1998
DOI: 10.1016/S0042-6989(97)00439-2
Abstract: The performance of two different computer systems for representing faces was compared with human ratings of similarity and distinctiveness, and human memory performance, on a specific set of face images. The systems compared were a graph-matching system (Lades M, Vorbrüggen JC, Buhmann J, Lange J, von der Malsburg C, Würtz RP, Konen W. IEEE Trans Comput 1993:300-311.) and coding based on principal components analysis (PCA) of image pixels (Turk M, Pentland A. J Cognitive Neurosci 1991:71-86.). Replicating other work, the PCA-based system produced very much better performance at recognising faces, and higher correlations with human performance with the same images, when the images were initially standardised using a morphing procedure and separate analysis of 'shape' and 'shape-free' components then combined. Both the graph-matching and (shape + shape-free) PCA systems were equally able to recognise faces shown with changed expressions, both provided reasonable correlations with human ratings and memory data, and there were also correlations between the facial similarities recorded by each of the computer models. However, comparisons with human similarity ratings of faces with and without the hair visible, and prediction of memory performance with and without alteration in face expressions, suggested that the graph-matching system was better at capturing aspects of the appearance of the face, while the PCA-based system seemed better at capturing aspects of the appearance of specific images of faces.
Publisher: Association for Research in Vision and Ophthalmology (ARVO)
Date: 02-2009
DOI: 10.1167/9.2.7
Publisher: SAGE Publications
Date: 24-09-2019
Abstract: Unfamiliar face matching is a difficult task. In typical experiments, viewers see isolated face pairs and have to decide whether they show the same or different people. Recent research shows that embedding faces into passports introduces a response bias, such that viewers are more likely to accept two pictures as showing the same person. Here, we investigate the cause of this bias. In a series of experiments, we vary the apparent authority of the identity documents, testing passports, driving licences, and student ID. By comparison to isolated face matching, the results show a bias towards responding 'same person' for each document type. However, when ID information (name, date of birth, etc.) was removed from documents, the induced bias disappeared. We conclude that the bias does not rely on perceived authority, but instead seems to occur only in the presence of identifying information, even though that information is task-irrelevant.
Publisher: SAGE Publications
Date: 10-2003
DOI: 10.1068/P5021
Abstract: We examined whether prior knowledge of a person affects the visual processes involved in learning a face. In two experiments, subjects were taught to associate human faces with characters they knew (from the TV show The Simpsons) or characters they did not (novel names). In each experiment, knowledge of the character predicted performance in a recognition memory test, relying only on old/new confidence ratings. In experiment 1, we established the technique and showed that there is a face-learning advantage for known people, even when face items are counterbalanced for familiarity across the experiment. In experiment 2 we replicated the effect in a setting which discouraged subjects from attending more to known than unknown people, and eliminated any visual association between face stimuli and a character from The Simpsons. We conclude that prior knowledge about a person can enhance learning of a new face.
Publisher: American Psychological Association (APA)
Date: 2010
DOI: 10.1037/A0019057
Abstract: Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene, and only then to fixate on a person. When a person's face was rendered invisible in scenes, bodies were detected as quickly as faces without bodies, indicating that both are equally useful for person detection. Detection was optimized when face and body could be seen, but observers preferentially fixated faces, reinforcing the notion of a prominent role for the face in social perception. These findings have implications for claims of attention capture by faces in that they demonstrate a mediating influence of body cues and general scanning principles in natural scenes.
Publisher: SAGE Publications
Date: 08-2004
DOI: 10.1080/02724980343000657
Abstract: Adults are better at recognizing familiar faces from the internal facial features (eyes, nose, mouth) than from the external facial features (hair, face outline). However, previous research suggests that this “internal advantage” does not appear until relatively late in childhood, and some studies suggest that children rely on external features to recognize all faces, whether familiar or not. We use a matching task to examine face processing in 7–8- and 10–11-year-old children. We use a design in which all face stimuli can be used as familiar items (for participants who are classmates) and unfamiliar items (for participants from a different school). Using this design, we find an internal feature advantage for matching familiar faces, for both groups of children. The same children were then shown the external and internal features of their classmates and were asked to name or otherwise identify them. Again, both age groups identified more of their classmates correctly from the internal than the external features. This is the first time an internal advantage has been reported in this age group. Results suggest that children as young as 7 process faces in the same way as do adults, and that once procedural difficulties are overcome, the standard effects of familiarity are observed.
Publisher: Informa UK Limited
Date: 06-2003
Publisher: American Psychological Association (APA)
Date: 03-2022
Publisher: Elsevier BV
Date: 03-2021
Publisher: Elsevier BV
Date: 12-2011
DOI: 10.1016/J.COGNITION.2011.08.001
Abstract: Psychological studies of face recognition have typically ignored within-person variation in appearance, instead emphasising differences between individuals. Studies typically assume that a photograph adequately captures a person's appearance, and for that reason most studies use just one, or a small number of photos per person. Here we show that photographs are not consistent indicators of facial appearance because they are blind to within-person variability. Crucially, this within-person variability is often very large compared to the differences between people. To investigate variability in photos of the same face, we collected images from the internet to sample a realistic range for each individual. In Experiments 1 and 2, unfamiliar viewers perceived images of the same person as being different individuals, while familiar viewers perfectly identified the same photos. In Experiment 3, multiple photographs of any individual formed a continuum of good to bad likeness, which was highly sensitive to familiarity. Finally, in Experiment 4, we found that within-person variability exceeded between-person variability in attractiveness. These observations are critical to our understanding of face processing, because they suggest that a key component of face processing has been ignored. As well as its theoretical significance, this scale of variability has important practical implications. For example, our findings suggest that face photographs are unsuitable as proof of identity.
Publisher: Springer Science and Business Media LLC
Date: 1987
DOI: 10.1007/BF00142925
Publisher: Elsevier BV
Date: 11-2002
DOI: 10.1016/S0926-6410(02)00142-8
Abstract: We investigated immediate repetition effects in the recognition of famous faces by recording event-related brain potentials (ERPs) and reaction times (RTs). Participants recognized celebrities' faces that were preceded by either the same picture, a different picture of the same celebrity, or a different famous face. Face repetition caused two distinct ERP modulations. Repetitions elicited a strong modulation of an N250 component (approximately 200-300 ms) over inferior temporal regions. The N250 modulation showed a degree of image specificity in that it was still significant for repetitions across different pictures, though reduced in amplitude. ERPs to repeated faces were also more positive than those to unprimed faces at parietal sites from 400 to 600 ms, but these later effects were largely independent of whether the same or a different image of the celebrity had served as prime. Finally, no influence of repetition was observed for the N170 component. Dipole source modelling suggested that the N250 repetition effect (N250r) may originate from the fusiform gyrus. In contrast, source localisation of the N170 implicated a significantly more posterior location, corresponding to a lateral occipitotemporal source outside the fusiform gyrus.
Publisher: Informa UK Limited
Date: 08-2008
Publisher: Public Library of Science (PLoS)
Date: 26-02-2016
Publisher: Wiley
Date: 29-09-2021
DOI: 10.1111/PSYP.13950
Abstract: Human observers recognize the faces of people they know efficiently and without apparent effort. Consequently, recognizing a familiar face is often assumed to be an automatic process beyond voluntary control. However, there are circumstances in which a person might seek to hide their recognition of a particular face. The present study therefore used event-related potentials (ERPs) and a classifier based on logistic regression to determine if it is possible to detect whether a viewer is familiar with a particular face, regardless of whether the participant is willing to acknowledge it or not. In three experiments, participants were presented with highly variable “ambient” images of personally familiar and unfamiliar faces, while performing an incidental butterfly detection task (Experiment 1), an explicit familiarity judgment task (Experiment 2), and a concealed familiarity task in which they were asked to deny familiarity with one truly known facial identity while acknowledging familiarity with a second known identity (Experiment 3). In all three experiments, we observed substantially more negative ERP amplitudes at occipito-temporal electrodes for familiar relative to unfamiliar faces starting approximately 200 ms after stimulus onset. Both the earlier N250 familiarity effect, reflecting visual recognition of a known face, and the later sustained familiarity effect, reflecting the integration of visual with additional identity-specific information, were similar across experiments and thus independent of task demands. These results were further supported by the classifier analysis. We conclude that ERP correlates of familiar face recognition are largely independent of voluntary control and discuss potential applications in forensic settings.
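The classifier analysis mentioned here, logistic regression over ERP responses, can be sketched as follows. Everything in the snippet is assumed for illustration (the simulated occipito-temporal amplitudes, the familiarity effect size, the plain gradient-descent fit); it is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-feature trials: mean occipito-temporal amplitude in
# an N250-like window, with familiar faces shifted more negative.
n = 200
familiar = rng.normal(loc=-2.0, scale=1.0, size=(n, 1))
unfamiliar = rng.normal(loc=0.0, scale=1.0, size=(n, 1))
X = np.vstack([familiar, unfamiliar])
y = np.concatenate([np.ones(n), np.zeros(n)])    # 1 = familiar face

# Plain logistic regression fitted by batch gradient descent.
Xb = np.hstack([X, np.ones((2 * n, 1))])         # append a bias column
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))            # predicted P(familiar)
    w -= 0.1 * Xb.T @ (p - y) / len(y)           # gradient of the log-loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5
accuracy = (pred == y).mean()
```

This computes in-sample accuracy only; a real analysis of this kind would cross-validate across trials (and participants) before claiming that familiarity can be detected.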
Publisher: Elsevier BV
Date: 02-2018
Publisher: Informa UK Limited
Date: 02-2001
Publisher: Elsevier BV
Date: 02-2020
DOI: 10.1016/J.COGPSYCH.2019.101260
Abstract: We can recognise people that we know across their lifespan. We see family members age, and we can recognise celebrities across long careers. How is this possible, despite the very large facial changes that occur as people get older? Here we analyse the statistical properties of faces as they age, sampling photos of the same people from their 20s to their 70s. Across a number of simulations, we observe that individuals' faces retain some idiosyncratic physical properties across the adult lifespan that can be used to support moderate levels of age-independent recognition. However, we found that models based exclusively on image-similarity only achieved limited success in recognising faces across age. In contrast, more robust recognition was achieved with the introduction of a minimal top-down familiarisation procedure. Such models can incorporate the within-person variability associated with a particular individual to show a surprisingly high level of generalisation, even across the lifespan. The analysis of this variability reveals a powerful statistical tool for understanding recognition, and demonstrates how visual representations may support operations typically thought to require conceptual properties.
Publisher: Wiley
Date: 19-06-2018
DOI: 10.1111/BJOP.12318
Abstract: Unfamiliar face matching is a surprisingly difficult task, yet we often rely on people's matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in face matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us detect images of the same identity as well as distinguishing between images of different identities. Study 1 compares matching performance when images in the face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores face matching accuracy when only information about the lower part of the face is available. Results demonstrate that an open-mouth smile changes the face in an idiosyncratic way which aids face matching decisions. Such findings have practical implications for matching in the applied context where we typically use neutral images to represent ourselves in official documents.
Publisher: American Psychological Association (APA)
Date: 2009
DOI: 10.1037/0096-1523.35.1.108
Abstract: The direction of another person's gaze is difficult to ignore when presented at the center of attention. In 6 experiments, perception of unattended gaze was investigated. Participants made directional (left-right) judgments to gazing-face or pointing-hand targets, which were accompanied by a distractor face or hand. Processing of the distractor was assessed via congruency effects on target response times. Congruency effects were found from the direction of distractor hands but not from the direction of distractor gazes (Experiment 1). This pattern persisted even when distractor sizes were increased to compensate for their peripheral presentation (Experiments 2 and 5). In contrast, congruency effects were exerted by profile heads (Experiments 3 and 4). In Experiment 6, isolated eye region distractors produced no congruency effects, even when they were presented near the target. These results suggest that, unlike other facial information, gaze direction cannot be perceived outside the focus of attention.
Publisher: Wiley
Date: 09-2001
Abstract: We examine the proposal that social problem-solving in depression may be improved with the retrieval of specific autobiographical memories. Social problem-solving was assessed with the Means-End Problem-Solving task (MEPS; Platt & Spivack, 1975a). Depressed and non-depressed participants were required either to retrieve a specific memory prior to generating a MEPS solution (primed condition) or to report on the memories retrieved during MEPS performance after giving their MEPS solution (non-primed condition). Participants also judged whether the memories retrieved had been helpful or unhelpful for the process of solution generation. In both depressed and non-depressed individuals, priming increased specific memory retrieval but did not improve MEPS performance. An interaction between depression and priming revealed that priming increased the retrieval of helpful memories in the depressed sample. Specificity is not, in itself, a sufficient retrieval aim for successful social problem-solving. However, specific memory priming may be beneficial in depression because it facilitates the recognition of memories which are helpful for problem-solving.
Publisher: Wiley
Date: 11-10-2016
DOI: 10.1002/ACP.3281
Publisher: American Psychological Association (APA)
Date: 2003
Abstract: Identifying a criminal captured on conventional security video typically requires matching poor-quality video footage against a high-quality photograph. The authors examined the consequence of such a large discrepancy in image quality. Recognition and matching performance of this incongruent-quality condition was compared with that of a congruent one, in which a high-quality photograph was reduced to a low-quality video. Recognition memory was little affected by this manipulation, whereas matching performance of the incongruent condition enjoyed occasional advantage. The results show that person identification can tolerate a large discrepancy between image qualities of matching stimuli when one of the images is of poor quality.
Publisher: Elsevier BV
Date: 09-2019
DOI: 10.1016/J.COGNITION.2019.04.027
Abstract: A paradoxical finding from recent studies of face perception is that observers are error-prone and inconsistent when judging the identity of unfamiliar faces, but nevertheless reasonably consistent when judging traits. Our aim is to understand this difference. Using everyday ambient images of faces, we show that visual image statistics can predict observers' consensual impressions of trustworthiness, attractiveness and dominance, which represent key dimensions of evaluation in leading theoretical accounts of trait judgement. In Study 1, image statistics derived from ambient images of multiple face identities were able to account for 51% of the variance in consensual impressions of entirely novel ambient images. Shape properties were more effective predictors than surface properties, but a combination of both achieved the best results. In Study 2 and Study 3, statistics derived from multiple images of a particular face achieved the best generalisation to new images of that face, but there was nonetheless significant generalisation between images of the faces of different individuals. Hence, whereas idiosyncratic variability across different images of the same face is sufficient to cause substantial problems in judging the identities of unfamiliar faces, there are consistencies between faces which are sufficient to support (to some extent) consensual trait judgements. Furthermore, much of this consistency can be captured in simple operational models based on image statistics.
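An "operational model based on image statistics" of the kind described can be sketched as a regularised linear regression from image-derived predictors to consensual trait ratings. The snippet is purely illustrative: the synthetic "image statistics", the noise level, and the ridge penalty are assumptions, not the paper's actual feature set or model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each face image is summarised by 30 statistics
# (e.g. shape and surface descriptors); consensual trait ratings are a
# noisy linear function of those statistics.
n_train, n_test, n_stats = 400, 100, 30
true_w = rng.normal(size=n_stats)

def make_set(n):
    stats = rng.normal(size=(n, n_stats))              # image statistics
    ratings = stats @ true_w + rng.normal(scale=3.0, size=n)
    return stats, ratings

S_train, r_train = make_set(n_train)
S_test, r_test = make_set(n_test)

# Ridge regression in closed form: w = (S'S + lam*I)^-1 S'r.
lam = 1.0
w = np.linalg.solve(S_train.T @ S_train + lam * np.eye(n_stats),
                    S_train.T @ r_train)

# Proportion of variance in ratings of entirely new images explained by
# the fitted model (cf. the 51% reported for Study 1).
pred = S_test @ w
r2 = 1 - ((r_test - pred) ** 2).sum() / ((r_test - r_test.mean()) ** 2).sum()
```

The key property mirrored here is generalisation to unseen images: the model is evaluated on ratings of images it was never fitted on.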
Publisher: Wiley
Date: 14-06-2011
DOI: 10.1111/J.2044-8295.2011.02039.X
Abstract: The Bruce and Young (1986) framework makes a number of important distinctions between the types of representation needed to recognize a familiar face. Here, we return to these, focussing particularly on face recognition units. We argue that such representations need to incorporate idiosyncratic within-person variability, asking questions such as 'What counts as a picture of Harrison Ford?'. We describe a mechanism for achieving this, and discuss the relation between image variability and episodic face memories, in the context of behavioural and neurophysiological data.
Publisher: Wiley
Date: 11-2013
DOI: 10.1002/ACP.2971
Publisher: Informa UK Limited
Date: 02-11-2017
Publisher: SAGE Publications
Date: 16-09-2023
Publisher: SAGE Publications
Date: 10-2002
DOI: 10.1080/02724980244000242
Abstract: Three experiments are reported, which examine generation of knowledge in the McGeorge and Burton (1990) invariant learning task. In this task, participants are exposed to 30 four-digit numbers containing an invariant “3”. Following this, participants demonstrate a preference for novel numbers containing this invariant over numbers without it. Despite above-chance performance on this pseudo-memory test, participants appear unable to verbalize anything pertinent to the invariant. Here we introduce a novel version of this task, relying on generation of items rather than a preference test. We argue that this new task engages different processing resources, resulting in different patterns of performance. In Experiment 1, invariant learning was demonstrated using a novel fragment completion test. Experiment 2 found that suppressing articulation inhibited learning, implying that this test task accesses phonological knowledge. It is suggested that using the fragment completion test engages different processing resources during test from those in a preference test. Experiment 3 reinforces this position by demonstrating that knowledge appears to transfer across surface features, a result that seems to contradict recent findings by Stadler, Warren, and Lesch (2000). A resolution is offered, drawing on episodic accounts of implicit learning.
Publisher: American Psychological Association (APA)
Date: 03-2017
DOI: 10.1037/REV0000048
Publisher: Elsevier BV
Date: 09-2014
Publisher: Informa UK Limited
Date: 05-2008
Publisher: Elsevier BV
Date: 06-1990
Publisher: SAGE Publications
Date: 11-2001
DOI: 10.1080/713756003
Abstract: An interactive activation and competition account (Burton, Bruce, & Johnston, 1990) of the semantic priming effect in person recognition studies relies on the fact that primes and targets (people) have semantic information in common. However, recent investigations into the type of relationship needed to mediate the semantic priming effect have suggested that the prime and target must be close associates (e.g., Barry, Johnston, & Scanlan, 1998; Young, Flude, Hellawell, & Ellis, 1994). A review of these and similar papers suggests the possibility of a small but non-reliable effect based purely on categorial relationships. Experiment 1 provided evidence that when participants were asked to make a name familiarity decision it was possible to boost this small categorial effect when multiple (four) primes were presented prior to the target name. Results from Experiment 2 indicated that the categorial effect was not due to the particular presentation times of the primes. This boosted categorial effect was shown to cross domains (names to faces) in Experiment 3 and persist in Experiment 4 when the task involved naming the target face. The similarity of the pattern of results produced by the associative priming effect and this boosted categorial effect suggests that the two may be due to the same underlying mechanism in semantic memory.
Publisher: Wiley
Date: 05-2001
Publisher: Elsevier BV
Date: 2018
DOI: 10.1016/J.COGNITION.2017.09.001
Abstract: Photographs of people are commonly said to be 'good likenesses' or 'poor likenesses', and this is a concept that we readily understand. Despite this, there has been no systematic investigation of what makes an image a good likeness, or of which cognitive processes are involved in making such a judgement. In three experiments, we investigate likeness judgements for different types of images: natural images of film stars (Experiment 1), images of film stars from specific films (Experiment 2), and iconic images and face averages (Experiment 3). In all three experiments, participants rated images for likeness and completed speeded name verification tasks. We consistently show that participants are faster to identify images which they have previously rated as a good likeness compared to a poor likeness. We also consistently show that the more familiar we are with someone, the higher likeness rating we give to all images of them. A key finding is that our perception of likeness is idiosyncratic (Experiments 1 and 2), and can be tied to our specific experience of each individual (Experiment 2). We argue that likeness judgements require a comparison between the stimulus and our own representation of the person, and that this representation differs according to our prior experience with that individual. This has theoretical implications for our understanding of how we represent familiar people, and practical implications for how we go about selecting images for identity purposes such as photo-ID.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Anthony Michael Burton.