ORCID Profile
0000-0001-8299-4798
Current Organisation
University of Adelaide
Publisher: Center for Open Science
Date: 23-07-2020
Abstract: Although emotion expressions are typically dynamic and include the whole person, much emotion recognition research uses static, posed facial expressions. In this study, we created a stimulus set of dynamic, naturalistic expressions drawn from professional tennis matches to determine whether movement would result in better recognition. We examined participants’ judgments of static vs. dynamic expressions when viewing an isolated face, an isolated body, or a whole person. Dynamic expressions increased recognition of whether the player had won or lost the point. In addition, recognition improved when the whole person was presented as opposed to only the face or body. However, overall recognition of wins and losses was poor, with recognition for isolated faces being poorer than chance for winning players. Our findings highlight the importance of incorporating dynamic stimuli and support previous research showing that recognition of naturalistic expressions differs greatly from the commonly-used posed and isolated facial expressions of emotion. A wider range of naturalistic stimuli should be incorporated into future research to better understand how emotion recognition functions in daily life.
Publisher: SAGE Publications
Date: 27-01-2022
DOI: 10.1177/03057356211066964
Abstract: Starting university can be challenging for students, and emerging research indicates that music listening may be a helpful coping resource. At this stage, little is known about the music listening motivations of international and domestic university students, and whether there are differences between these two cohorts in terms of whether their music listening is an effective coping resource for increased well-being. These questions were examined with an online cross-sectional survey of first-year students at a major Australian university (N = 475; 61.9% domestic, 38.1% international). Music listening was an effective coping strategy for managing stress for 72.6% of domestic students and 59.2% of international students. The relationships between music and well-being differed between cohorts—for international students (but not domestic), higher endorsement of music listening as an effective coping strategy was associated with greater well-being. In addition, a moderated mediation analysis demonstrated that, in contrast to international students, for domestic students, more music listening was associated with more use of music for emotional reasons and decreased well-being. Students’ relationship with music as a coping resource is complex, and further research is necessary to determine the direction of these effects.
Publisher: Elsevier BV
Date: 03-2020
DOI: 10.1016/J.JECP.2019.104737
Abstract: The ability to explicitly recognize emotions develops gradually throughout childhood, and children usually have greater difficulty in recognizing emotions from the voice than from the face. However, little is known about how children integrate vocal and facial cues to recognize an emotion, particularly during mid to late childhood. Furthermore, children with an autism spectrum disorder often show a reduced ability to recognize emotions, especially when integrating emotion from multiple modalities. The current preliminary study explored the ability of typically developing children aged 7-9 years to match emotional tones of voice to facial expressions and whether this ability varies according to the level of autism-like traits. Overall, children were the least accurate when matching happy and fearful voices to faces, commonly pairing happy voices with angry faces and fearful voices with sad faces. However, the level of autism-like traits was not associated with matching accuracy. These results suggest that 7- to 9-year-old children have difficulty in integrating vocal and facial emotional expressions but that differences in cross-modal emotion matching in relation to the broader autism phenotype are not evident in this task for this age group with the current sample.
Publisher: Elsevier BV
Date: 2021
Publisher: Elsevier BV
Date: 11-2016
DOI: 10.1016/J.JECP.2016.02.012
Abstract: Recent research has indicated that language provides an important contribution to adults' conceptions of emotional expressions and their associated categories, but how language influences children's expression category acquisition has yet to be explored. Across two studies, we provide evidence that when preschoolers (2-4years) encounter a novel label, they use a process of elimination to match it with its expected expression. Children successfully used a process of elimination to match a single expression to one of several labels (Study 1) and to match a single label to one of several expressions (Study 2). These data highlight one possible mechanism that children may use to learn about the expressions they encounter and may shed light on the ways in which children's expression categories are constructed.
Publisher: Frontiers Media SA
Date: 04-2021
DOI: 10.3389/FPSYG.2021.647065
Abstract: The COVID-19 pandemic brought rapid changes to travel, learning environments, work conditions, and social support, which caused stress for many University students. Research with young people has revealed music listening to be among their most effective strategies for coping with stress. As such, this survey of 402 first-year Australian University students (73.9% female, M age = 19.6; 75% domestic and 25% international) examined the effectiveness of music listening during COVID-19 compared with other stress management strategies, whether music listening for stress management was related to well-being, and whether differences emerged between domestic and international students. We also asked participants to nominate a song that helped them to cope with COVID-19 stress and analyzed its features. Music listening was among the most effective stress coping strategies, and was as effective as exercise, sleep, and changing location. Effectiveness of music listening as a coping strategy was related to better well-being but not to level of COVID-19 related stress. Although international students experienced higher levels of COVID-19 stress than domestic students, well-being was comparable in the two cohorts. Nominated songs tended to be negative in valence and moderate in energy. No correlations were found between any self-report measure and the valence and energy of nominated coping songs. These findings suggest that although domestic and international students experienced different levels of stress resulting from COVID-19, music listening remained an effective strategy for both cohorts, regardless of the type of music they used for coping.
Publisher: Elsevier BV
Date: 07-2018
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0036789
Abstract: Prior research has identified a facial expression for positive pride, but no expression for negative pride, hubris. In the present study, professional actors created expressions intended to convey hubris. In Study 1 (N = 52), participants were shown dynamic expressions and attributed confidence, positive valence, and positive personality traits to the positive pride expression, but conceit, neutral valence, and negative personality traits to the hubris expression. In Study 2 (N = 60), participants were more likely to attribute conceit to a dynamic hubris expression than a static one; no such difference was found for positive pride.
Publisher: Informa UK Limited
Date: 06-04-2017
Publisher: Elsevier BV
Date: 2016
DOI: 10.1016/J.JECP.2015.07.016
Abstract: In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N = 120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition when a novel nonsense facial expression was included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination: they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge.
Publisher: Wiley
Date: 27-03-2013
DOI: 10.1111/BJDP.12011
Abstract: Past studies found that, for preschoolers, a story specifying a situational cause and behavioural consequence is a better cue to fear and disgust than is the facial expression of those two emotions, but the facial expressions used were static. Two studies (Study 1: N = 68, 36-68 months; Study 2: N = 72, 49-90 months) tested whether this effect could be reversed when the expressions were dynamic and included facial, postural, and vocal cues. Children freely labelled emotions in three conditions: story, still face, and dynamic expression. Story remained a better cue than still face or dynamic expression for fear and disgust and also for the later emerging emotions of embarrassment and pride.
Publisher: Springer Science and Business Media LLC
Date: 16-10-2017
Publisher: Elsevier BV
Date: 11-2019
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2018.09.004
Abstract: Energization is the process of initiating and sustaining a response over time. It has been described as one of three key "supervisory" attentional control processes associated with the frontal lobes. Attentional mechanisms, such as energization, are critical for a range of cognitive functions, such as spontaneous speech and other higher-order tasks. We aimed to investigate the process of energization in a case series of patients with progressive supranuclear palsy (PSP). Patients with a diagnosis of PSP (N = 5), patient controls with a neurodegenerative condition (Alzheimer's disease N = 3, frontotemporal dementia N = 2) and healthy older adult controls (N = 30) were assessed on a standard neuropsychological battery, including executive tasks and standard attention and language tests. Energization was investigated using word fluency tasks, samples of spontaneous speech and an experimental button-pressing concentration task. Response rates for the word fluency, spontaneous speech and concentration tasks were separated into time periods, in order to compare response rates at different points across the tasks (e.g., first 15 s vs. last 45 s in a 60 s task). Four PSP patients showed a clear response pattern indicative of a decrease in energization. Healthy and patient controls remained consistent in their responding over time. Understanding how these underlying processes are impaired in PSP can ultimately inform intervention and management strategies, and has theoretical implications for models of spoken language production.
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.BEPROC.2019.05.006
Abstract: Research examining children's understanding of emotional expressions has generally used static, isolated facial expressions presented in a non-interactive context. However, these tasks do not resemble children's experiences with expressions in daily life, where they must attend to a range of information, including others' facial expressions, movements, and the situation surrounding the expression. In this research, we examine the development of visual attention to another's emotional expressions during a live interaction. Via an eye-tracker, children (4-11 years old) and adults viewed an experimenter open a series of opaque boxes and make an expression (happiness, sadness, fear, or disgust) based on the object inside. Participants determined which of four possible objects (stickers, a broken toy, a spider, or dog poop) was in the box. We examined the proportion of the trial in which participants looked to three areas of the face (the eyes, mouth, and nose area), and the available contextual information (the box held by the experimenter, the four objects). Although children spent less time looking to the face than adults did, their pattern of visual attention within the face and to object AOIs did not differ from that of adults. Finally, for adults, increased accuracy was linked to spending less time looking to the objects whereas increased accuracy for children was not strongly linked to any emotion cue. These data indicate that although children spend less time looking to the face during live interactions than adults do, the proportion of time spent looking to areas of the face and context are generally adult-like.
Publisher: Informa UK Limited
Date: 18-12-2018
DOI: 10.1080/02699931.2018.1554554
Abstract: We examined the utility of a gaze cueing paradigm to examine sensitivity to differences among negatively valenced expressions. Participants judged target stimuli (dangerous or safe), the location of which was cued by the gaze direction of a central face. Dawel et al. reported that gaze cueing effects (faster response times on valid vs. invalid trials) were larger when the central face displayed fear than when it displayed happiness. Our aim was to determine whether this effect was specific to fear, to all threat-related expressions (fear, anger), or to all negatively valenced expressions (fear, anger, sadness, disgust) with the aim of using this protocol to study the development of implicit discrimination of negatively valenced expressions. Across five experiments in which we varied the number of models (1 vs. 4), the number of expressions (2 vs. 5), and the country of residence of participants (Canada vs. Australia) we found no evidence that the magnitude of gaze cueing effects is modulated by expression. We discuss our failure to replicate in the context of the broader literature.
Publisher: Springer Science and Business Media LLC
Date: 08-04-2022
DOI: 10.1038/S41598-022-09397-1
Abstract: Human visual systems have evolved to extract ecologically relevant information from complex scenery. In some cases, the face in the crowd visual search task demonstrates an anger superiority effect, where anger is allocated preferential attention. Across three studies (N = 419), we tested whether facial hair guides attention in visual search and influences the speed of detecting angry and happy facial expressions in large arrays of faces. In Study 1, participants were faster to search through clean-shaven crowds and detect bearded targets than to search through bearded crowds and detect clean-shaven targets. In Study 2, targets were angry and happy faces presented in neutral backgrounds. Facial hair of the target faces was also manipulated. An anger superiority effect emerged that was augmented by the presence of facial hair, which was due to the slower detection of happiness on bearded faces. In Study 3, targets were happy and angry faces presented in either bearded or clean-shaven backgrounds. Facial hair of the background faces was also systematically manipulated. A significant anger superiority effect was revealed, although this was not moderated by the target’s facial hair. Rather, the anger superiority effect was larger in clean-shaven than bearded face backgrounds. Together, the results suggest that facial hair does influence detection of emotional expressions in visual search; however, rather than facilitating an anger superiority effect as a potential threat detection system, facial hair may reduce detection of happy faces within the face in the crowd paradigm.
Publisher: Elsevier BV
Date: 09-2011
DOI: 10.1016/J.JECP.2011.03.014
Abstract: In daily experience, children have access to a variety of cues to others' emotions, including face, voice, and body posture. Determining which cues they use at which ages will help to reveal how the ability to recognize emotions develops. For happiness, sadness, anger, and fear, preschoolers (3-5 years, N = 144) were asked to label the emotion conveyed by dynamic cues in four cue conditions. The Face-only, Body Posture-only, and Multi-cue (face, body, and voice) conditions all were well recognized (M > 70%). In the Voice-only condition, recognition of sadness was high (72%), but recognition of the three other emotions was significantly lower (34%).
Publisher: Informa UK Limited
Date: 05-12-2020
DOI: 10.1080/02699931.2019.1700482
Abstract: Previous research on the development of emotion recognition in music has focused on classical, rather than popular music. Such research does not consider the impact of lyrics on judgements of emotion in music, impact that may differ throughout development. We had 172 children, adolescents, and adults (7- to 20-year-olds) judge emotions in popular music. In song excerpts, the melody of the music and the lyrics had either congruent valence (e.g. happy lyrics and melody), or incongruent valence (e.g. scared lyrics, happy melody). We also examined participants' judgements of vocal bursts, and whether emotion identification was linked to emotion lexicon. Recognition of emotions in congruent music increased with age. For incongruent music, age was positively associated with judging the emotion in music by the melody. For incongruent music with happy or sad lyrics, younger participants were more likely to answer with the emotion of the lyrics. For scared incongruent music, older adolescents were more likely to answer with the lyrics than older and younger participants. Age groups did not differ on their emotion lexicons, nor recognition of emotion in vocal bursts. Whether children use lyrics or melody to determine the emotion of popular music may depend on the emotion conveyed.
Publisher: Center for Open Science
Date: 02-10-2018
Abstract: To what extent do children believe in real, unreal, natural and supernatural figures relative to each other, and to what extent are features of culture responsible for belief? Are some figures, like Santa Claus or an alien, perceived as more real than figures like Princess Elsa or a unicorn? We categorized 13 figures into five a priori categories based on 1) whether children receive direct evidence of the figure’s existence, 2) whether children receive indirect evidence of the figure’s existence, 3) whether the figure was associated with culture-specific rituals or norms, and 4) whether the figure was explicitly presented as fictional. We anticipated that the categories would be endorsed in the following order: ‘Real People’ (a person known to the child, The Wiggles), ‘Cultural Figures’ (Santa Claus, The Easter Bunny, The Tooth Fairy), ‘Ambiguous Figures’ (Dinosaurs, Aliens), ‘Mythical Figures’ (unicorns, ghosts, dragons), and ‘Fictional Figures’ (Spongebob Squarepants, Princess Elsa, Peter Pan). In total, we analysed responses from 176 children (aged 2 - 11 years) and 56 adults for ‘how real’ they believed 13 individual figures were (95 children were examined online by their parents, and 81 children were examined by trained research assistants). A cluster analysis, based exclusively on children’s ‘realness’ scores, revealed a structure supporting our hypotheses, and multilevel regressions revealed a sensible hierarchy of endorsement with differing developmental trajectories for each category of figures. We advance the argument that cultural rituals are a special form of testimony that influences children’s reality/fantasy distinctions, and that rituals and norms for ‘Cultural Figures’ are a powerful and under-researched factor in generating and sustaining a child’s endorsement of a figure’s reality status. All our data and materials are publicly available at osf.io/wurxy/?view_only=845c07c064af448a99cb668cd8dda0f7
Publisher: Public Library of Science (PLoS)
Date: 07-2020
Publisher: Springer Science and Business Media LLC
Date: 28-02-2018
DOI: 10.1007/S10803-018-3522-0
Abstract: The current study investigated whether those with higher levels of autism-like traits process emotional information from speech differently to those with lower levels of autism-like traits. Neurotypical adults completed the autism-spectrum quotient and an emotional priming task. Vocal primes with varied emotional prosody, semantics, or a combination, preceded emotional target faces. Prime-target pairs were congruent or incongruent in their emotional content. Overall, congruency effects were found for combined prosody-semantic primes, however no congruency effects were found for semantic or prosodic primes alone. Further, those with higher levels of autism-like traits were not influenced by the prime stimuli. These results suggest that failure to integrate emotional information across modalities may be characteristic of the broader autism phenotype.
Publisher: Elsevier BV
Date: 07-2011
Publisher: Public Library of Science (PLoS)
Date: 17-06-2020
Publisher: American Psychological Association (APA)
Date: 11-2015
DOI: 10.1037/DEV0000048
Abstract: Adults distinguish expressions of hubris from those of positive pride. To determine whether children (N = 183; 78-198 months old) make a similar distinction, we asked them to attribute emotion labels and a variety of social characteristics to dynamic expressions intended to convey hubris and positive pride. Like adults, children attributed different emotion labels to the expressions, and this tendency increased with age. Girls were more likely to distinguish between the expressions than boys were. Children also associated more positive social characteristics with the expression of positive pride and more negative characteristics with the expression of hubris.
Publisher: Elsevier BV
Date: 03-2012
DOI: 10.1016/J.JECP.2011.09.004
Abstract: To chart the developmental path of children's attribution of pride to others, we presented children (4 years 0 months to 11 years 11 months of age, N = 108) with video clips of head-and-face, body posture, and multi-cue (both head-and-face and body posture simultaneously) expressions that adults consider to convey pride. Across age groups, 4- and 5-year-olds did not attribute pride to any expression presented, 6- and 7-year-olds attributed pride only to the multi-cue expression, and 8- to 11-year-olds attributed pride to both the head-and-face and multi-cue expressions. Children of all ages viewed the postural expression as anger rather than pride. Developmentally, pride is first attributed only when several cues are present and only later when a single cue (head-and-face) is present.
Publisher: Cambridge University Press (CUP)
Date: 15-11-2020
DOI: 10.1017/S1355617719001097
Abstract: Language and communication are fundamental to the human experience, and, traditionally, spoken language is studied as an isolated skill. However, before propositional language (i.e., spontaneous, voluntary, novel speech) can be produced, propositional content or ‘ideas’ must be formulated. This review highlights the role of broader cognitive processes, particularly ‘executive attention’, in the formulation of propositional content (i.e., ‘ideas’) for propositional language production. Several key lines of evidence converge to suggest that the formulation of ideas for propositional language production draws on executive attentional processes. Larger-scale clinical research has demonstrated a link between attentional processes and language, while detailed case studies of neurological patients have elucidated specific idea formulation mechanisms relating to the generation, selection and sequencing of ideas for expression. Furthermore, executive attentional processes have been implicated in the generation of ideas for propositional language production. Finally, neuroimaging studies suggest that a widely distributed network of brain regions, including parts of the prefrontal and parietal cortices, supports propositional language production. Theoretically driven experimental research studies investigating mechanisms involved in the formulation of ideas are lacking. We suggest that novel experimental approaches are needed to define the contribution of executive attentional processes to idea formulation, from which comprehensive models of spoken language production can be developed. Clinically, propositional language impairments should be considered in the context of broader executive attentional deficits.
Publisher: SAGE Publications
Date: 2013
Abstract: Evidence does not support the claim that observers universally recognize basic emotions from signals on the face. The percentage of observers who matched the face with the predicted emotion (matching score) is not universal, but varies with culture and language. Matching scores are also inflated by the commonly used methods: a within-subject design; posed, exaggerated facial expressions (devoid of context); multiple examples of each type of expression; and a response format that funnels a variety of interpretations into one word specified by the experimenter. Without these methodological aids, matching scores are modest and subject to various explanations.
Publisher: Elsevier BV
Date: 10-2020
Publisher: Center for Open Science
Date: 29-06-2021
Abstract: Perceptions of traits (such as trustworthiness or dominance) are influenced by the emotion displayed on a face. For instance, the same individual is reported as more trustworthy when they look happy than when they look angry. This overextension of emotional expressions has been shown with facial expressions, but whether this phenomenon also occurs when viewing postural expressions was unknown. We sought to examine how expressive behaviour of the body would influence judgements of traits and how sensitivity to this cue develops. In the context of a storybook, adults (N = 35) and children (aged 5 to 8 years; N = 60) selected one of two partners to help face a challenge. The challenges required either a trustworthy or dominant partner. Participants chose between a partner with an emotional (happy/angry) face and neutral body or one with a neutral face and emotional body. As predicted, happy over neutral facial expressions were preferred when selecting a trustworthy partner and angry postural expressions were preferred over neutral when selecting a dominant partner. Children’s performance was not adult-like on most tasks. The results demonstrate that emotional postural expressions can also influence judgements of others’ traits, but that postural influence on trait judgements develops throughout childhood.
Publisher: Cambridge University Press (CUP)
Date: 16-03-2021
DOI: 10.1017/S0305000921000192
Abstract: Emotion can influence various cognitive processes. Communication with children often involves exaggerated emotional expressions and emotive language. Children with autism spectrum disorder often show a reduced tendency to attend to emotional information. Typically developing children aged 7 to 9 years who varied in their level of autism-like traits learned the nonsense word names of nine novel toys, which were presented with either happy, fearful, or neutral emotional cues. Emotional cues had no influence on word recognition or recall performance. Eye-tracking data showed differences in visual attention depending on the type of emotional cues and level of autism-like traits. The findings suggest that the influence of emotion on attention during word learning differs according to whether the children have lower or higher levels of autism-like traits, but this influence does not affect word learning outcomes.
Publisher: Elsevier BV
Date: 04-2019
DOI: 10.1016/J.JECP.2018.12.002
Abstract: Adults' first impressions of others are influenced by subtle facial expressions: happy faces are perceived as high in trustworthiness, whereas angry faces are rated as low in trustworthiness and high in threat and dominance. Little is known about the influence of emotional expressions on children's first impressions. Here we examined the influence of subtle expressions of happiness, anger, and fear on children's implicit judgments of trustworthiness and dominance with the aim of providing novel insights about both the development of first impressions and whether children are able to utilize emotional expressions when making implicit, rather than explicit, judgments of traits. In the context of a computerized storybook, children (4- to 11-year-olds) and adults selected one of two twins (two images of the same identity displaying different emotional expressions) to help them face a challenge; some challenges required a trustworthy partner, and others required a dominant partner. One twin posed a neutral expression, and the other posed a subtle emotional expression of happiness, fear, or anger. Whereas adults were more likely to select a happy partner on trust trials than on dominance trials and were more likely to select an angry partner on dominance trials than on trust trials, we found no evidence that children's choices reflected a combined influence of desirable trait and emotion. Follow-up experiments involving explicit trait judgments, explicit emotion recognition, and implicit first impression judgments in the context of intense emotional expressions provide valuable insights into the slow development of implicit trait judgments based on first impressions.
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.YHBEH.2019.04.005
Abstract: Mating strategy theories assert that women's preferences for androgen dependent traits in men are stronger when the costs of reduced paternal investment are lowest. Past research has shown that preferences for facial masculinity are stronger among nulliparous and non-pregnant women than pregnant or parous women. In two studies, we examine patterns in women's preferences for men's facial hair - likely the most visually conspicuous and sexually dimorphic of men's secondary sexual traits - when evaluating men's masculinity, dominance, age, fathering, and attractiveness. Two studies were conducted among heterosexual pregnant women, mothers, non-contraceptive and contraceptive users. Study 1 used a between-subjects sample (N = 2103) and found that mothers had significantly higher preferences for beards when judging fathering than all other women. Pregnant women and mothers also judged beards as more masculine and older, but less attractive, than non-contraceptive and contraceptive users. Parous women judged beards higher for age, masculinity and fathering, but lower for attractiveness, than nulliparous women. Irrespective of reproductive status, beards were judged as looking more dominant than clean-shaven faces. Study 2 used a within-subjects design (N = 53) among women surveyed during pregnancy and three months post-partum. Judgments of parenting skills were higher for bearded stimuli during pregnancy among women having their first baby, whereas among parous women parenting skills judgments for bearded stimuli were higher post-partum. Our results suggest that mothers are sensitive to beardedness as a masculine secondary sexual characteristic that may denote parental investment, providing evidence that women's mate preferences could reflect sexual selection for direct benefits.
Publisher: SAGE Publications
Date: 25-03-2019
Abstract: The beard is arguably one of the most obvious signals of masculinity in humans. Almost 150 years ago, Darwin suggested that beards evolved to communicate formidability to other males, but no studies have investigated whether beards enhance recognition of threatening expressions, such as anger. We found that the presence of a beard increased the speed and accuracy with which participants recognized displays of anger but not happiness (Experiment 1, N = 219). This effect was not due to negative evaluations shared by beardedness and anger or to negative stereotypes associated with beardedness, as beards did not facilitate recognition of another negative expression, sadness (Experiment 2, N = 90), and beards increased the rated prosociality of happy faces in addition to the rated masculinity and aggressiveness of angry faces (Experiment 3, N = 445). A computer-based emotion classifier reproduced the influence of beards on emotion recognition (Experiment 4). The results suggest that beards may alter perceived facial structure, facilitating rapid judgments of anger in ways that conform to evolutionary theory.
Publisher: Elsevier BV
Date: 08-2018
DOI: 10.1016/J.JECP.2018.03.001
Abstract: The majority of studies of emotion perception have relied on static isolated facial expressions. These expressions differ markedly from real-world expressions that include movement and multiple cues (e.g., bodies), leaving our understanding of how expression perception develops incomplete. We examined the looking patterns of younger children (4- and 5-year-olds), older children (8- and 9-year-olds), and adults while watching dynamic video clips or static images of four different emotional expressions: happiness, sadness, anger, and fear. Expressions were presented in three conditions: face only, body only, and whole person (face and body). Children's and adults' looking patterns were affected by whether stimuli were static or dynamic and by which cues were available. Children looked to the head less for static stimuli than for dynamic stimuli, but this difference did not emerge for adults. Children and adults attended to different expression cues when presented with static images. These results demonstrate the need for increased use of dynamic stimuli in developmental studies of expression.
Publisher: Springer Science and Business Media LLC
Date: 25-05-2021
Publisher: Elsevier BV
Date: 09-2021
Publisher: Public Library of Science (PLoS)
Date: 10-09-2013
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0022576
Abstract: Prior research suggested that pride is recognized only when a head and facial expression (e.g., tilted head with a slight smile) is combined with a postural expression (e.g., expanded body and arm gestures). However, these studies used static photographs. In the present research, participants labeled the emotion conveyed by four dynamic cues to pride, presented as video clips: head and face alone, body posture alone, voice alone, and an expression in which head and face, body posture, and voice were presented simultaneously. Participants attributed pride to the head and face alone, even when postural or vocal information was absent. Pride can be conveyed without body posture or voice.
No related grants have been discovered for Nicole Nelson.