ORCID Profile
0000-0003-3806-1493
Current Organisation
University of Cambridge
Works
Publisher: ISCA, Date: 20-08-2017
Publisher: ACM, Date: 26-10-2015
Publisher: ACM, Date: 16-10-2016
Publisher: ACM, Date: 15-10-2018
Publisher: IEEE, Date: 09-2019
Publisher: IEEE, Date: 23-05-2022
Publisher: ISCA, Date: 15-09-2019
Publisher: ISCA, Date: 08-09-2016
Publisher: ACM, Date: 23-10-2017
Publisher: Frontiers Media SA, Date: 23-12-2021
DOI: 10.3389/FCOMP.2021.767767
Abstract: People perceive emotions via multiple cues, predominantly speech and visual cues, and many emotion recognition systems utilize both audio and visual cues. Moreover, static aspects of emotion (e.g., the speaker's arousal level is high or low) and dynamic aspects of emotion (e.g., the speaker is becoming more aroused) may be perceived via different expressive cues, and these two aspects are integrated to provide a unified sense of emotional state. However, existing multimodal systems focus on only a single aspect of emotion perception, and the contributions of different modalities to modeling static versus dynamic emotion aspects are not well explored. In this paper, we investigate the relative salience of the audio and video modalities for emotion state prediction and emotion change prediction using a multimodal Markovian affect model. Experiments conducted on the RECOLA database showed that the audio modality is better at modeling the emotion state for arousal and the video modality is better for valence, whereas audio shows a clear advantage over video in modeling emotion changes for both arousal and valence.
Publisher: Institute of Electrical and Electronics Engineers (IEEE), Date: 07-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE), Date: 04-2021
Publisher: IEEE, Date: 04-2018
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Ting Dang.