How, What and Who in Human Communication: Movement of Face and Voice. The aim of this project is to identify the essential characteristics of tone, affect, and identity from face and voice using a combination of signal processing, biological, and behavioural techniques in order to develop a comprehensive model of auditory-visual speech processing and communication. This research will significantly improve understanding of the basis of auditory-visual perception and production in tonal languages and in affective communication; facilitate links between neurophysiological processes and auditory-visual speech processing; and contribute to applications in automatic person recognition, automatic speech recognition, text-to-speech systems, and talking-head aids for the hearing impaired.
Special Research Initiatives - Grant ID: SR0354596
Funder
Australian Research Council
Funding Amount
$20,000.00
Summary
Perception and Action in Auditory Scenes (PAAS): Neural, Behavioural, Computational and Mechanical Systems. Auditory scenes are temporal and ephemeral yet pervasively influence human life. How humans negotiate such scenes has not been solved, a fact highlighted by attempts to build machines to respond to speech, warnings, etc. in real-world situations with room reverberation, different talkers, and background noise. No one discipline can solve such problems. In this network, outstanding researchers from the physical, medical, human, and social sciences with interests in speech, music and audition will provide insights into how humans and machines localize, recognize, interpret and produce auditory events, and will advance frontier technologies, e.g., automatic speech recognition, hearing prostheses, and auditory monitoring/warning systems.
Filters reveal what flicker conceals: temporal processing in the human visual system. I have recently discovered a new form of camouflage using 10 Hz luminance flicker. This project will quantify this effect and examine the extent to which it generalises across colour and spatial dimensions and to video sequences depicting natural scenes. This information is expected to provide foundational input to technologies relating to national security that rely on visual concealment. This research will also examine the extent to which filtering out these camouflaging frequencies enhances our sensitivity to low temporal frequency information. This decamouflaging aspect of my research is expected to improve the clarity of digital video-based technologies, including ultrasound, educational, infotainment and defence applications.