Mobility and Location Information providing Social Equality for Blind and Vision Impaired persons. Providing reliable situational information to the blind and visually impaired (BVI) can deliver far greater independence. Confidence and autonomy will result from knowing where they are, what is in that location, how to get to a destination, and the location-related information. This will not only save significant welfare costs but will also provide social equality to the BVI. The underlying technology can also readily be extended to other socially useful and profitable applications.
How, What and Who in Human Communication: Movement of Face and Voice. The aim of this project is to identify the essential characteristics of tone, affect, and identity from face and voice using a combination of signal processing, biological, and behavioural techniques in order to develop a comprehensive model of auditory-visual speech processing and communication. This research will significantly improve understanding of the basis of auditory-visual perception and production in tonal languages and in affective communication; facilitate links between neurophysiological processes and auditory-visual speech processing; and contribute to applications in automatic person recognition, automatic speech recognition, text-to-speech systems, and talking head aids for the hearing impaired.
Special Research Initiatives - Grant ID: SR0354596
Funder
Australian Research Council
Funding Amount
$20,000.00
Summary
Perception and Action in Auditory Scenes (PAAS): Neural, Behavioural, Computational and Mechanical Systems. Auditory scenes are temporal and ephemeral, yet pervasively influence human life. How humans negotiate such scenes has not been solved, a fact highlighted by attempts to build machines that respond to speech, warnings, etc., in real-world situations with room reverberation, different talkers, and background noise. No one discipline can solve such problems. In this network, outstanding researchers from the physical, medical, human, and social sciences with interests in speech, music, and audition will provide insights into how humans and machines localize, recognize, interpret, and produce auditory events, and will advance frontier technologies, e.g., automatic speech recognition, hearing prostheses, and auditory monitoring/warning systems.