High Performance Runtimes for Next Generation Languages. X10 is a type-safe, memory-safe programming language. This project will help make X10 a viable choice for secure software on the next generation of computer architectures. The proposed project will contribute to a better understanding of the fundamental processes that advance knowledge and facilitate the development of technological innovations (a research priority goal). By addressing a key emerging problem and consolidating Australian-based expertise in this area, the project will also enhance Australia's capacity in frontier technologies research.
Linkage Infrastructure, Equipment And Facilities - Grant ID: LE0668448
Funder
Australian Research Council
Funding Amount
$150,000.00
Summary
See Hear! Multimodal Recording and Analysis Facility. High resolution recording and analysis will exploit the full potential of motion capture with progress towards automatic recognition of gesture and, eventually, real-time systems. Automatic tracking and recognition systems are in high demand and the interlacing of data from multiple modes is now computationally achievable. SeeHear! will be coded using techniques in multimodal fusion - tracking of bodies will be enhanced by locating and recognizing facial features, and a learning algorithm used to classify gesture from patterns of force and physiological response. In the future, full interactivity will be achieved by interconnecting visual and auditory data with a flow-on to applications in the performing arts, rehabilitation and security.
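The abstract's fusion-and-classification idea can be sketched in miniature, purely for illustration: per-modality feature vectors (motion capture, force, physiological response) are concatenated at the feature level, and a simple learned classifier assigns a gesture class. This is not the facility's actual implementation; all function names and data here are hypothetical, and a nearest-centroid learner stands in for whatever learning algorithm the project uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(motion, force, physio):
    """Feature-level fusion: concatenate per-modality feature vectors."""
    return np.concatenate([motion, force, physio])

def fit_centroids(X, y):
    """Learn one centroid per gesture class from fused training vectors."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(centroids, x):
    """Assign the gesture class whose learned centroid is nearest to x."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Two synthetic gesture classes with distinct multimodal signatures
# (class 0 centred near 0.0, class 1 near 1.0 in every modality).
X = np.array([fuse(rng.normal(m, 0.1, 3), rng.normal(m, 0.1, 2), rng.normal(m, 0.1, 2))
              for m in (0.0, 1.0) for _ in range(20)])
y = np.array([c for c in (0, 1) for _ in range(20)])

centroids = fit_centroids(X, y)
sample = fuse(np.full(3, 1.0), np.full(2, 1.0), np.full(2, 1.0))
print(classify(centroids, sample))  # prints 1: nearest centroid is class 1
```

Feature-level (early) fusion like this is the simplest of the fusion strategies the abstract alludes to; decision-level fusion, where each modality is classified separately and the votes combined, is the usual alternative when modalities arrive at different rates.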