Linkage Infrastructure, Equipment And Facilities - Grant ID: LE0668448
Funder
Australian Research Council
Funding Amount
$150,000.00
Summary
See Hear! Multimodal Recording and Analysis Facility. High-resolution recording and analysis will exploit the full potential of motion capture, with progress towards automatic recognition of gesture and, eventually, real-time systems. Automatic tracking and recognition systems are in high demand, and the interlacing of data from multiple modes is now computationally achievable. See Hear! will be coded using techniques in multimodal fusion: tracking of bodies will be enhanced by locating and recognizing facial features, and a learning algorithm will be used to classify gesture from patterns of force and physiological response. In the future, full interactivity will be achieved by interconnecting visual and auditory data, with flow-on to applications in the performing arts, rehabilitation and security.
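The multimodal fusion idea described above can be sketched in miniature: weight and concatenate feature vectors from two modalities (here, motion capture and a physiological channel), then classify gestures by nearest centroid in the fused space. All names, weights and data below are hypothetical illustrations, not the project's actual pipeline.

```python
import math

def fuse(motion_feats, physio_feats, w_motion=0.7, w_physio=0.3):
    """Feature-level fusion: weight each modality, then concatenate."""
    return [w_motion * x for x in motion_feats] + [w_physio * x for x in physio_feats]

def classify(sample, centroids):
    """Return the gesture label whose centroid is nearest in fused space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy per-gesture centroids: two motion dimensions plus one physiological dimension.
centroids = {
    "wave":  fuse([1.0, 0.1], [0.2]),
    "point": fuse([0.1, 1.0], [0.8]),
}
sample = fuse([0.9, 0.2], [0.25])
print(classify(sample, centroids))  # → wave
```

A real system would learn the class representations from labelled recordings rather than fix them by hand; the sketch only shows how fusing modalities before classification lets evidence from one channel sharpen decisions driven by another.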
Towards efficient real-time generation of detectable musical macrostructure. Efficient generation of detectable large-scale musical structure is needed for commercial audiovisual applications and for creative music making, but computer mediation of music has focused elsewhere: on sound synthesis and sequencing, editing, mixing and notation. I will apply computational processes, such as the handling of chunks of genetic information in evolution, to generate large-scale musical structure. I will control segmentation; framing of internal segments; spatialisation; and the overlaying of separable musical streams. Expert cognitive assessment of the resultant structures will be investigated, and theories of segmentation, streaming and their relationships with expression and affect developed and tested.
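The evolutionary handling of "chunks" mentioned above can be sketched as a toy genetic algorithm: candidate large-scale forms are strings of reusable segment labels, crossover exchanges whole chunks of a form between parents, and fitness scores a form against a desired macrostructure. The target form, segment pool and parameters are illustrative assumptions only.

```python
import random

random.seed(0)
SEGMENTS = "ABC"          # pool of reusable musical segment labels
TARGET = "AABACABA"       # hypothetical desired large-scale form

def fitness(form):
    """Count positions that already match the desired macrostructure."""
    return sum(1 for a, b in zip(form, TARGET) if a == b)

def crossover(p1, p2):
    """Exchange whole chunks of the form at a random cut point."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(form, rate=0.1):
    """Occasionally replace a segment with another from the pool."""
    return "".join(random.choice(SEGMENTS) if random.random() < rate else s
                   for s in form)

# Evolve a population of candidate forms towards the target structure.
pop = ["".join(random.choice(SEGMENTS) for _ in TARGET) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]   # elitism: keep the best forms unchanged
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
print(pop[0], fitness(pop[0]))
```

In practice the fitness function would come from perceptual or music-theoretic measures of detectable structure rather than a fixed template; the sketch only shows how chunk-wise recombination can assemble large-scale form from smaller segments.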