Biomimetic Ultra-Thin Compound-Eye Vision Sensor. Recent advances in microelectronic fabrication technology make it possible today to fabricate paper-thin imaging systems. The proposed research will target the development of such systems to enable the concept of 'stick-on cameras'. Potential applications of this new imaging technology include head-mounted camera patches for rescue workers, smart credit cards capable of identifying their users by fingerprint technology, discreet monitoring of venues, and detecting driver drowsiness inside a car, as well as assisting in medical diagnosis and minimally invasive surgery. This leading-edge research will enhance the reputation of Australia as a leader in frontier technologies.
Linkage Infrastructure, Equipment And Facilities - Grant ID: LE0668448
Funder
Australian Research Council
Funding Amount
$150,000.00
Summary
See Hear! Multimodal Recording and Analysis Facility. High-resolution recording and analysis will exploit the full potential of motion capture, with progress towards automatic recognition of gesture and, eventually, real-time systems. Automatic tracking and recognition systems are in high demand, and the interlacing of data from multiple modes is now computationally achievable. SeeHear! will be coded using techniques in multimodal fusion: tracking of bodies will be enhanced by locating and recognizing facial features, and a learning algorithm will be used to classify gesture from patterns of force and physiological response. In the future, full interactivity will be achieved by interconnecting visual and auditory data, with flow-on applications in the performing arts, rehabilitation and security.
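The summary's core idea of multimodal fusion, combining features from several recording modes before classifying a gesture, can be illustrated with a minimal sketch. This is a hypothetical toy example, not the SeeHear! implementation: it assumes feature-level (early) fusion by concatenation and a simple nearest-centroid classifier, and all feature names and data below are invented for illustration.

```python
# Toy sketch of feature-level multimodal fusion: per-modality feature
# vectors (e.g. motion capture, force, physiological response) are
# concatenated into one fused vector, and a single classifier labels
# the gesture. Hypothetical data; not the actual SeeHear! system.
import numpy as np

def fuse(*modalities):
    """Concatenate per-modality feature vectors into one fused vector."""
    return np.concatenate(modalities)

class NearestCentroid:
    """Toy gesture classifier: predicts the class whose mean fused
    vector is closest (Euclidean distance) to the input."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [X[[i for i, lab in enumerate(y) if lab == c]].mean(axis=0)
             for c in self.labels_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Hypothetical training set: two gestures, each example fusing a
# 2-D motion feature with a 1-D force feature.
X = np.array([fuse([0.1, 0.2], [0.9]),   # "wave" examples
              fuse([0.2, 0.1], [0.8]),
              fuse([0.9, 0.8], [0.1]),   # "point" examples
              fuse([0.8, 0.9], [0.2])])
y = ["wave", "wave", "point", "point"]

clf = NearestCentroid().fit(X, y)
print(clf.predict(fuse([0.15, 0.15], [0.85])))  # → wave
```

A real system would replace the concatenation step with learned fusion (or late fusion of per-modality classifiers) and use far richer features, but the structure — separate modality streams merged before a single decision — is the same.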