Solve it or Ignore it? The Challenge of Alignment Distortion and Creating Next Generation Automatic Facial Expression Detection. The last two decades have seen an escalating interest in automating the coding of facial expressions. Despite this keen interest, the promise of computer vision systems to accurately code facial expressions in natural circumstances remains elusive. Our interdisciplinary team will research a new paradigm that accounts for facial alignment distortion directly rather than aiming to achieve invariance to it. The project will also research new data-agnostic feature compaction capabilities to enable scalable learning on the world's largest and most challenging expression dataset, available to us through international collaboration. Tackling these two major open problems will make accurate coding of facial expressions in natural environments achievable.
Learning Robotic Navigation and Interaction from Object-based Semantic Maps. Our project aims to develop new learning algorithms that enable robots to perform high-complexity tasks that are currently impossible. Unlike existing methods that rely on low-level sensor data, we aim to achieve this by learning from a high-level graph representation of the environment that captures semantics, affordances, and geometry. The outcome would be robots capable of using human instructions to efficiently learn complex interaction and navigation behaviours that transfer to unseen environments. Our research should benefit new applications in domains of economic and societal importance that are currently too complex, unsafe, and uncertain for robot assistants, such as aged care, advanced manufacturing, and domestic robotics.
Human Cues for Robot Navigation. The world has many navigational cues for the benefit of humans: signposts, maps, and the wealth of information on the internet. Yet, to date, robotic navigation has made little use of this abundant symbolic information as a resource. This project will develop a robot navigation system that can navigate using information beyond the robot's range sensors by incorporating knowledge gained by reading room labels, following human route directions, or interpreting maps found on the web. This project will demonstrate the robot's navigation ability by comparing its performance with that of a human as it learns to find its way around campus by asking for directions, reading signs and maps, and searching the internet for clues.
Omniscient face recognition for uncooperative subjects. The outcomes of this project will enable effective video surveillance technology to be developed for use by law enforcement and national security agencies. It will lead to reliable identification of humans at a distance by automatically detecting and recognising faces, for use in counter-terrorism surveillance and commercial robot-human interfaces.
Lifelong robotic navigation using visual perception. Service robots are becoming a major part of our working and personal environments, in much the same way as personal computers already have. This project will develop new methods of practical and useful robot navigation that will enable Australia's industries and services to remain internationally competitive.
One shot three-dimensional reconstruction of human anatomy and motion. This project aims to accurately estimate the three-dimensional (3D) structure of non-rigid human anatomy. Although computer vision has advanced the area of structure from motion, current approaches cannot accurately and densely reconstruct people. This project will create dense 3D reconstruction techniques that can manage non-rigid human anatomy using only two-dimensional images from medical imaging devices (X-rays and video sequences) in one shot, that is, from a single image. This approach is expected to be applied to the 3D visualisation of X-rays in clinical practice, human pose estimation, and 3D planning for minimally invasive orthopaedic surgery.
Unlocking Mass Mobile Video Analytics with Advanced Neural Memory Networks. This project will develop neural memory architectures and dense spatial-temporal bundle adjustment to predict movement and behaviour, and to perform multi-sensor fusion across large asynchronous video feeds. This capability will allow us to better interrogate and analyse the mass of video recorded by the vast number of smartphones, action cameras, and surveillance cameras present at public events of interest. Outcomes include the ability to ingest multiple video feeds into a dense and dynamic 3D reconstruction for knowledge representation and discovery, and the analysis of events and behaviour through new spatio-temporal analytic approaches. This will offer significant benefits for video forensic analysis, policing, and emergency response.
Deep reinforcement learning for discovering and visualising biomarkers. This project aims to develop novel methods for discovering and visualising optimal biomarkers from chest computed tomography images, based on extensions of recently developed deep reinforcement learning techniques. The extensions proposed in this project will advance medical image analysis by allowing efficient analysis of high-dimensional inputs at their original high resolution. In addition, this project will be the first approach capable of discovering previously unknown biomarkers associated with important clinical outcomes. The project will validate the approach on a real-world case study dataset concerning the prediction of five-year survival in chronic disease.
Automated analysis of multi-modal medical data using deep belief networks. This project will develop an improved breast cancer computer-aided diagnosis (CAD) system that incorporates mammography, ultrasound, and magnetic resonance imaging. This system will be based on recently developed deep learning techniques, which have the capacity to process multi-modal data in a unified and optimal manner. The advantage of this technique is that it is able to automatically learn both the relevant features to analyse in each modality and the hidden relationships between them. The use of deep belief networks has produced promising results in several fields, such as speech recognition, and so we believe this approach has the potential to improve both the sensitivity and specificity of breast cancer detection.
Square Eyes or All Lies? Understanding Children's Exposure to Screens. This project will examine Australian parents' number one concern about their children's health and behaviour: their interactions with electronic screens. Current screen time guidelines are based on low-quality evidence and lack the nuance required to address this complex issue. This project will use innovative technology to resolve these weaknesses. Wearable cameras will measure what children are doing on screens, and where, when, and for how long they are doing it. The project will also investigate how screen time affects children's development and how it is influenced by their environment. This evidence will benefit children by improving screen time guidelines and help parents understand the impact of screen time on children's development.