Learning Robotic Navigation and Interaction from Object-based Semantic Maps. Our project aims to develop new learning algorithms that enable robots to perform high-complexity tasks that are currently impossible. Rather than relying on low-level sensor data as existing methods do, we aim to achieve this by learning from a high-level graph representation of the environment that captures semantics, affordances, and geometry. The outcome would be robots capable of using human instructions to efficiently learn complex interaction and navigation behaviours that transfer to unseen environments. Our research should benefit new applications in domains of economic and societal importance that are currently too complex, unsafe, and uncertain for robot assistants, such as aged care, advanced manufacturing and domestic robotics.
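The object-based semantic map described above can be pictured as a small graph whose nodes are objects annotated with a semantic class, affordances, and coarse geometry, and whose edges encode spatial relations. The sketch below is purely illustrative, not the project's actual representation; the class names (`ObjectNode`, `SemanticMap`), fields, and relation labels are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class ObjectNode:
    """One object in the map: semantic class, affordances, coarse geometry."""
    name: str
    semantic_class: str
    affordances: set
    centroid: tuple  # (x, y, z) position in metres

class SemanticMap:
    """A toy object-based semantic map: objects as nodes, relations as edges."""
    def __init__(self):
        self.nodes = {}
        self.edges = {}  # (object_a, object_b) -> spatial relation label

    def add_object(self, node):
        self.nodes[node.name] = node

    def relate(self, a, b, relation):
        self.edges[(a, b)] = relation

    def objects_with_affordance(self, affordance):
        """The kind of query a planner might issue: what affords this action?"""
        return [n.name for n in self.nodes.values()
                if affordance in n.affordances]

# Build a toy kitchen scene.
m = SemanticMap()
m.add_object(ObjectNode("mug_1", "mug", {"graspable", "fillable"}, (1.2, 0.4, 0.9)))
m.add_object(ObjectNode("table_1", "table", {"support"}, (1.0, 0.5, 0.0)))
m.relate("mug_1", "table_1", "on_top_of")

print(m.objects_with_affordance("graspable"))  # → ['mug_1']
```

A representation like this is far more compact than raw sensor streams, which is what makes it plausible for instruction-following policies to transfer across environments: the graph abstracts away appearance and retains task-relevant structure.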
Unlocking Mass Mobile Video Analytics with Advanced Neural Memory Networks. This project will develop neural memory architectures and dense spatio-temporal bundle adjustment to predict movement and behaviour and to perform multi-sensor fusion across large asynchronous video feeds. This capability will allow us to better interrogate and analyse mass video information recorded by the vast number of smartphones, action cameras, and surveillance cameras present at public events of interest. Outcomes include the ability to ingest multiple video feeds into a dense and dynamic 3D reconstruction for knowledge representation and discovery, and the analysis of events and behaviour through new spatio-temporal analytic approaches. This will offer significant benefits for video forensic analysis, policing, and emergency response.
Square Eyes or All Lies? Understanding Children's Exposure to Screens. This project will examine Australian parents’ number one concern about their children’s health and behaviour – their interactions with electronic screens. Current screen time guidelines are based on low-quality evidence and lack the nuance required to address this complex issue. This project will use innovative technology to resolve these weaknesses. Wearable cameras will measure what children are doing on screens, and where, when, and how long they are doing it. The project will also investigate how screen time impacts children’s development and how it is influenced by their environment. This evidence will benefit children by improving screen time guidelines, and help parents understand the impact of screen time on children’s development.
Two-way Auslan: Automatic Machine Translation of Australian Sign Language. This project aims to develop an automatic two-way machine-translation system between Auslan (Australian Sign Language) and English by researching and leveraging advanced computer vision and machine learning technology. The project expects to advance research in AI technology on topics including visual recognition, language processing and deep learning. This will boost Australia's national research capacity and global competitiveness. Expected outcomes of this project will help to break down the communication barriers between the Deaf and hearing populations. This should provide significant benefits to Deaf communities through enhanced communication and improved quality of life, leading to a fairer, more inclusive and resilient Australian society.
A Novel Automatic Neural Network Feature Extractor. This project aims to study the feature-extraction abilities of convolutional and traditional neural networks and to develop a generic feature extractor that can be applied to a wide variety of real-world image and non-image data. New concepts for automatic feature extraction, feature explanation, hybrid evolutionary algorithms and non-iterative ensemble learning will be introduced and evaluated. The expected outcomes are a generic feature extractor for automatically extracting features, an optimiser for finding optimal parameters, and a non-iterative ensemble learning technique for classifying the extracted features. The impact of this project will be automatic feature extractors and classifiers for real-world applications.
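As background to the abstract above, the core operation behind convolutional feature extraction, sliding a small kernel over the input and applying a non-linearity, can be illustrated in a few lines. This toy sketch is not the project's learned extractor; the function names and the hand-chosen edge-detection kernel are illustrative assumptions. A 1D signal is used since, as the abstract notes, the same operation applies to non-image sequence data as well as to image rows:

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation), the building block of
    convolutional feature extraction."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Rectified linear non-linearity applied elementwise."""
    return [max(0.0, x) for x in xs]

# A difference kernel responds where the input changes sharply, producing a
# "feature map" that highlights the transition — a hand-crafted stand-in for
# the filters a network would learn automatically.
signal = [0, 0, 0, 1, 1, 1]
features = relu(conv1d(signal, [-1.0, 1.0]))
print(features)  # → [0.0, 0.0, 1.0, 0.0, 0.0]
```

A learned extractor differs only in that the kernel weights are optimised from data rather than fixed by hand, which is what makes the approach generic across data types.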