Making Meta-learning Generalised. This project aims to develop novel machine learning techniques, termed generalised meta-learning, to make machines better utilise past experience to solve new tasks with little data. It expects to reduce the undesirable dependence of current machine learning on labelled data and significantly expand its application scope. Expected outcomes of the project consist of new theoretical results on meta-learning and a set of innovative algorithms that can support the building of the next generation of computer vision systems to work in open and dynamic environments. This should produce solid benefits to the science, society, and economy of Australia via the application of these advanced intelligent systems.
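The abstract names meta-learning but no specific algorithm. Purely as an illustration of the kind of "learn an initialisation that adapts from little data" idea it builds on, the following first-order MAML-style sketch on toy 1-D regression tasks (synthetic data, all names hypothetical) is not the project's method:

```python
# A minimal first-order MAML-style sketch on toy 1-D linear regression tasks.
# Each "task" is y = a*x + b with task-specific (a, b); the meta-learner finds
# an initialisation that adapts to a new task from a few support points.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    a, b = rng.uniform(-2, 2, size=2)
    return a, b

def sample_data(a, b, n=5):
    x = rng.uniform(-1, 1, size=n)
    return x, a * x + b

def loss_and_grad(w, x, y):
    # Model: y_hat = w[0]*x + w[1]; squared-error loss and its gradient.
    pred = w[0] * x + w[1]
    err = pred - y
    loss = np.mean(err ** 2)
    grad = np.array([2 * np.mean(err * x), 2 * np.mean(err)])
    return loss, grad

w = np.zeros(2)                           # meta-initialisation
inner_lr, meta_lr = 0.1, 0.01

for step in range(2000):
    a, b = sample_task()
    x_s, y_s = sample_data(a, b)          # support set (few examples)
    x_q, y_q = sample_data(a, b)          # query set
    _, g = loss_and_grad(w, x_s, y_s)
    w_adapted = w - inner_lr * g          # one inner adaptation step
    _, g_meta = loss_and_grad(w_adapted, x_q, y_q)
    w -= meta_lr * g_meta                 # first-order meta-update

print("meta-learned initialisation:", w)
```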
Small Scalable Natural Language Models using Explicit Memory. Deep neural networks have had spectacular success in natural language processing, seeing widespread deployment as part of automatic assistant devices in homes and cars, and across many valuable industries including finance, medicine and law. Fuelling this success is the use of ever larger models, with exponentially increasing training resources and accompanying hardware and energy demands. This project aims to develop more compact models, based on the incorporation of an explicit searchable memory, which will dramatically reduce model size, hardware requirements and energy usage. This will make modern natural language processing more accessible, while also providing greater flexibility, allowing for more adaptable and portable technologies.
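The abstract does not say how the explicit searchable memory is realised. One common way to pair a compact parametric model with an external memory is kNN-style retrieval over stored (context embedding, next token) pairs; the sketch below (synthetic data, hypothetical names) illustrates only that general idea:

```python
# A minimal sketch of a kNN-style explicit memory for next-token prediction.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, k = 100, 16, 4

# Datastore: (context embedding, next token) pairs, built offline from a corpus.
memory_keys = rng.normal(size=(1000, dim))
memory_vals = rng.integers(0, vocab, size=1000)

def small_model_probs(context_emb):
    # Stand-in for a compact parametric model's next-token distribution.
    logits = rng.normal(size=vocab)
    return np.exp(logits) / np.exp(logits).sum()

def knn_probs(context_emb):
    # Retrieve the k nearest stored contexts and turn their distances into a
    # distribution over the next tokens recorded for them.
    d = np.linalg.norm(memory_keys - context_emb, axis=1)
    idx = np.argsort(d)[:k]
    weights = np.exp(-d[idx])
    probs = np.zeros(vocab)
    np.add.at(probs, memory_vals[idx], weights)
    return probs / probs.sum()

def predict(context_emb, lam=0.5):
    # Interpolate the small parametric model with the explicit memory.
    return lam * knn_probs(context_emb) + (1 - lam) * small_model_probs(context_emb)

print(predict(rng.normal(size=dim)).argmax())
```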
3D Vision Geometric Optimisation in Deep Learning. This project aims to develop a methodology for integrating the algorithms of 3D Vision Geometry and Optimisation into the framework of Machine Learning, and to demonstrate the wide applicability of the new methods on a variety of challenging fundamental problems in Computer Vision. These include 3D geometric scene understanding, and estimation and prediction of human 2D/3D pose and activity. Applications of this technology are to be found in Intelligent Transportation, Environment Monitoring, and Augmented Reality, applicable in smart-city planning and medical applications such as computer-enhanced surgery. The goal is to build Australia's competitive advantage at the forefront of ICT research and technology innovation.
Generative Visual Pre-training on Unlabelled Big Data. This project aims to develop generative visual pre-training of large-scale deep neural networks on unlabelled big data. Developing pre-trained visual models that are accurate, robust, and efficient for downstream tasks is a keystone of modern computer vision, but it poses challenges and exposes knowledge gaps in existing unsupervised representation learning. Expected outcomes include new theories and algorithms for unsupervised visual pre-training, which are anticipated to deepen our understanding of visual representation and make it easier to build and deploy computer vision applications and services. Examples of benefits include modernising machines in manufacturing and farming with visual intelligence.
Data Complexity and Uncertainty-Resilient Deep Variational Learning. Enterprise data exhibit increasingly significant characteristics and complexities, such as multi-aspect, heterogeneous and hierarchical features and interactions, and evolving dependencies and multiple distributions. These complexities continue to challenge state-of-the-art probabilistic and neural learning systems, whose capabilities and capacity remain limited or insufficient. This research aims to develop a theory of flexible deep variational learning that equips new deep probabilistic models with flexible variational neural mechanisms for analytically explainable, complexity-resilient analytics of real-life data. The outcomes are expected to fill important knowledge gaps and lift critical innovation competencies across a wide range of domains.
Exploiting Geometries of Learning for Fast, Adaptive and Robust AI. This project aims to uniquely exploit geometric manifolds in deep learning to advance the frontier of Artificial Intelligence (AI) research and applications in cybersecurity and general cognitive tasks. It expects to develop new theories, algorithms, tools, and technologies for machine learning systems that are fast, adaptive, lifelong and robust, even with limited supervision. Expected outcomes will enhance Australia's capability and competitiveness in AI, and deliver robust and trustworthy learning technology. The project should provide significant benefits not only in advancing scientific and translational knowledge but also in accelerating AI innovations, safeguarding cyberspace, and reducing the burden of defence expenditure in Australia.
Fairness in Natural Language Processing. Natural language processing (NLP) has achieved spectacular commercial successes in recent years, and has been deployed across an ever-increasing breadth of devices and application areas. At the same time, there is stark evidence that naively-trained models amplify biases in their training data and perform inconsistently across text relating to different demographic groupings of individuals. This project aims to systematically quantify the extent of such biases, and to develop models that are both more socially equitable and less prone to exposing private data in their learned representations. In doing so, it will make NLP more accessible to new populations of users, and remove socio-technological barriers to NLP uptake.
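The abstract does not commit to a particular bias measure. One simple way such disparities are often quantified is by comparing model performance across demographic groups; the sketch below (toy data, hypothetical field names) shows only that generic idea, not the project's metric:

```python
# A minimal sketch of quantifying per-group performance disparity.
from collections import defaultdict

# Predictions with a demographic attribute attached to each test example.
examples = [
    {"group": "A", "gold": 1, "pred": 1},
    {"group": "A", "gold": 0, "pred": 0},
    {"group": "B", "gold": 1, "pred": 0},
    {"group": "B", "gold": 0, "pred": 0},
]

correct, total = defaultdict(int), defaultdict(int)
for ex in examples:
    total[ex["group"]] += 1
    correct[ex["group"]] += int(ex["gold"] == ex["pred"])

acc = {g: correct[g] / total[g] for g in total}
print("per-group accuracy:", acc)
print("disparity (max - min):", max(acc.values()) - min(acc.values()))
```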
Automated assessment of data quality in biological knowledge resources. This project aims to develop methods for identifying poor-quality data in biological databases. Research in biomedicine is underpinned by massive databases of biological data. Data quality is largely managed through manual curation, but automated methods to assess quality are critically needed. This project expects to develop a suite of computational tools for assessing biological data quality, utilising an innovative approach based on network analysis of database record connectivity. These tools will enable data quality to be quantified at scale, so that researchers, evidence-based decision-makers in biomedicine, and the analytical or predictive tools that use these data can make more reliable inferences and decisions.
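The abstract describes network analysis of record connectivity without detailing the method. As a purely illustrative sketch of the general idea, the toy example below (invented record IDs and links) scores records by how well connected they are to the rest of the collection and flags weakly linked ones for curation review:

```python
# A minimal sketch of a connectivity-based quality signal over database records.
from collections import defaultdict

# Cross-references between record IDs (e.g., one record citing or linking another).
links = [("P1", "P2"), ("P2", "P3"), ("P1", "P3"), ("P4", "P1")]
records = {"P1", "P2", "P3", "P4", "P5"}   # P5 has no links at all

degree = defaultdict(int)
for a, b in links:
    degree[a] += 1
    degree[b] += 1

mean_deg = sum(degree.get(r, 0) for r in records) / len(records)

# Flag records whose connectivity is far below the collection average;
# isolated or weakly linked records are candidates for curation review.
for r in sorted(records):
    score = degree.get(r, 0) / mean_deg if mean_deg else 0.0
    flag = "review" if score < 0.5 else "ok"
    print(f"{r}: connectivity score={score:.2f} -> {flag}")
```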
Efficient spatial data management for enabling true ride-sharing. This data management project aims to examine ride-sharing as a model of a complex decision system that can be optimised to deliver better outcomes. Popular ride-sharing apps have quickly evolved into ride-sourcing services that are comparable to calling a taxi on a mobile phone. Such arrangements miss many of the key benefits of true ride-sharing for society. The project will model incentives by helping people agree on points of interest rather than directly seeking trips from others to set destinations. It also aims to introduce privacy-aware dynamic matching of sharers, and to expand to transportation at large, generating new shared transportation services. The expected outcome of this project is to elevate today's taxi-like ride-sharing services to true ride-sharing arrangements. This is expected to provide benefits such as reduced traffic and emissions, as well as addressing parking issues and other traffic problems.
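The abstract does not specify the matching algorithm. As a minimal sketch of matching sharers on coarse points of interest rather than exact trips (invented names and coordinates, a simple greedy pairing rather than the project's method):

```python
# A minimal sketch of matching sharers by agreed meeting points.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Each rider shares only a coarse point of interest, not their exact trip.
riders = {"alice": (0.0, 0.0), "bob": (0.2, 0.1), "carol": (5.0, 5.0), "dan": (5.1, 4.9)}

def greedy_match(riders, radius=1.0):
    unmatched = dict(riders)
    pairs = []
    while unmatched:
        name, poi = unmatched.popitem()
        best = min(unmatched, key=lambda n: dist(poi, unmatched[n]), default=None)
        if best is not None and dist(poi, unmatched[best]) <= radius:
            pairs.append((name, best))
            del unmatched[best]
    return pairs

print(greedy_match(riders))   # e.g. [('dan', 'carol'), ('bob', 'alice')]
```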
Personalised data analytics for the Internet of Me. This project aims to develop data mining methods for extracting comprehensive personalised knowledge without breaching trust. The Internet of Things will lead to the Internet of Me: billions of smart devices connected to the Internet record people’s lives. Companies wish to provide highly personalised services that engage their customers, while individuals wish to understand their health, lifestyle, education and personal performance. The challenge is to analyse individuals’ personal data and discover how they differ from and overlap with others’. This project expects to enable businesses to deepen customer satisfaction and individuals to better understand their personal place in a connected world.