Causal Knowledge-Empowered Adaptive Federated Learning. Federated learning is a promising framework for collaborative machine learning (ML) that also maintains data privacy; however, its ability to model heterogeneous data remains a key challenge. This project aims to develop a new learning scheme for the coordinated training of ML models that successfully bridges variable data distributions. The proposed framework will be the first globally to use causal knowledge to 1) handle data heterogeneity across devices and 2) address the real-world challenge that arises when only a subset of devices have labelled data. Expected outcomes and benefits include the theoretical underpinnings and algorithms for causality-based collaborative training of ML models that better preserves users' data privacy.
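The heterogeneity problem this project targets is easiest to see against the standard federated averaging baseline. Below is a minimal sketch of federated averaging (FedAvg) on a one-parameter model with two clients whose data follow different distributions; the client data, learning rate, and round count are illustrative assumptions, not the project's method.

```python
def local_update(w, data, lr=0.1, epochs=5):
    # One client's local gradient descent on a 1-D model y ≈ w * x.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def fed_avg_round(global_w, client_datasets):
    # Clients train locally; the server averages the results,
    # weighted by each client's dataset size.
    updates = [local_update(global_w, d) for d in client_datasets]
    sizes = [len(d) for d in client_datasets]
    return sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)

# Two clients with heterogeneous (non-IID) data: local optima w=2 vs w=3.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(1.0, 3.0), (2.0, 6.0)],
]
w = 0.0
for _ in range(50):
    w = fed_avg_round(w, clients)
print(round(w, 2))  # ≈ 2.5: the plain average sits between the client optima
```

The averaged model fits neither client well, which is exactly the heterogeneity gap that a causal-knowledge-informed scheme would aim to close.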
Quantum Generative Diffusion Models for Molecular Research. This project will devise quantum generative diffusion models that equip classical counterparts with the ability to harness quantum data arising naturally in molecular research. Theoretical foundations for analysing fast sampling methods, informed by inductive biases in the input data and the employed circuits, will validate efficient quantum generative diffusion models with training and sampling advantages over their classical counterparts. Outcomes include applications in molecular conformation generation, compound screening, and drug design. The research will significantly benefit Australia's science, industry and health, and will maintain Australia's leading global role in quantum machine learning and molecular research.
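For context, the classical diffusion process these quantum models extend gradually noises data toward a Gaussian, and a generative model is trained to reverse that walk. The sketch below shows only the closed-form forward marginal with a linear noise schedule; the schedule values and scalar "sample" are illustrative assumptions, and none of this is quantum-specific.

```python
import math
import random

random.seed(0)

T = 100
# Linear variance schedule beta_t, a common classical choice.
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = product of (1 - beta_s) for s <= t.
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def noised(x0, t):
    # Closed-form marginal q(x_t | x_0) = sqrt(alpha_bar_t) x0 + sqrt(1 - alpha_bar_t) eps.
    eps = random.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar[t]) * x0 + math.sqrt(1.0 - alpha_bar[t]) * eps

print(round(alpha_bar[-1], 3))  # by step T, most of the signal is destroyed
```

A generative diffusion model, classical or quantum, learns the reverse of this corruption process; the project's contribution concerns doing so with quantum data and circuits.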
Exploiting Geometries of Learning for Fast, Adaptive and Robust AI. This project aims to exploit geometric manifolds in deep learning to advance the frontier of Artificial Intelligence (AI) research and its applications in cybersecurity and general cognitive tasks. It expects to develop new theories, algorithms, tools, and technologies for machine learning systems that are fast, adaptive, lifelong and robust, even with limited supervision. Expected outcomes will enhance Australia's capability and competitiveness in AI and deliver robust, trustworthy learning technology. The project should provide significant benefits not only in advancing scientific and translational knowledge but also in accelerating AI innovation, safeguarding cyberspace, and reducing the burden of defence expenditure in Australia.
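One simple instance of exploiting geometry in learning is manifold-aware optimisation: gradients are projected onto the tangent space of the manifold and the iterate is retracted back onto it. The sketch below runs Riemannian gradient descent on the unit sphere for a toy objective; the objective, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def riemannian_step(x, grad, lr=0.1):
    # Project the Euclidean gradient onto the tangent space at x,
    # take a step, then retract back onto the unit sphere.
    tangent = grad - np.dot(grad, x) * x
    x_new = x - lr * tangent
    return x_new / np.linalg.norm(x_new)

# Minimise f(x) = -x[0] subject to ||x|| = 1; the optimum is x = (1, 0, 0).
x = np.array([0.0, 1.0, 0.0])
for _ in range(200):
    grad = np.array([-1.0, 0.0, 0.0])  # Euclidean gradient of f
    x = riemannian_step(x, grad)

print(np.round(x, 2))  # converges to the constrained optimum on the sphere
```

The iterate never leaves the manifold, so the constraint is respected by construction rather than enforced by penalties, which is one reason geometric structure can make learning faster and more robust.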
Generative Visual Pre-training on Unlabelled Big Data. This project aims to develop generative visual pre-training of large-scale deep neural networks on unlabelled big data. Developing pre-trained visual models that are accurate, robust, and efficient for downstream tasks is a keystone of modern computer vision, but it poses challenges to, and exposes knowledge gaps in, existing unsupervised representation learning. Expected outcomes include new theories and algorithms for unsupervised visual pre-training, which are anticipated to deepen our understanding of visual representation and make it easier to build and deploy computer vision applications and services. Examples of benefits include modernising machines in manufacturing and farming with visual intelligence.
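A common form of generative pre-training is a masked-prediction pretext task: part of each input is hidden, and the model learns representations by reconstructing the hidden part from the visible part, with no labels involved. The sketch below uses a tiny linear predictor on synthetic correlated "pixels" to make the idea concrete; the data and least-squares "model" are illustrative assumptions, not the project's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 4-D vectors whose two halves are correlated,
# so visible pixels carry information about masked ones.
z = rng.normal(size=(500, 2))
X = np.concatenate([z, z + 0.1 * rng.normal(size=(500, 2))], axis=1)

# Pretext task: mask the second half of each input and predict it.
visible, masked = X[:, :2], X[:, 2:]

# A linear predictor fit by least squares stands in for the network;
# its weights play the role of pre-trained representation parameters.
W, *_ = np.linalg.lstsq(visible, masked, rcond=None)

recon = visible @ W
mse = float(np.mean((recon - masked) ** 2))
print(f"reconstruction MSE: {mse:.4f}")  # small, since the halves are correlated
```

The same principle, scaled up to deep networks and raw images, yields representations that transfer to downstream tasks without labelled pre-training data.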
Toward Human-guided Safe Reinforcement Learning in the Real World. This project aims to investigate human-guided safe reinforcement learning (RL). Safe RL is an important topic that could enable real-world applications of RL systems by addressing safety constraints. Existing safe RL methods assume that safety constraints are specified in mathematical or logical form. This project proposes to study learning safety objectives from information provided directly by humans or indirectly via language models, together with human-guided continuous correction for safety improvement. The established theories and developed algorithms will advance frontier AI technologies and contribute to a wide range of real-world applications of safe RL, such as robotics and autonomous driving, bringing enormous social and economic benefits.
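The pre-specified constraints that existing safe RL assumes are typically enforced with a Lagrangian penalty: the agent maximises reward minus a multiplier times cost, and the multiplier is raised whenever the cost budget is exceeded. The two-armed bandit below is a deterministic toy sketch of that device; the rewards, costs, and budget are illustrative assumptions.

```python
# Arm 0: high reward but unsafe (high cost); arm 1: lower reward, safe.
REWARD = [1.0, 0.6]
COST = [0.9, 0.1]
COST_LIMIT = 0.2  # safety budget the behaviour must satisfy on average

lam = 0.0   # Lagrange multiplier on the cost constraint
lr = 0.05
costs = []
for _ in range(500):
    # Greedy choice on the penalised objective r - lam * c.
    a = max((0, 1), key=lambda i: REWARD[i] - lam * COST[i])
    costs.append(COST[a])
    # Dual ascent: raise lam while the chosen arm exceeds the budget.
    lam = max(0.0, lam + lr * (COST[a] - COST_LIMIT))

avg_cost = sum(costs[-100:]) / 100
print(round(lam, 2), round(avg_cost, 2))  # long-run cost settles near the budget
```

This project's departure point is that COST and COST_LIMIT here are hand-specified; learning such safety objectives from human feedback or language models replaces exactly these hard-coded quantities.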
Data Complexity and Uncertainty-Resilient Deep Variational Learning. Enterprise data exhibit increasingly complex characteristics, such as multi-aspect, heterogeneous and hierarchical features and interactions, and evolving dependencies and multiple distributions. These continue to challenge state-of-the-art probabilistic and neural learning systems, whose capabilities and capacity are limited or insufficient. This research aims to develop a theory of flexible deep variational learning, combining new deep probabilistic models with flexible variational neural mechanisms for analytically explainable, complexity-resilient analytics of real-life data. The outcomes are expected to fill important knowledge gaps and lift critical innovation competencies across a wide range of domains.
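Two standard building blocks of deep variational learning are the reparameterisation trick, which keeps sampling differentiable, and the closed-form KL term of the evidence lower bound (ELBO) for a Gaussian posterior against a standard normal prior. A minimal sketch of both, with illustrative scalar parameters:

```python
import math
import random

random.seed(0)

def reparameterise(mu, log_var):
    # z = mu + sigma * eps keeps the sample differentiable w.r.t. (mu, log_var),
    # since the randomness eps is drawn independently of the parameters.
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ) in closed form.
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

print(round(kl_to_standard_normal(0.0, 0.0), 4))  # 0.0: posterior equals prior
```

Flexible variational mechanisms of the kind this project proposes generalise these fixed Gaussian forms to richer posteriors while keeping the ELBO tractable.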
Learning to Reason in Reinforcement Learning. Deep Reinforcement Learning (RL) uses deep neural networks to represent and learn optimal decision-making policies for intelligent agents in complex environments. However, most RL approaches require millions of episodes to converge to good policies, making RL difficult to apply in real-world scenarios where interactions demand significant resources. This project aims to equip RL with capabilities such as counterfactual reasoning and outcome anticipation, to significantly reduce the number of interactions required, improve generalisation, and give the agent the capability to reason about cause and effect. These improvements would narrow the gap between AI and human capabilities and broaden the adoption of RL in real-world applications.
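Outcome anticipation can be illustrated with a model-based agent that simulates candidate actions with a dynamics model instead of sampling real episodes, one established route to the sample-efficiency gains the project targets. The 1-D chain environment, depth limit, and known model below are illustrative assumptions.

```python
GOAL = 5

def model(state, action):
    # Assumed one-step dynamics model: action in {-1, +1} moves along a chain,
    # with reward 1 for stepping onto the goal state.
    next_state = state + action
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def anticipate(state, depth):
    # Best achievable return within `depth` imagined steps (exhaustive rollout).
    if depth == 0:
        return 0.0
    best = float("-inf")
    for action in (-1, +1):
        nxt, r = model(state, action)
        best = max(best, r + anticipate(nxt, depth - 1))
    return best

def plan(state, depth=6):
    # Choose the action whose imagined future return is highest:
    # zero real interactions are spent exploring the wrong direction.
    def value(a):
        nxt, r = model(state, a)
        return r + anticipate(nxt, depth - 1)
    return max((-1, +1), key=value)

print(plan(0))  # the agent anticipates the goal at +5 and moves right
```

A model-free learner would need many real episodes to discover the same preference; anticipating outcomes in imagination is what cuts the interaction count.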