Exploiting Geometries of Learning for Fast, Adaptive and Robust AI. This project aims to uniquely exploit geometric manifolds in deep learning to advance the frontier of Artificial Intelligence (AI) research and applications in cybersecurity and general cognitive tasks. It expects to develop new theories, algorithms, tools, and technologies for machine learning systems that are fast, adaptive, lifelong and robust, even with limited supervision. Expected outcomes will enhance Australia's capability and competitiveness in AI, and deliver robust and trustworthy learning technology. The project should provide significant benefits not only in advancing scientific and translational knowledge but also in accelerating AI innovations, safeguarding cyberspace, and reducing the burden of defence expenditure in Australia.
Stochastic Construction of Error Correcting Codes with Application to Digital Communications. Modern society would be unrecognisable without error correcting codes; mobile telephones, storage devices such as DVDs and high speed data communications simply would not exist. Yet most theoretical results on error correcting codes are asymptotic in nature and ignore computational complexity issues, that is, they are not representative of many real life situations. By building on recent breakthroughs in statistics and stochastic optimisation, this project will develop algorithms for designing optimised error correcting codes subject to realistic finite data length and computational complexity constraints. Successful outcomes will lead to enhanced data communications and storage, greatly benefiting industry and consumers alike.
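To illustrate what an error correcting code does in practice, here is a minimal sketch of the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit. This is a standard textbook example, not the stochastic construction the project itself proposes:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Bit layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    err = s1 + 2 * s2 + 4 * s3      # 1-indexed error position; 0 = no error
    if err:
        c = c[:]
        c[err - 1] ^= 1             # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

# A single bit flip introduced by the channel is corrected:
word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                        # channel corrupts one bit
recovered = hamming74_decode(word)  # → [1, 0, 1, 1]
```

Real deployed codes (LDPC, turbo, polar) are far longer and are exactly where the finite-length, complexity-constrained design questions raised above become hard.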
Causal Knowledge-Empowered Adaptive Federated Learning. Federated learning tools are a promising framework for collaborative machine learning (ML) that also maintain data privacy; however, their ability to model heterogeneous data remains a key challenge. This project aims to develop a new learning scheme for coordinated training of ML models that successfully bridges variable data distributions. The framework proposed will be the first globally that can use causal knowledge to 1) handle data heterogeneity across devices and 2) address the real-world challenges when only a subset of devices have labelled data. Expected outcomes and benefits include the theoretical underpinnings and algorithms of causality-based collaborative training of ML models while better preserving the users' data privacy.
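The coordination step in federated learning is often federated averaging (FedAvg): each device trains locally and the server combines the resulting parameters, weighted by local dataset size, without ever seeing the raw data. A minimal sketch of that aggregation step (the client parameters here are hypothetical placeholders, not the project's causal scheme):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client model parameters into a
    global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with heterogeneous amounts of local data:
w_a, w_b = np.array([1.0, 2.0]), np.array([3.0, 4.0])
global_w = fed_avg([w_a, w_b], [10, 30])
# → pulled towards the larger client: [2.5, 3.5]
```

Data heterogeneity is precisely the setting where this plain weighted average degrades, which motivates the causal-knowledge-driven alternatives the project describes.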
A Generic Framework for Verifying Machine Learning Algorithms. This project aims to discover new ways to verify whether decisions made by Artificial Intelligence and Machine Learning algorithms comply with the specifications set by their designers and/or regulatory bodies. The project also provides new methods to align algorithm decisions when they are found to be non-compliant. The outcomes will include new machine learning theories and frameworks for algorithmic assurance. The significance of the project is that it will offer a crucial platform for certifying algorithms and thus benefit society and businesses in selecting the right Artificial Intelligence algorithms.
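One concrete form such verification can take is checking a behavioural specification against a trained model by testing it on sampled inputs. The sketch below checks a hypothetical monotonicity rule (increasing one feature should never lower the model's score); the model, feature name, and rule are illustrative assumptions, not the project's actual framework:

```python
def verify_monotone(model, samples, feature):
    """Check a simple behavioural specification on sampled inputs:
    increasing `feature` must never decrease the model's score.
    Returns (True, None) if all samples pass, else (False, counterexample)."""
    for x in samples:
        x_hi = dict(x)
        x_hi[feature] = x[feature] + 1.0
        if model(x_hi) < model(x):
            return False, x  # counterexample found
    return True, None

# A toy model that satisfies the specification:
model = lambda x: 2.0 * x["income"] + 0.5
ok, cex = verify_monotone(model, [{"income": float(v)} for v in range(5)], "income")
# ok → True; a model scoring -x["income"] would instead yield a counterexample
```

Sampling-based checks like this give falsification, not proof; formal assurance frameworks aim for guarantees over the whole input space.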
The dog that didn't bark: a Bayesian account of reasoning from censored data. This project aims to develop and test a new computational theory of inductive reasoning. Inductive reasoning involves extending knowledge from known to novel instances, and is a central component of intelligent behaviour. This project will address the cognitive mechanisms that allow people to draw inferences based on both observed and censored evidence. The project intends to test the model through an extensive program of experimental investigation and computational modelling. The anticipated benefits include an enhanced understanding of human inference, especially in domains such as the evaluation of forensic or financial evidence, where data censoring is common.
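The title's allusion is to the Sherlock Holmes story in which a dog's silence is itself evidence. In Bayesian terms, the *absence* of an event updates beliefs just as an observation does. A minimal sketch with hypothetical numbers (the hypotheses and probabilities are illustrative, not the project's model):

```python
def posterior(prior, likelihood):
    """Bayes' rule: combine a prior over hypotheses with the likelihood
    of the evidence (here, the dog NOT barking) under each hypothesis."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())          # normalising constant
    return {h: p / z for h, p in joint.items()}

prior = {"stranger": 0.5, "familiar": 0.5}
# A dog almost always barks at strangers, rarely at familiar people,
# so silence is far likelier under the "familiar" hypothesis:
p_silent = {"stranger": 0.1, "familiar": 0.9}
post = posterior(prior, p_silent)
# → silence shifts belief to "familiar": {'stranger': 0.1, 'familiar': 0.9}
```

Censored data generalises this: what you *didn't* see depends on how the data were sampled, and a rational reasoner must model that sampling process.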
Where do inductive biases come from? A Bayesian investigation. This project aims to investigate the origin of our thinking and learning biases using state-of-the-art mathematical models and sophisticated experimental designs. Expected outcomes include bridging the gap between human and machine learning by pairing mathematical modelling with experimental work, forming a necessary step toward the development of machine systems that can reason like people do. This will provide significant benefits such as understanding how people operate so effectively in real environments, when even the most powerful computers struggle to handle the complexities of everyday learning problems.
Expanding the Foundation of Planetary Science. Our understanding of the Solar System is based on a foundation of meteorite analyses. Knowing their orbital origin provides a critical spatial context, but we have this data for <0.1% of samples. This project aims to address this issue. There are 66 meteorite falls across Australia with orbits determined by the Desert Fireball Network that await recovery - more than the current global dataset. This project expects to generate new knowledge by applying an innovative search methodology using drones and machine learning. Expected outcomes include dramatically increasing the number of orbital meteorites. This should provide significant benefits. By linking meteorites to their parent asteroids every rock becomes a small sample-return mission.
Learning from others: Inductive reasoning based on human-generated data. Most of the data we see every day, from politics to gossip, comes from other people. Making inferences about such data is difficult because the people who provided it may have biases or limitations in their knowledge that we do not know about and must figure out. This project uses a series of experiments tied to normative computational models of social reasoning to explore how people solve this problem. This work has the potential to make a major impact in understanding how information is understood and shared, especially when it is about topics that people lack firsthand knowledge about, like climate change. The computational models also have applications to the development of expert systems upon which our information economy relies.
Nonparametric Machine Learning for Modern Data Analytics. This project intends to develop next-generation machine-learning methods to cope with the growing data deluge. Modern data analytics tasks need to interpret and derive value from complex, growing data. Intended outcomes of the project include new Bayesian nonparametric methods that can express arbitrary dependency amongst multiple, heterogeneous data sources with infinite model complexity, together with algorithms to perform inference and deduce knowledge from them; new Bayesian statistical inference for set-valued random variables that moves beyond vectors and matrices to enrich our analytics toolbox to deal with sets; and a new deterministic fast inference to meet real-world demands.
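"Infinite model complexity" in the Bayesian nonparametric sense means the number of model components is unbounded and grows with the data. The canonical illustration is the Chinese restaurant process, the clustering prior induced by a Dirichlet process; the sketch below samples from it (parameter values are illustrative):

```python
import random

def crp(n, alpha, seed=0):
    """Chinese restaurant process: assign n items to an unbounded number
    of clusters. Item i joins existing cluster k with probability
    size_k / (i + alpha), or opens a new cluster with probability
    alpha / (i + alpha)."""
    random.seed(seed)
    counts = []        # current cluster sizes
    assignments = []
    for i in range(n):
        probs = [c / (i + alpha) for c in counts] + [alpha / (i + alpha)]
        k = random.choices(range(len(probs)), weights=probs)[0]
        if k == len(counts):
            counts.append(1)       # a brand-new cluster appears
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp(100, alpha=1.0)
# The number of distinct clusters is not fixed a priori; in expectation it
# grows roughly like alpha * log(n) as more data arrive.
```

This "let the data choose the complexity" behaviour is what the dependency-modelling and set-valued extensions described above build on.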
Making human place knowledge digestible by computers. This project aims to develop the tools that will enable people to interact intuitively with computers about places and the relations between places. People understand their environment in a different way to computers; they think of places and their relations, while computers use coordinates and maps. People's interaction with maps is cognitively costly and error-prone, which is becoming untenable in situations needing time-critical decision making. The project will revolutionise the design of information services where computers deal with humans and location in time-critical or stressful situations, including emergency calls, disaster response and local search queries. The uptake of this design by industry will lead to economic benefits as well as a safer society living in a smarter environment.