Causal Knowledge-Empowered Adaptive Federated Learning. Federated learning is a promising framework for collaborative machine learning (ML) that preserves data privacy; however, modelling heterogeneous data remains a key challenge. This project aims to develop a new scheme for the coordinated training of ML models that bridges variable data distributions. The proposed framework will be the first to use causal knowledge to 1) handle data heterogeneity across devices and 2) address the real-world challenge that only a subset of devices has labelled data. Expected outcomes and benefits include the theoretical underpinnings and algorithms for causality-based collaborative training of ML models that better preserves users’ data privacy.
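To make the setting concrete, the sketch below shows plain federated averaging (FedAvg), the standard baseline that a causal-knowledge-empowered scheme would extend: each device trains locally on its own (heterogeneous) data, and a server averages the resulting models without ever seeing the raw data. The toy linear model, client distributions, and all variable names are illustrative assumptions, not the project's actual method.

```python
# Minimal FedAvg sketch on heterogeneous clients — illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear regression model."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

# Heterogeneous clients: each draws inputs from a shifted distribution,
# mimicking non-identical data across devices.
true_w = np.array([1.0, -2.0])
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):
    # Each client trains locally; the server averages the local models,
    # weighted by client data size. Raw data never leaves the device.
    local = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local, axis=0, weights=sizes)

print(np.round(w_global, 2))
```

Under this simple setup the averaged model recovers the shared signal despite the shifted client distributions; the harder heterogeneity this project targets arises when the clients' underlying relationships themselves differ, which is where causal knowledge is meant to help.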
Modelling Adversarial Noise for Trustworthy Data Analytics. Adversarial robustness is a core property of trustworthy machine learning. This project aims to equip machines with the ability to model adversarial noise so as to defend against adversarial attacks. The project expects to take the next major step for artificial intelligence: the capacity to robustly explore and exploit deceptive data. Expected outcomes of this project include theoretical foundations for modelling adversarial noise and a next generation of intelligent systems that can accommodate data in noisy and hostile environments. This should benefit science, society, and the economy, nationally and internationally, through applications that trustworthily analyse complex data.
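As a concrete illustration of the adversarial noise in question, the sketch below crafts a perturbation with the fast gradient sign method (FGSM), a standard attack in the literature: a small input change in the direction that most increases the loss can erase a classifier's confidence. The toy logistic-regression classifier and all numbers are illustrative assumptions, not the project's system.

```python
# FGSM sketch: adversarial noise against a toy linear classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" linear classifier: w·x + b > 0 predicts class 1.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([1.0, 0.5])  # clean input, confidently class 1
y = 1.0
eps = 0.5                 # perturbation budget

# Gradient of the logistic loss with respect to the input x.
p_clean = sigmoid(w @ x + b)
grad_x = (p_clean - y) * w

# FGSM: step eps in the sign direction that increases the loss.
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)

print(round(p_clean, 3), round(p_adv, 3))
```

Here the clean input is classified as class 1 with about 0.82 confidence, while the perturbed input lands exactly on the decision boundary (confidence 0.5): a small, structured perturbation defeats the model. Modelling how such noise is generated, rather than only its effect, is the kind of capability this project pursues for building defences.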