Causal Knowledge-Empowered Adaptive Federated Learning. Federated learning is a promising framework for collaborative machine learning (ML) that also maintains data privacy; however, its ability to model heterogeneous data remains a key challenge. This project aims to develop a new learning scheme for coordinated training of ML models that successfully bridges variable data distributions. The proposed framework will be the first globally that can use causal knowledge to 1) handle data heterogeneity across devices and 2) address the real-world challenge that arises when only a subset of devices have labelled data. Expected outcomes and benefits include the theoretical underpinnings and algorithms for causality-based collaborative training of ML models while better preserving users' data privacy.
Discovery Early Career Researcher Award - Grant ID: DE230100495
Funder: Australian Research Council
Funding Amount: $422,154.00
Summary
Structured Federated Learning for Personalised Intelligence on Devices. The project aims to develop a new structured federated machine-learning framework to enhance the customisation of artificial intelligence across mobile and smart devices. It seeks to enable users to receive customised services on their devices without sending their sensitive personal data to a cloud service provider. Anticipated benefits include greater privacy, data security and device performance, as well as better end-user experience. Expected outcomes of this research include new knowledge, toolkits and algorithms for developing secure, efficient and fault-tolerant machine-learning technologies for software applications, mobile services, cloud computing, autonomous vehicles and advanced manufacturing processes.
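Federated frameworks like the two above build on a server that aggregates model updates trained locally on each device. As a point of reference only, the canonical federated averaging (FedAvg) loop can be sketched as follows; this is a minimal illustration with a hypothetical linear-regression client objective, not the structured or causal method either project proposes.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain linear regression fitted by
    gradient descent on that client's private data, which never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: collect each client's locally trained weights and
    average them, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
```

Only weights travel to the server, which is the source of the privacy benefit both summaries describe; heterogeneous (non-IID) client data is exactly where this plain average degrades and where structured or causal aggregation aims to help.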
Deep Adder Networks on Edge Devices. This project aims to empower edge devices with intelligence by developing advanced deep neural networks that address the conflict between the high resource requirements of deep learning and the generally limited compute capacity of edge devices. Multiplication has been the dominant operation in deep learning, though addition is known to be much cheaper. This project expects to yield theories and algorithms that allow deep neural networks consisting of nearly pure additions to fulfil the requisites of accuracy, robustness, calibration and generalisation in real-world computer vision tasks. The success of this project will benefit deep learning-based products on smartphones or robots in health and cybersecurity.
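The core substitution behind adder networks is replacing the multiply-accumulate of a conventional layer with a negative L1 distance between inputs and filters, so the forward pass needs only additions and absolute values. The sketch below (hypothetical function name, NumPy for clarity rather than a hardware-level implementation) shows that substitution on a fully connected layer.

```python
import numpy as np

def adder_layer(X, W):
    """Adder-network style layer: instead of the dot product X @ W.T,
    similarity is the negative L1 distance between each input row and
    each filter, computed with additions and absolute values only.

    X: (batch, in_features); W: (out_features, in_features).
    Returns: (batch, out_features); larger (closer to 0) = more similar.
    """
    # Broadcast to (batch, out_features, in_features), reduce over features.
    return -np.abs(X[:, None, :] - W[None, :, :]).sum(axis=-1)
```

An input identical to a filter scores 0, the maximum, which is why the output can stand in for a convolution/matmul response; the research questions the summary lists (accuracy, calibration, generalisation) concern how well networks stacked from such layers can be trained.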
Modelling Adversarial Noise for Trustworthy Data Analytics. Adversarial robustness is a core property of trustworthy machine learning. This project aims to equip machines with the ability to model adversarial noise for defending against adversarial attacks. The project expects to produce the next great step for artificial intelligence: the potential to robustly explore and exploit deceptive data. Expected outcomes of this project include theoretical foundations for modelling adversarial noise and the next generation of intelligent systems that accommodate data in noisy and hostile environments. This should benefit science, society and the economy nationally and internationally through applications that can trustworthily analyse complex data.
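The "adversarial noise" to be modelled is, in its simplest standard form, a small input perturbation crafted to increase a classifier's loss. A minimal sketch of one classical attack, the Fast Gradient Sign Method (FGSM), on a logistic-regression classifier (an illustrative baseline, not this project's modelling approach):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, y, eps=0.25):
    """FGSM on a linear classifier p(y=1|x) = sigmoid(w . x).

    The cross-entropy loss gradient w.r.t. the input is (p - y) * w;
    stepping the input along its sign maximally increases the loss
    within an L-infinity ball of radius eps."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

Defences that "model adversarial noise" go beyond this: rather than only training against such perturbations, they try to learn the distribution or structure of the noise itself.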
Discovery Early Career Researcher Award - Grant ID: DE230101591
Funder: Australian Research Council
Funding Amount: $419,154.00
Summary
Towards Real-world Continual Learning on Unrestricted Task Streams. This project aims to enable machines to continually learn without forgetting and to accumulate knowledge from sequential data streams containing diverse tasks. This project expects to advance continual learning to unrestricted real-world task streams that are long-term and complex, and to move artificial intelligence toward human-level intelligence that can automatically evolve during interaction with the world. Expected outcomes of this project include a paradigm-shifting continual learning framework and techniques for handling unrestricted task streams in real-world scenarios. They will benefit society and the economy nationally and internationally by enhancing the applicability of artificial intelligence.
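"Learning without forgetting" has several established mechanisms; one widely used baseline, elastic weight consolidation (EWC), penalises moving parameters that mattered for earlier tasks. The sketch below is purely illustrative of that idea (the summary does not specify the project's own anti-forgetting technique), with a diagonal Fisher information vector as the importance estimate.

```python
import numpy as np

def ewc_penalty(w, w_star, fisher, lam=1.0):
    """EWC-style regulariser: quadratic penalty anchoring parameters that
    were important to earlier tasks (high diagonal Fisher information)
    near their previously learned values w_star."""
    return 0.5 * lam * float(np.sum(fisher * (w - w_star) ** 2))

def ewc_grad(w, w_star, fisher, lam=1.0):
    """Gradient of the penalty, to be added to the new task's loss
    gradient during training on the next task in the stream."""
    return lam * fisher * (w - w_star)
```

Parameters with zero importance move freely for the new task; parameters with high importance are pulled back toward their old values, trading plasticity against forgetting.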
Generative Visual Pre-training on Unlabelled Big Data. This project aims to develop generative visual pre-training of large-scale deep neural networks on unlabelled big data. Developing pre-trained visual models that are accurate, robust and efficient for downstream tasks is a keystone of modern computer vision, but it exposes challenges and knowledge gaps in existing unsupervised representation learning. Expected outcomes include new theories and algorithms for unsupervised visual pre-training, which are anticipated to deepen our understanding of visual representation and make it easier to build and deploy computer vision applications and services. Examples of benefits include modernising machines in manufacturing and farming with visual intelligence.
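Generative pre-training on unlabelled data works by turning the data itself into supervision: hide part of each input and train the model to reconstruct what was hidden. A toy version of that masked-reconstruction objective (hypothetical function name; a stand-in for the large-scale objectives this project would study):

```python
import numpy as np

def masked_reconstruction_loss(x, reconstruct, mask_ratio=0.5, rng=None):
    """Self-supervised pre-training signal in miniature: zero out a random
    subset of input entries, then score the model only on how well it
    reconstructs the hidden ones. No labels are needed."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) < mask_ratio   # True = hidden from the model
    x_visible = np.where(mask, 0.0, x)
    x_hat = reconstruct(x_visible)
    return float(np.mean((x_hat[mask] - x[mask]) ** 2))
```

A model that merely copies its visible input scores badly, while one that has learned the data's structure can fill in the hidden entries; that pressure is what produces transferable visual representations.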
Toward Human-guided Safe Reinforcement Learning in the Real World. This project aims to investigate human-guided safe reinforcement learning (RL). Safe RL is an important topic that could enable real applications of RL systems by addressing safety constraints. Existing safe RL assumes the availability of specified safety constraints in mathematical or logical forms. This project proposes to study learning safety objectives from information provided directly by humans or indirectly via language models, and human-guided continuous correction for safety improvements. The established theories and developed algorithms will advance frontier technologies in AI and contribute to a wide range of real applications of safe RL, such as robotics and autonomous driving, bringing enormous social and economic benefits.
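The "safety constraints in mathematical form" that existing safe RL assumes are typically enforced by a Lagrangian relaxation: the agent maximises reward minus a multiplier times the safety cost, and the multiplier rises whenever the cost exceeds its budget. A minimal dual-ascent sketch of that baseline (hypothetical names; not this project's human-guided algorithm):

```python
import numpy as np

def lagrangian_step(reward_grad, cost_grad, lam, cost_estimate, budget,
                    lr=0.1, lam_lr=0.01):
    """One step of Lagrangian constrained RL.

    The policy follows the gradient of (reward - lam * cost); the dual
    variable lam is pushed up while the estimated safety cost exceeds
    its budget, and decays toward zero (clipped at 0) otherwise."""
    policy_update = lr * (reward_grad - lam * cost_grad)
    lam_new = max(0.0, lam + lam_lr * (cost_estimate - budget))
    return policy_update, lam_new
```

The project's premise is precisely that `budget` and the cost signal may not be specifiable up front, motivating learning them from direct human feedback or language models instead.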
Learning to Reason in Reinforcement Learning. Deep Reinforcement Learning (RL) uses deep neural networks to represent and learn optimal decision-making policies for intelligent agents in complex environments. However, most RL approaches require millions of episodes to converge to good policies, making RL resource-intensive and difficult to apply in real-world scenarios. This project aims to equip RL with capabilities such as counterfactual reasoning and outcome anticipation to significantly reduce the number of interactions required, improve generalisation, and provide the agent with the capability to reason about cause and effect. These improvements would narrow the gap between AI and human capabilities and broaden the adoption of RL in real-world applications.