Discovery Early Career Researcher Award - Grant ID: DE240101089
Funder
Australian Research Council
Funding Amount
$436,847.00
Summary
Trustworthy Hypothesis Transfer Learning. It is urgent to develop a new hypothesis transfer learning scheme that can overcome the potential risks of fine-tuning unreliable large-scale pre-trained models. This project aims to develop an advanced and reliable hypothesis transfer learning scheme, called Trustworthy Hypothesis Transfer Learning (TrustHTL). It will deliver a theoretically guaranteed heterogeneous hypothesis transfer learning framework for handling heterogeneous situations, a methodology for disinheriting the risks of pre-trained models, and a new fuzzy-relation-based distributional discrepancy measure for heterogeneous transfer learning scenarios. The outcomes should significantly improve the reliability of machine learning, with benefits for safe learning in data analytics.
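The summary does not describe TrustHTL's actual method; as background, a minimal generic sketch of classical hypothesis transfer learning is biased regularisation, where a target model is trained on scarce target data while being shrunk toward a pre-trained source hypothesis. The function name `htl_fit` and the toy data below are illustrative assumptions, not part of the project.

```python
import numpy as np

def htl_fit(X, y, w_src, lam=1.0):
    """Biased-regularisation hypothesis transfer learning (generic sketch):
    minimise ||Xw - y||^2 + lam * ||w - w_src||^2,
    i.e. ridge regression with the penalty centred at the source hypothesis."""
    d = X.shape[1]
    # Closed-form solution of the penalised least-squares problem.
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_src
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w_src = np.array([1.8, -0.9])          # imperfect pre-trained hypothesis
X = rng.normal(size=(10, 2))           # only a few target-domain samples
y = X @ w_true + 0.1 * rng.normal(size=10)

w_htl = htl_fit(X, y, w_src, lam=5.0)  # interpolates between data fit and w_src
```

As `lam` grows the fitted model collapses onto the source hypothesis, which is exactly the risk the project targets: an unreliable pre-trained model is inherited wholesale unless the transfer is made trustworthy.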
Causal Knowledge-Empowered Adaptive Federated Learning. Federated learning is a promising framework for collaborative machine learning (ML) that also preserves data privacy; however, modelling heterogeneous data remains a key challenge. This project aims to develop a new learning scheme for the coordinated training of ML models that successfully bridges differing data distributions. The proposed framework will be the first globally to use causal knowledge to 1) handle data heterogeneity across devices and 2) address the real-world challenge that only a subset of devices have labelled data. Expected outcomes and benefits include the theoretical underpinnings and algorithms for causality-based collaborative training of ML models while better preserving users' data privacy.
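The project's causal mechanism is not specified in the summary; as background, a minimal sketch of the baseline it builds on, federated averaging, is shown below: each client takes a few local gradient steps on its private data, and the server averages the resulting weights without ever seeing the raw data. The names `fedavg` and `local_update` and the synthetic clients are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A few local gradient-descent steps on a least-squares loss;
    the raw data (X, y) never leaves the client."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(client_data, d, rounds=20):
    """Federated averaging (generic sketch): each round, every client
    refines the current global model locally and the server averages
    the returned weights, weighted by client sample counts."""
    w = np.zeros(d)
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    for _ in range(rounds):
        local_ws = [local_update(w.copy(), X, y) for X, y in client_data]
        w = np.average(local_ws, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, 2.0])
clients = []
for _ in range(3):                       # three simulated devices
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ w_true + 0.05 * rng.normal(size=30)))

w_global = fedavg(clients, d=2)
```

When client distributions differ substantially, plain weight averaging degrades; that data-heterogeneity failure mode is what the proposed causal-knowledge-based scheme aims to address.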