Discovery Early Career Researcher Award - Grant ID: DE230101329
Funder
Australian Research Council
Funding Amount
$432,355.00
Summary
Trading Privacy, Bandwidth and Accuracy in Algorithmic Machine Learning. This project aims to investigate the trade-offs between privacy, communication costs and accuracy of results when learning from users' sensitive data. The project intends to design faster and more accurate algorithms for a wide range of machine learning tasks by developing a novel and widely-applicable algorithmic framework. Expected outcomes of this project include new theoretical tools to guide the design of data-driven decision systems and rigorously analyse their performance and privacy guarantees. Privacy of individuals' information in data analytics pipelines is a key societal concern. This project should lead to significant benefits by strengthening privacy in these pipelines while also improving accuracy and cost-efficiency.
Attribution of Machine-generated Code for Accountability. Machine-generated (or neural) code is produced by AI tools to speed up software development. However, such code has recently raised serious security and privacy concerns. This project aims to attribute machine-generated code to the generative models that produced it, for accountability purposes. In the process, a series of new techniques will be developed to differentiate between code generated by different models. The outcomes include analysis of neural code fingerprints, classification of neural code, and theories to verify the correctness of code attribution. These will provide significant benefits, ranging from copyright protection to privacy preservation. This project is timely, since the software community now pervasively uses machine-generated code.