Discovery Early Career Researcher Award - Grant ID: DE170100361
Funder
Australian Research Council
Funding Amount
$360,000.00
Summary
Towards reliable and robust machine learning systems. This project aims to protect machine learning systems from adversarial manipulation. Machine learning technologies are used in e-commerce, search, virtual assistants and self-driving cars. However, they are vulnerable to adversarial manipulations which are imperceptible to humans but can cause systems to fail, thereby undermining their usefulness or possibly causing disasters. This project will provide a deeper understanding of how machine learning systems can be made less vulnerable, thereby increasing the safety of future autonomous systems such as self-driving cars and autonomous robots.
Discovery Early Career Researcher Award - Grant ID: DE190100046
Funder
Australian Research Council
Funding Amount
$387,000.00
Summary
Fortifying our digital economy: advanced automated vulnerability discovery. This project aims to enable security researchers to detect critical vulnerabilities in large software systems efficiently, cost-effectively, and with known statistical accuracy. It aims to develop advanced high-performance fuzzers that thwart malware attacks, ransomware epidemics, and cyber terrorism by exposing security flaws before they can be exploited. The project will employ a well-established statistical framework from ecology research to provide fundamental insights into boosting the efficiency of software vulnerability discovery, and into the trade-off between investing more resources and gaining stronger cyber-security guarantees. As our reliance on new technologies grows ever greater, this project equips Australia to curb cybercrime cost-effectively.
View-based processing of pattern matching queries in large graphs. Graph data are ubiquitous in modern information systems. Graph pattern matching (GPM) finds the parts of a data graph that match a given pattern, with applications in many areas including knowledge discovery, public health, and crime detection. This project will develop novel techniques for the efficient processing of GPM queries in large graphs.
Discovery Early Career Researcher Award - Grant ID: DE160100584
Funder
Australian Research Council
Funding Amount
$370,000.00
Summary
Secure and Private Machine Learning. This project intends to answer the question: how can machines learn from data when participants behave maliciously for personal gain? Machine learning and statistics are used in many technologies where participants have an incentive to game the system (e.g. internet ad placement, e-commerce rating systems, credit risk in finance, health analytics and smart utility grids). However, little is known about how well state-of-the-art statistical inference techniques fare when data is manipulated by a malicious participant. The project's outcomes aim to ensure that statistical analysis remains accurate while preserving data privacy, providing theoretical foundations for secure machine learning in adversarial domains. Potential applications range from cybersecurity defences to measures for balancing security and privacy interests.
Solid-state quantum communication technology. This project will develop the quantum information devices required to create a quantum communication network for the ultra-secure transmission of data. The key technological challenge is to entangle the quantum states of two crystals separated by kilometres, and to maintain this entanglement for many seconds.
Machine learning in adversarial environments. Machine learning underpins the technologies driving the economies of both Silicon Valley and Wall Street, from web search and ad placement, to stock predictions and efforts in fighting cybercrime. This project aims to answer the question: How can machines learn from data when contributors act maliciously for personal gain?