Discovery Early Career Researcher Award - Grant ID: DE230100473
Funder
Australian Research Council
Funding Amount
$410,154.00
Summary
Effective integration of human and automated analyses for security testing. This DECRA project aims to significantly improve the performance of current state-of-the-art automated security testing approaches, enabling them to discover more security bugs under strict time constraints. The key innovation of the project is its novel way of embracing the human element to leverage the ingenuity of developers. This project will help companies improve the security and reliability of their products, thwarting cyberattacks that cost Australian businesses $29 billion each year. The knowledge from this project will be transferred and integrated into higher education subjects to train the next generations of software developers, who are responsible for building the security-critical systems that we all rely on now and in the future.
Discovery Early Career Researcher Award - Grant ID: DE230100477
Funder
Australian Research Council
Funding Amount
$421,554.00
Summary
Advancing Human Perception: Countering Evolving Malicious Fake Visual Data. The aim of this project is to provide new, effective, and generalisable deepfake detection methods for automatically detecting maliciously manipulated visual data generated by misused artificial intelligence (AI) techniques. It will present innovative computer vision and image processing knowledge and techniques, enabling the developed methods to advance human perception in recognising fake data, enhance cybersecurity, and protect privacy in AI applications. The anticipated outcomes should provide significant benefits to a wide range of applications, such as providing timely alerts to the media, government organisations, and industry about misleading fake visual data, and preventing financial crimes involving synthetic identity fraud.
Rigorous Privacy Compliance in Modern Application Ecosystems. Modern network applications such as mobile applications and browser extensions have become the primary gateways for consumers to access the Internet in today’s digital landscape. This project aims to address privacy issues in these ecosystems by developing a new privacy-compliance assessment framework. The framework will evaluate the current privacy practices of application ecosystems, enabling users and developers in Australia and worldwide to reliably identify potential privacy risks and issues in their applications. The intended outcomes should endow data controllers with the capability of evidencing their compliance with data protection legislation such as the Australian Privacy Act 1988 and the EU General Data Protection Regulation (GDPR).
Discovery Early Career Researcher Award - Grant ID: DE230101058
Funder
Australian Research Council
Funding Amount
$437,254.00
Summary
Glass-box Deep Machine Perception for Trustworthy Artificial Intelligence. Explainability and Transparency are key values for the development and deployment of Artificial Intelligence (AI) in Australia’s AI Ethics Framework for industry and governments. This project aims to build new tools to make the central technology of AI - deep learning - transparent and explainable. Its expected outputs are novel theory-driven algorithms and unconventional foundational blocks for deep learning that will allow humans to clearly interpret the reasoning process of this technology, which is currently not possible. It is expected to significantly advance our knowledge in machine intelligence and perception. Due to their fundamental nature, the project outcomes are likely to benefit industry and scientific frontiers alike.
Preventing Exfiltration of Sensitive Data by Malicious Insiders or Malware. Data exfiltration is a serious threat, as highlighted by recent leakages of sensitive data that resulted in huge economic losses as well as unprecedented breaches of national security. The aim of this project is to develop a comprehensive and robust solution for the detection and prevention of sensitive data exfiltration attempts by malware and unauthorised human users. Expected outcomes include scalable monitoring methods and efficient algorithms that will be able to prevent exfiltration in real time and identify previously undetected exfiltration of sensitive data. This should provide significant benefits to governments and defence networks, as well as businesses and the health sector, by protecting them from sophisticated cyberattacks.