Robust Defences against Adversarial Machine Learning for UAV Systems. This project aims to investigate robust defences for Unmanned Aerial Vehicle (UAV) systems to protect them against adversarial Machine Learning (ML) attacks. This project expects to generate new knowledge in the area of cybersecurity using innovative approaches to safeguard UAV systems from attacks that exploit vulnerabilities in ML models. The expected outcomes of this project include improved techniques for understanding and developing robust ML models and enhanced capacity to design secure UAV systems. This should provide significant benefits, such as improving the security of UAV technology and increasing the reliable use of UAVs for transport and logistics services to support urban and regional communities in Australia.
Discovery Early Career Researcher Award - Grant ID: DE230100477
Funder
Australian Research Council
Funding Amount
$421,554.00
Summary
Advancing Human Perception: Countering Evolving Malicious Fake Visual Data. The aim of this project is to provide new, effective, and generalisable deepfake detection methods for automatically detecting maliciously manipulated visual data generated by misused artificial intelligence (AI) techniques. It will present innovative computer vision and image processing knowledge and techniques, enabling the developed methods to advance human perception in recognising fake data, enhance cybersecurity, and protect privacy in AI applications. The anticipated outcomes should provide significant benefits to a wide range of applications, such as providing timely alerts to the media, government organisations, and industry about misleading fake visual data, and preventing financial crimes involving synthetic identity fraud.