Learning Software Security Analysers with Imperfect Data. This project aims to systematically investigate next-generation learning-based software security analysis to detect vulnerabilities in real-world large-scale software. The expected learning-based foundation will support the handling of imperfect data in order to provide a precise, scalable and adaptive security analysis of the critical software components, thus capturing important security vulnerabilities missed by existing approaches. The success of this project will further enhance the international competitiveness of Australian research in this important field and will benefit any Australian industry and business where software systems are deeply-rooted, such as transportation, smart homes, medical devices, defence and finance.
Toward Human-guided Safe Reinforcement Learning in the Real World. This project aims to investigate human-guided safe reinforcement learning (RL). Safe RL is an important topic that could enable real applications of RL systems by addressing safety constraints. Existing safe RL assumes the availability of specified safety constraints in mathematical or logical forms. This project proposes to study learning safety objectives from information provided directly by humans or indirectly via language models, and human-guided continuous correction for safety improvements. The established theories and developed algorithms will advance frontier technologies in AI and contribute to a wide range of real applications of safe RL, such as robotics and autonomous driving, bringing enormous social and economic benefits.
Situated Anomaly Detection in an Open Environment. This project aims to investigate situated anomaly detection in an open environment. Existing anomaly detection techniques follow the setting of conventional machine learning and discover anomalies from a set of collected data. In contrast, this project proposes to develop the next generation of anomaly detection algorithms by learning from interactions with an open environment, which enables the discovery of new anomalies and the early detection of anomalies. The established theories and developed algorithms will advance frontier technologies in machine intelligence. The success of the project will contribute to a wide range of real applications in cybersecurity, defence and finance, bringing massive social and economic benefits.
Discovery Early Career Researcher Award - Grant ID: DE240100144
Funder: Australian Research Council
Funding Amount: $444,447.00
Summary:
Universal Model Selection Criteria for Scientific Machine Learning. This project aims to develop provably reliable universal model selection criteria to facilitate trustworthy scientific machine learning. Combining stochastic methods with an innovative geometric approach to basic statistical principles, this project expects to characterise, combine, and refine the most successful heuristics for designing and training huge models, such as deep neural networks, into a cohesive theoretical framework. The expected outcomes include a general toolkit for assisting neural network design at the forefront of scientific applications. This should significantly improve the quality of scientific predictions by facilitating confident adoption of deep learning methods into the pantheon of trustworthy modelling techniques.