Temporal Graph Mining for Anomaly Detection. This project aims to develop new technologies to detect anomalous patterns in dynamic networked data. Anomalies in networked data are common but often hidden within the complex interconnections of large-scale, heterogeneous, and dynamic data, rendering existing detection methods ineffective. This project expects to design novel temporal graph mining techniques to compress large-scale networks, unify heterogeneous information, and enable label-efficient anomaly detection. Performance will be assessed on social and business networks, with significant benefits to governments and businesses in many critical applications, including cyberbullying detection, malicious account detection, and cyber-attack detection.
Discovery Early Career Researcher Award - Grant ID: DE240100105
Funder: Australian Research Council
Funding Amount: $458,823.00
Summary
Towards Evolvable and Sustainable Multimodal Machine Learning. Machine learning is commonly limited to a single operational modality. Enabling simultaneous image, sound, and language comprehension would require machines to reuse knowledge and understand concepts from multimodal data. The project aims to build a sparse model and present a set of innovative algorithms that enhance model generalisation to address distributional and semantic shifts, and that minimise the computational and labelling costs of training multimodal systems. Its outcomes will enable models to keep learning after deployment to suit varying testing scenarios, while reducing energy consumption and carbon emissions. These techniques could benefit sectors such as e-commerce, agriculture, and transport.