New Paradigms for Robust Fitting: Kernelisation and Polyhedral Search. Outliers inevitably exist in visual data due to imperfect data acquisition or preprocessing. To enable computer vision applications that can perform reliably, robust fitting algorithms are necessary to counter the biasing influence of outliers. However, current robust algorithms are unsatisfactory: they are unreliable (due to using randomisation) or too computationally costly (due to using exhaustive search). This project will develop new robust algorithms to mitigate these shortcomings. It will do so by investigating two new paradigms of kernelisation and polyhedral search, which offer unprecedented theoretical insights into the problem. The outcomes will contribute towards computer vision applications that are more practical and reliable.
Continuously learning to see. The ultimate goal of computer vision is to make a machine able to understand the world through analysis of images or videos. The new machine learning techniques developed in this project will enable previously impossible methods of computer vision and help strengthen Australia's competitiveness in this important area.
Unifying Foundations for Intelligent Agents. This project aims to drive forward the development of rigorous foundations for intelligent agents. The agent framework, the expected utility principle, sequential decision theory, and the information-theoretic foundations of inductive reasoning and machine learning have already brought significant order into the previously heterogeneous, scattered field of artificial intelligence. This project aims to investigate an information-theoretic approach towards a unifying foundation for intelligent agents, which has recently spawned impressive applications. The theory is expected to provide a gold standard and valuable guidance for researchers working on smart software.
Democratisation of Deep Learning: Neural Architecture Search at Low Cost. The need to manually design Deep Neural Networks (DNNs) limits their usage to AI experts and hinders the exploitation of their true potential more broadly, e.g., in farming and the humanities. We aim to replace this tedious process with novel AI methods capable of generating DNNs that perform significantly better, and at a lower computational cost, than manually designed DNNs. We further expand this idea to solve complex real-world problems with both labelled and unlabelled data found in various applications, including energy and climate change. The expected outcomes include novel AI methods, highly trained AI researchers and a number of critical applications that will bring significant benefits to Australia and the world.
Machine education for trusted multi-skilled evolutionary learners. Transforming data assets into organisational knowledge assets sits in the hands of a few, highly specialised data scientists. The aim of this research is to design educational instruments that support non-experts in teaching artificial intelligence (AI) systems, much as human teachers are educated to teach human learners. The significance of the project lies in affording the wider community, smart but not necessarily expert in AI, the ability to contribute to growing our knowledge-based society in a safe, transparent and trustworthy manner. Outcomes will include innovative instruments to teach machines, novel knowledge creation, trusted and transparent AI systems, and a new generation of human teachers specialised in educating AI systems.
Feature reinforcement learning. Agent applications include speech recognition systems, vision systems, search engines, auto-pilots, spam filters, and robots. The research outputs from this project will enable agents to adapt to their environment and automatically, during deployment, acquire much of the knowledge that is currently required to be built in by agent designers.
Target-agnostic analytics: building agile predictive models for big data. This project aims to develop target-agnostic analytics, creating models of data that can be queried about any variable or feature without having to be relearned. Government and business collect vast quantities of data, but these are wasted if we cannot use them to predict the future from the past. Presently, big-data analytics is effective at predicting a single pre-defined target variable, yet in many applications, what we know about a system and what we want to find out are far more complex. This project expects to yield novel target-agnostic technologies with associated publications and open-source software. The project will expand the capabilities of machine learning, providing better use of the massive data assets collected across most public, commercial and industry sectors.
Memetic algorithms for multiobjective optimisation problems in bioinformatics. Many questions of paramount importance in life sciences can be formulated as optimisation problems but using just a single criterion can be misleading. This project will address this problem using multiobjective optimisation and leveraging Australia's investment in supercomputing with algorithms that mimic evolutionary processes in silico.
Improved syntactic and semantic analysis for natural language processing. This project aims to improve the accuracy of syntactic and semantic analysis of natural language for automatic extraction of meaning from text. Many data mining and information extraction applications rely on syntactic and semantic analysis. Current analysis approaches are limited because they require expensive manually-labelled data. The project plans to develop new indirectly-supervised approaches to overcome this labelled data bottleneck. By integrating information from large text corpora and structured databases, the project aims to minimise the reliance on manually-labelled data for training natural language processing systems. Automatic methods for syntactic and semantic analysis would have a wide range of applications in extracting information from large collections of unstructured data, such as hospital patient records or social media.
Stay well: Analysing lifestyle data from smart monitoring devices. Pervasive health monitoring devices provide a rich data source with the opportunity to continuously extract patterns and guide individuals towards their goals of wellbeing. To exploit this nexus between machine learning and pervasive computing, this project aims to solve the computational problems of analysing data from such wearable devices, applying rigorous statistical models to discover latent patterns and groupings. The significance lies in solving fundamental problems related to heterogeneous, multi-level, mixed-type time series data. The proposed outcomes are expected to enable monitoring of people 'in the wild', away from doctors and hospitals, thus significantly reducing the burgeoning cost of hospital visits and stays.