Australian Laureate Fellowships - Grant ID: FL150100150
Funder
Australian Research Council
Funding Amount
$2,413,112.00
Summary
Bayesian learning for decision making in the big data era. This fellowship project aims to develop new techniques in evidence-based learning and decision-making in the big data era. Big data has arrived, and with it a huge global demand for statistical knowledge and skills to analyse these data for improved learning and decision-making. This project will seek to address this need by creating a step-change in knowledge in Bayesian statistics and translating this knowledge to real-world challenges in industry, environment and health. The new big data statistical analysts trained through the project could also create much needed capacity at national and international levels.
Classification methods for providing personalised and class decisions. This project provides a novel approach to the clustering of multivariate samples on entities in a class that automatically matches the sample clusters across the entities, allowing for inter-sample variation between the samples in a class. The project aims to develop a widely applicable, mixture-model-based framework for the simultaneous clustering of multivariate samples with inter-sample variation in a class and for the matching of the clusters across the entities in the class. The project will use a statistical approach to automatically match the clusters, since the overall mixture model provides a template for the class. It will provide a basis for discriminating between different classes, in addition to the identification of atypical data points within a sample and of anomalous samples within a class. Key applications include biological image analysis and the analysis of data in flow cytometry, which is one of the fundamental research tools for the life scientist.
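The mixture-model clustering described above is built on expectation-maximisation fits of finite mixtures. As a minimal sketch of that building block (not the project's matching framework itself), the following fits a two-component univariate Gaussian mixture by EM; the component count, initialisation and simulated data are illustrative assumptions.

```python
# Minimal EM sketch for a two-component univariate Gaussian mixture,
# the basic ingredient of mixture-model-based clustering.
import numpy as np

def em_gmm(x, n_iter=50):
    # crude initialisation from the data quantiles (illustrative choice)
    mu = np.quantile(x, [0.25, 0.75])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of weights, means, scales
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
pi, mu, sigma = em_gmm(x)
```

In the project's setting the fitted mixture acts as a template for the class, so clusters from different entities can be matched through their posterior responsibilities rather than by ad hoc relabelling.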
Statistical methods for quantifying variation in spatiotemporal areal data. This project aims to develop new statistical methods for extracting insights into spatial and temporal variation in areal data. These tools will extend the Australian Cancer Atlas, which provides small-area estimates for 20 cancers across Australia. The project is significant because it will allow government and other organisations to reap dividends from investment in collecting spatial information, and it will enable modelled small-area estimates to be released without compromising confidentiality. The expected outcomes include new statistical knowledge and new insights into cancer. The results will benefit the many disciplines, managers and policy makers that make decisions based on geographic data mapped over space and time.
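The core statistical idea behind small-area estimates like the Atlas's is shrinkage: unstable rates from sparsely populated areas are pulled toward the overall rate. A hedged toy version is the conjugate Poisson-Gamma model below; the counts and prior strength are illustrative, and the Atlas itself uses far richer spatiotemporal Bayesian models.

```python
# Toy empirical-Bayes shrinkage of small-area relative rates:
# cases ~ Poisson(expected * theta), theta ~ Gamma(a, b), with a/b = 1
# centring the prior on the national rate. Parameters are illustrative.
def shrunken_rate(cases, expected, a=10.0, b=10.0):
    """Posterior mean of the area's relative rate theta."""
    return (a + cases) / (b + expected)

# A tiny area (2 cases vs 1 expected) is shrunk heavily toward 1.0,
# while a large area stays close to its observed ratio of 2.0.
small = shrunken_rate(cases=2, expected=1.0)      # (10+2)/(10+1) ≈ 1.09
large = shrunken_rate(cases=200, expected=100.0)  # (10+200)/(10+100) ≈ 1.91
```

Releasing these smoothed estimates, rather than raw counts, is also what allows small-area maps to be published without compromising confidentiality.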
Centre for Mathematical and Statistical Modelling of Complex Systems. This Centre, formed by a group of high-profile researchers, brings together expertise from linked but hitherto disparate areas. It will place Australia at the forefront of research into complex systems.
The mission of the Centre is to stimulate research in mathematical and statistical modelling of complex systems and to encourage cross-fertilisation of ideas and techniques. The specific objectives are:
- to formulate and analyse mathematical and statistical models for natural and artificial complex systems,
- to use these models to develop an understanding of the behaviour of these systems, and
- to incorporate this understanding into strategies for management and control.
ARC Centre of Excellence for Mathematical and Statistical Frontiers of Big Data, Big Models, New Insights. In today's world, massive amounts of data in a variety of forms are collected daily from a multitude of sources. Many of the resulting data sets have the potential to make vital contributions to society, business and government, as well as to impact international developments, but are so large or complex that they are difficult to process and analyse using traditional tools. The aim of this Centre is to create innovative mathematical and statistical models that can uncover the knowledge concealed within the size and complexity of these big data sets, with a focus on using the models to deliver insight into problems vital to the Centre's Collaborative Domains: Healthy People, Sustainable Environments and Prosperous Societies.
Stochastic majorization-minimization algorithms for data science. The changing nature of data acquisition and storage has made drawing inference with traditional statistical and machine learning methods infeasible. Modern data are often acquired in real time, in an incremental fashion, and are often too voluminous to process on conventional machinery. The project proposes to study the family of stochastic majorisation-minimisation algorithms for computing inferential quantities incrementally. The proposed stochastic algorithms encompass and extend a wide variety of current algorithmic frameworks for fitting statistical and machine learning models, and can be used to produce feasible and practical algorithms for complex models, both current and future.
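A majorisation-minimisation (MM) algorithm replaces a hard objective with an easy surrogate that lies above it and touches it at the current iterate, then minimises the surrogate. As a minimal batch sketch of that surrogate idea (the stochastic variants the project studies extend it to streaming data), the sample median, which minimises the non-smooth sum of absolute deviations, can be computed by majorising each absolute value with a quadratic, giving a weighted-mean update:

```python
# MM sketch: minimise sum_i |x_i - m| (whose minimiser is the median) by
# majorising each |x_i - m| with a quadratic at the current iterate m_t,
# which turns each step into a simple weighted mean.
def mm_median(x, n_iter=100, eps=1e-8):
    m = sum(x) / len(x)  # start from the mean
    for _ in range(n_iter):
        # weights of the quadratic surrogate at the current iterate;
        # eps guards against division by zero at a data point
        w = [1.0 / max(abs(xi - m), eps) for xi in x]
        m = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return m

m = mm_median([1.0, 2.0, 3.0, 10.0, 100.0])  # converges to the median, 3.0
```

Each update provably decreases the original objective, which is what makes the MM family so robust; the stochastic versions apply the same surrogate trick to one data block at a time.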
New Directions in Bayesian Statistics: formulation, computation and application to exemplar challenges. Bayesian statistics is a fundamental statistical and machine learning approach for density estimation, data analysis and inference. However, there remain open questions regarding the formulation of the model, the likelihood and priors, and efficient computation. This project proposes new approaches that address these issues, and applies them to two exemplar challenges: the impact of climate change on the Great Barrier Reef and better understanding of neurological diseases related to aging, in particular Parkinson's Disease.
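The "efficient computation" question above usually means sampling from a posterior that has no closed form. A hedged sketch of the standard baseline, a random-walk Metropolis sampler, is below; the target (a standard normal log-posterior) and the tuning values are illustrative stand-ins, not the project's methods.

```python
# Random-walk Metropolis: propose a local move, accept with probability
# min(1, posterior ratio). The stand-in target is a standard normal.
import math
import random

def log_post(theta):
    return -0.5 * theta * theta  # illustrative log-posterior (up to a constant)

def metropolis(n_samples=5000, step=1.0, seed=1):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        # log-space accept/reject avoids under/overflow in the ratio
        if math.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        samples.append(theta)
    return samples

samples = metropolis()
```

Only the unnormalised log-posterior is needed, which is why the choice of likelihood and priors (the formulation questions above) feeds directly into how hard the computation becomes.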
Discovery Early Career Researcher Award - Grant ID: DE170101134
Funder
Australian Research Council
Funding Amount
$360,000.00
Summary
Feasible algorithms for big inference. This project aims to develop algorithms for computationally intensive statistical tools to analyse Big Data. Big Data is ubiquitous in science, engineering, industry and finance, but requires specialised machine learning to conduct correct inferential analysis. Computational bottlenecks make many tried-and-true tools of statistical inference inadequate. This project will develop tools including false discovery rate control, heteroscedastic and robust regression, and mixture models, via Big Data-appropriate optimisation and composite-likelihood estimation. It will make open, well-documented and accessible software available for the scalable and distributable analysis of Big Data. The expected outcome is a suite of scalable algorithms to analyse Big Data.
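Of the tools named above, false discovery rate control has a compact classical form: the Benjamini-Hochberg step-up procedure, sketched below. The p-values are illustrative, and scaling this to Big Data (the project's contribution) is the hard part, not the procedure itself.

```python
# Benjamini-Hochberg step-up procedure for FDR control: sort the p-values,
# find the largest rank k with p_(k) <= (k/m) * alpha, and reject the k
# hypotheses with the smallest p-values.
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank  # step-up: keep the largest qualifying rank
    return sorted(order[:k])

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6])  # → [0, 1]
```

Note the step-up structure: a p-value may fail its own threshold yet still be rejected if a larger p-value qualifies, which is why the loop keeps the largest qualifying rank rather than stopping early.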