Reliable and accurate statistical solutions for modern complex data. This project aims to develop novel methods for reliable and accurate statistical modelling with modern, complex, correlated and error-prone data. The project expects to make significant strides towards future-proofing statistical data analysis, equipping practitioners with a suite of robust and computationally efficient methods that provide confidence in the stability and reproducibility of results obtained, while offering guarantees on their transferability across a range of populations. These methods will provide important benefits when applied to predicting endangered marine species for fisheries conservation, and to enhancing our national understanding of the relationship between educational achievement and financial success.
Discovery Early Career Researcher Award - Grant ID: DE240101190
Funder: Australian Research Council
Funding Amount: $451,000.00
Summary
Innovating and Validating Scalable Monte Carlo Methods. This project aims to develop innovative scalable Monte Carlo methods for statistical analysis in the presence of big data or complex mathematical models. Existing approaches to scalable Monte Carlo are only approximate, and their inaccuracies are difficult to quantify. This can have a detrimental impact on data-based decision making. The expected outcomes of this project are scalable Monte Carlo methods that are more accurate, fast, and capable of quantifying inaccuracies. Scientists and decision-makers will benefit from the ability to obtain timely, reliable insights for challenging applications.
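To make the gap concrete, here is a minimal sketch of the kind of approximation the summary refers to: a random-walk Metropolis sampler (with an implicit flat prior) whose log-likelihood is either computed on the full dataset or replaced by a rescaled subsample estimate. Everything here, from the Gaussian model to the step size, is an illustrative assumption, not the project's method.

```python
# A minimal sketch, not the project's method: naive subsampled Metropolis.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)  # big synthetic dataset

def log_like(theta, x):
    # Gaussian log-likelihood with unit variance, up to an additive constant
    return -0.5 * np.sum((x - theta) ** 2)

def metropolis(x, n_iter=2_000, step=0.05, subsample=None):
    theta, n = 0.0, len(x)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        # Full-data log-likelihood, or a rescaled subsample estimate of it
        batch = x if subsample is None else rng.choice(x, size=subsample)
        scale = 1.0 if subsample is None else n / subsample
        prop = theta + step * rng.normal()
        log_ratio = scale * (log_like(prop, batch) - log_like(theta, batch))
        if np.log(rng.random()) < log_ratio:
            theta = prop
        chain[i] = theta
    return chain

exact = metropolis(data)                  # exact target, costly per iteration
approx = metropolis(data, subsample=500)  # cheap, but only approximate
print("full-data posterior mean estimate: ", exact[500:].mean())
print("subsampled posterior mean estimate:", approx[500:].mean())
```

The subsampled chain is cheap per iteration but no longer targets the exact posterior, and the size of that discrepancy is hard to assess in advance; quantifying and removing such inaccuracies is precisely the gap the project describes.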
A Novel Approach to Semi-Supervised Statistical Machine Learning. Recent successes in the construction of classifiers for making diagnoses and predictions are due in part to the use of large amounts of data labelled with respect to their class of origin. Typically, however, labelled data are scarce while unlabelled data are plentiful. The goal of semi-supervised learning (SSL) is to leverage large amounts of unlabelled data to improve on the performance attainable with only small labelled datasets, so SSL is of paramount importance for applications where it is expensive or impractical to obtain much labelled data. The project is to develop a novel SSL approach that adopts a missingness mechanism for the missing labels to build a classifier whose accuracy is not only improved but can even exceed that achievable if the missing labels were known.
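As a point of reference for what leveraging unlabelled data means in practice, below is a minimal sketch of likelihood-based SSL: EM for a two-class Gaussian mixture (unit variances) that pools a small labelled sample with a large unlabelled one. It ignores the missingness mechanism for the labels, which is exactly the assumption the project proposes to exploit; all data and variable names are illustrative.

```python
# A minimal sketch of mixture-model SSL, not the project's estimator.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic two-class problem: a small labelled set and a large unlabelled one
x_lab = np.r_[rng.normal(-1, 1, 20), rng.normal(2, 1, 20)]
y_lab = np.r_[np.zeros(20), np.ones(20)]
x_unl = np.r_[rng.normal(-1, 1, 1000), rng.normal(2, 1, 1000)]

pi, mu = np.array([0.5, 0.5]), np.array([-0.5, 0.5])  # initial guesses
for _ in range(50):
    # E-step: posterior class probabilities for the unlabelled points
    dens = np.exp(-0.5 * (x_unl[:, None] - mu) ** 2) * pi
    resp = dens / dens.sum(axis=1, keepdims=True)
    # Labelled points contribute known (0/1) responsibilities
    w = np.vstack([np.column_stack([1 - y_lab, y_lab]), resp])
    x_all = np.r_[x_lab, x_unl]
    # M-step: pooled updates from labelled and unlabelled data together
    pi = w.mean(axis=0)
    mu = (w * x_all[:, None]).sum(axis=0) / w.sum(axis=0)

print("estimated class proportions:", pi)
print("estimated class means:", mu)
```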
Technology-Driven and Scalable Regression Methodology, Computing and Theory. Regression is a mainstay of data analysis, statistics, machine learning and data science, but is in continual need of enhancement in the face of technological change. Scalability and flexibility for the handling of non-linear signals are fundamental to the practical utility of new regression methodology. Several streams of research aimed at confronting data from specific technologies as well as generic types of data are proposed. The project is to be networked with researchers in the United States of America and aims to have Australia-based researchers providing leadership in terms of methodological, theoretical, computational and software development.
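One generic ingredient behind scalable and flexible handling of non-linear signals is penalised basis regression, sketched below with a truncated-line spline basis and a ridge-type penalty. This is a textbook device, not the project's proposed methodology; the basis, knot placement and smoothing parameter are illustrative choices.

```python
# A minimal sketch of penalised spline regression under illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 500))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 500)  # non-linear signal + noise

knots = np.linspace(0, 1, 20)[1:-1]  # interior knots on [0, 1]
# Design matrix: intercept, linear term, and truncated-line spline terms
X = np.column_stack([np.ones_like(x), x, np.maximum(x[:, None] - knots, 0.0)])
lam = 1e-3  # smoothing parameter (would be chosen by cross-validation)
# Penalise only the spline coefficients, not the intercept and slope
P = np.diag(np.r_[0.0, 0.0, np.ones(len(knots))])
beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)  # one ridge-type solve
print("residual std of the fit:", np.std(y - X @ beta))
```

The fit reduces to a single linear solve, which is what makes basis-plus-penalty formulations attractive at scale.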
Feature Learning for High-dimensional Functional Time Series. This project aims to develop new methods and theories for identifying common features in high-dimensional functional time series observed in empirical applications. Its significance includes addressing a key gap in adaptive and efficient feature learning, improving forecasting accuracy, and comprehensively understanding forecasting-driven factors in empirical data. Expected outcomes involve advances in big data theory and easy-to-implement algorithms for applied researchers. The project benefits advanced manufacturing, by identifying optimal stopping times for wood panel compression, as well as demography, environmental science, finance, and economics, through superior forecasting of mortality, climate data, asset returns, and electricity consumption.
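A standard building block for feature learning with functional time series is functional principal component analysis, sketched below via the SVD of a matrix of discretised curves; the leading score series form a low-dimensional summary that can then be forecast. This is a classical baseline, not the project's new methodology, and the synthetic data are illustrative.

```python
# A minimal FPCA sketch on synthetic curves, not the project's method.
import numpy as np

rng = np.random.default_rng(3)
n_curves, n_grid = 200, 101  # e.g. 200 periods, each observed on a fine grid
t = np.linspace(0, 1, n_grid)
# Synthetic functional time series: two smooth components plus noise
scores_true = np.column_stack([np.cumsum(rng.normal(0, 0.1, n_curves)),
                               rng.normal(0, 0.2, n_curves)])
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
curves = scores_true @ basis + rng.normal(0, 0.05, (n_curves, n_grid))

# FPCA via the SVD of the centred data matrix
mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 2                                          # number of retained components
components = Vt[:k]                            # estimated principal functions
scores = (curves - mean_curve) @ components.T  # score series, ready to forecast
print(f"variance explained by {k} components: {(s[:k]**2).sum() / (s**2).sum():.2%}")
```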
Mitigating bias in statistical analyses of data collected over time. This project aims to develop innovative nonparametric distribution and regression curve estimation techniques for data collected over time. These curves are key statistical tools for describing populations, but their standard estimators are often inefficient when the data are massive, growing and changing over time, or too restrictive when the data exhibit measurement errors and a fraction of them are exactly zero. The project expects to develop novel, less restrictive and more realistic nonparametric curve estimation methods for these complex settings. Outcomes include new practical statistical methods and software to benefit experts in diverse fields, from nutrition and epidemiology to environmental science and digital platforms, amongst others.
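To illustrate why measurement error forces a change of estimator, below is a minimal sketch of the classical deconvolution kernel density estimator under Laplace(0, b) errors, where a Gaussian kernel K yields the closed-form deconvoluting kernel K_U(u) = K(u)(1 - (b/h)^2 (u^2 - 1)). This is the textbook correction, not the project's new methods; the bandwidth and error scale are illustrative.

```python
# A minimal deconvolution KDE sketch under assumed Laplace measurement error.
import numpy as np

rng = np.random.default_rng(4)
n, b, h = 2000, 0.3, 0.25      # sample size, error scale, bandwidth
x = rng.normal(0, 1, n)        # true values (never observed)
w = x + rng.laplace(0, b, n)   # contaminated observations

def gauss(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def fhat(grid):
    u = (grid[:, None] - w) / h  # standardised distances, shape (grid, n)
    # Deconvoluting kernel for Laplace errors and a Gaussian base kernel
    k_u = gauss(u) * (1 - (b / h) ** 2 * (u ** 2 - 1))
    return k_u.mean(axis=1) / h

grid = np.linspace(-4, 4, 9)
print(np.round(fhat(grid), 3))  # estimates the density of x, not of w
```

A naive kernel estimator applied directly to the contaminated observations w would estimate the density of w rather than of x; the modified kernel undoes the smoothing induced by the error distribution.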
Inference for Hawkes processes with challenging data. Hawkes processes are statistical models for the analysis of high-impact event sequences, such as bushfires, earthquakes, infectious diseases, and cyber attacks. When the times and/or marks of some events are missing, or when the data are otherwise incomplete, it is challenging to fit these models and to perform diagnostic checks on the fitted models. This project aims to develop novel statistical methods for fitting these models in the presence of incomplete data and for checking the goodness-of-fit of the fitted models. The expected outcomes include publications documenting these methods and software packages implementing them. The primary benefits include the advancement of statistical methodology and the training of junior research personnel.
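For context on what fitting these models involves in the complete-data case, below is a minimal sketch of the log-likelihood of a Hawkes process with exponential kernel, lambda(t) = mu + alpha * sum over t_i < t of exp(-beta (t - t_i)), computed with the standard O(n) recursion. The incomplete-data setting targeted by the project is precisely where this computation breaks down; all parameter values and event times here are illustrative.

```python
# A minimal complete-data Hawkes log-likelihood, not the project's methods.
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of an exponential-kernel Hawkes process on [0, T]."""
    times = np.asarray(times, dtype=float)
    a, ll, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            # O(1) update of the sum of decayed excitations from past events
            a = np.exp(-beta * (t - prev)) * (1.0 + a)
        ll += np.log(mu + alpha * a)  # log-intensity at each observed event
        prev = t
    # Subtract the compensator: the integral of the intensity over [0, T]
    ll -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return ll

events = [0.5, 0.9, 1.0, 2.7, 3.1, 3.2, 5.4]  # toy event times
print(hawkes_loglik(events, T=6.0, mu=0.5, alpha=0.8, beta=1.5))
```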