Advances in Sequential Monte Carlo Methods for Complex Bayesian Models. This project aims to develop efficient statistical algorithms for parameter estimation of complex stochastic models that currently cannot be handled. Parameter estimation is an essential component of mathematical modelling for answering scientific questions and revealing new insights. Current parameter estimation methods can be inefficient and require too much user intervention. This project will develop novel Bayesian algorithms that are optimally automated and efficient by exploiting ever-improving parallel computing devices. The new methods will allow practitioners to process realistic models, enabling new scientific discoveries in a wide range of disciplines such as biology, ecology, agriculture, hydrology and finance.
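As a generic illustration of the sequential Monte Carlo machinery this abstract refers to, the following is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model. All parameter values, the model itself, and the variable names are illustrative assumptions; this is a textbook baseline, not the project's new algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (illustrative parameters):
#   x_t = 0.9 * x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 0.5^2)
T = 50
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.0, 0.5)
y = x + rng.normal(0.0, 0.5, size=T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 1000
particles = rng.normal(0.0, 1.0, size=N)
filtered_means = []
loglik = 0.0
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, 0.5, size=N)  # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2               # Gaussian log-weights
    shifted = np.exp(logw - logw.max())                         # stabilised weights
    loglik += logw.max() + np.log(shifted.mean()) - 0.5 * np.log(2 * np.pi * 0.25)
    w = shifted / shifted.sum()
    filtered_means.append(float(np.sum(w * particles)))         # filtered state estimate
    particles = rng.choice(particles, size=N, p=w)              # multinomial resampling
```

The resampling step is where parallel hardware pays off in practice: propagation and weighting are embarrassingly parallel across particles, which is the kind of structure the project proposes to exploit.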
Feature Learning for High-dimensional Functional Time Series. This project aims to develop new methods and theory for identifying common features in high-dimensional functional time series observed in empirical applications. The significance includes addressing a key gap in adaptive and efficient feature learning, improving forecasting accuracy and understanding forecasting-driven factors comprehensively for empirical data. Expected outcomes involve advances in big data theory and easy-to-implement algorithms for applied researchers. This project benefits not only advanced manufacturing, by identifying the optimal stopping time for wood panel compression, but also the forecasting of mortality in demography, climate data in environmental science, asset returns in finance, and electricity consumption in economics.
Principled statistical methods for high-dimensional correlation networks. This project aims to develop a novel and principled approach for building correlation networks. Correlation networks aim to identify the most significant associations present in modern massive datasets, and have numerous applications, ranging from the biomedical and environmental sciences to the social sciences. Nodes of such networks represent features, and edges represent associations, or the lack thereof. Current methods are not readily scalable to modern ultra-high dimensional settings, and do not account for uncertainty in the estimated associations. This project will develop a principled, highly scalable methodology for building such networks, which incorporates uncertainty quantification. Emphasis is placed on modern ultra-high dimensional settings in which differentiating a true correlation from a spurious one is a notoriously difficult task.
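To make the object of study concrete, here is the naive baseline the abstract argues against: estimate a sample correlation matrix and hard-threshold it into an adjacency matrix, with no uncertainty quantification. The data, threshold, and variable names are illustrative assumptions, not the project's methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 6 features; feature 1 is constructed to track feature 0.
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)

# Sample correlation matrix, then a naive hard threshold on |correlation|.
R = np.corrcoef(X, rowvar=False)                      # p x p correlation matrix
A = (np.abs(R) > 0.5) & ~np.eye(p, dtype=bool)        # adjacency: edge if |r| > 0.5

edges = [(i, j) for i in range(p) for j in range(i + 1, p) if A[i, j]]
```

With n = 200 independent observations, spurious sample correlations concentrate near zero, so only the constructed pair survives the threshold; in the ultra-high dimensional settings the project targets, the number of feature pairs explodes and this simple cutoff no longer separates true from spurious associations, which is exactly the gap described above.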
Inference for Hawkes processes with challenging data. Hawkes processes are statistical models for the analysis of high-impact event sequences, such as bushfires, earthquakes, infectious diseases, and cyber attacks. When the times and/or marks are missing for some events, or when the data is otherwise incomplete, it is challenging to fit these models and perform diagnostic checks on the fitted models. This project aims to develop novel statistical methods to fit these models in the presence of incomplete data and to check the goodness-of-fit of the fitted models. The expected outcomes include publications documenting these methods and software packages implementing them. The primary benefits include the advancement of statistical methodology and the training of junior research personnel.
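For readers unfamiliar with the model, a Hawkes process is self-exciting: each event temporarily raises the rate of future events. The sketch below simulates a univariate exponential-kernel Hawkes process with complete data via Ogata's thinning algorithm; the parameter values are illustrative assumptions, and the incomplete-data setting the project addresses is precisely where this straightforward approach breaks down.

```python
import numpy as np

rng = np.random.default_rng(2)

# Conditional intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
mu, alpha, beta = 0.5, 0.8, 1.2   # branching ratio alpha/beta < 1 keeps the process stable

def intensity(t, events):
    past = np.array([ti for ti in events if ti < t])
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

def simulate(horizon):
    """Ogata's thinning: propose from a dominating rate, accept with a probability ratio."""
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events) + alpha   # upper bound: intensity decays until the next event
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        if rng.uniform() <= intensity(t, events) / lam_bar:
            events.append(t)
    return events

events = simulate(100.0)
```

When some event times or marks are unobserved, the sum inside the intensity is incomplete, so both simulation-based checks and likelihood evaluation lose their simple form, which motivates the methods proposed here.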
A Novel Approach to Semi-Supervised Statistical Machine Learning. Recent successes in the construction of classifiers for making diagnoses and predictions are due in part to the use of large amounts of data labelled with respect to their class of origin. Typically, however, labelled data are scarce while unlabelled data are plentiful. The goal of semi-supervised learning (SSL) is to leverage large amounts of unlabelled data to improve performance when only small labelled datasets are available, so SSL is of paramount importance in applications where it is expensive or impractical to obtain much labelled data. The project will develop a novel SSL approach that models the missingness mechanism of the missing labels to build a classifier whose accuracy is not only improved but can even exceed that attainable if the missing labels had been known.
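A minimal self-training loop illustrates the generic SSL idea of letting unlabelled data sharpen a classifier fitted to few labels. This is a deliberately simple nearest-centroid sketch on synthetic data (all names and parameters are assumptions); the project's distinctive contribution, modelling the label missingness mechanism itself, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two well-separated Gaussian classes in 2D; only 10 points carry labels.
n_lab, n_unlab = 10, 500
X_lab = np.vstack([rng.normal(-2, 1, size=(n_lab // 2, 2)),
                   rng.normal(+2, 1, size=(n_lab // 2, 2))])
y_lab = np.array([0] * (n_lab // 2) + [1] * (n_lab // 2))
X_unlab = np.vstack([rng.normal(-2, 1, size=(n_unlab // 2, 2)),
                     rng.normal(+2, 1, size=(n_unlab // 2, 2))])

def centroids(X, y):
    return np.vstack([X[y == k].mean(axis=0) for k in (0, 1)])

# Self-training: pseudo-label the unlabelled points, then refit on everything.
C = centroids(X_lab, y_lab)
for _ in range(5):
    d = np.linalg.norm(X_unlab[:, None, :] - C[None, :, :], axis=2)
    y_pseudo = d.argmin(axis=1)                       # nearest-centroid pseudo-labels
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    C = centroids(X_all, y_all)                       # centroids refined by 500 extra points
```

The refit centroids are estimated from 510 points rather than 10, which is the mechanism by which unlabelled data improves the classifier.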
Fast flexible feature selection for high dimensional challenging data. The project aims to provide new frameworks for fast, flexible feature selection and appropriate modelling of heterogeneous data through structural varying-coefficient regression models. The outcomes will be a series of new statistical methods and concepts enabling more powerful modelling of complex bioscience data. The project will create the science for building reliable statistical models that take model uncertainty into account, changing how results are interpreted, and will deliver accompanying software. This will significantly improve the assessment of model confidence in the food and health research priority areas, including meat science, Huntington’s disease, and kidney transplantation.
Fast approximate inference methods: new algorithms, applications and theory. This project aims to develop new algorithms and theory for fast approximate inference and to lay down infrastructure to aid future extensions. Fast approximate inference methods are a principled and extensible means of fitting large and complex statistical models to big data sets. They come into their own in applications where speed is paramount and traditional approaches are not feasible. The project aims to deliver practical outcomes ranging from better business decision-making for insurance data warehouses to improved medical imaging technology.
High-frequency Estimation of Term Structure Models at the Zero Lower Bound. This project aims to quantify monetary policy shocks as shifts of the entire term structure of interest rates when the central bank’s policy rate is constrained at the near-zero level. The proposed method will use a high-dimensional panel of high-frequency government bond data. The term structure and the resulting policy shocks, estimated at intra-day frequencies for major economies including Australia, will be made publicly available. This project expects to deepen our understanding of how monetary policy decisions affect the macroeconomy in a near-zero interest-rate environment. This should provide significant benefits to policymakers in implementing and monitoring monetary policy to achieve desired economic outcomes.
Large dynamic time-varying models for structural macroeconomic inference. This project aims to broaden the range of macroeconomic models that have an integrated capacity for both greater realism and efficiency in analysis. This approach will be applied to two contexts at the forefront of current macroeconomic research: the effects of noisy productivity signals on business cycles and the effects of fiscal policy shocks. Flexible macro-econometric models underpin accurate inference by economists and policymakers, and the project outputs should provide widespread and significant benefits by improving policy and boosting Australia’s comparative advantage.
New methods for modelling real-world extremes. This project aims to develop new theory and methods for analysing and predicting extreme values observed in real-world processes. Many existing techniques are limited by convenient mathematical assumptions that commonly do not hold in practice: dependence at asymptotic levels, process stationarity, and that the observed data are direct measurements of the process of interest. As a result, using these techniques may produce undesirable results. Expec ....New methods for modelling real-world extremes. This project aims to develop new theory and methods for analysing and predicting extreme values observed in real-world processes. Many existing techniques are limited by convenient mathematical assumptions that commonly do not hold in practice: dependence at asymptotic levels, process stationarity, and that the observed data are direct measurements of the process of interest. As a result, using these techniques may produce undesirable results. Expected outcomes of this project include theoretically justified data analysis techniques that can accurately model extreme values seen in the real world. Project benefits include more realistic analyses of nationally important applications in climate, bushfire insurance risk, and anomaly detection.Read moreRead less
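A standard building block in this area is the peaks-over-threshold analysis: model only the exceedances above a high threshold and extrapolate to return levels. The sketch below uses synthetic data with a known exponential tail (a generalised Pareto with shape parameter zero) so the fit reduces to a mean; all parameters are illustrative assumptions, and the convenient-assumption setting shown here is exactly what the project aims to move beyond.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sample whose tail is exponential with scale 2 (an illustrative assumption).
data = rng.exponential(scale=2.0, size=5000)

# Peaks-over-threshold: keep only exceedances above a high threshold.
u = np.quantile(data, 0.95)            # threshold at the 95th percentile
exceedances = data[data > u] - u

# MLE for an exponential tail (GPD with shape 0) is simply the mean exceedance.
sigma_hat = exceedances.mean()

# m-observation return level: the value exceeded on average once per m observations,
# from P(X > u + y) = zeta * exp(-y / sigma) set equal to 1/m.
m = 10_000
zeta = exceedances.size / data.size    # empirical exceedance probability
return_level = u + sigma_hat * np.log(m * zeta)
```

Asymptotic dependence, nonstationarity, and indirectly observed processes, the three practical violations listed in the abstract, each invalidate a step of this simple recipe, which is why new theory is needed.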