Mixture models for high-dimensional clustering with applications to tumour classification, network intrusion, and text classification. This project will benefit Australian society as a whole by developing statistical methodology for the clustering of high-dimensional data. In particular, it will develop a novel and efficient model for extracting useful information from subpopulations. It thus has wide applicability to improving the quality and validity of applied research in most industries in Australia. More specifically, it is to be applied here to classify brain tumours and detect network intruders. This cross-disciplinary project will contribute to Australia's economy and public health, protect Australia from crime, and strengthen Australian researchers' capacity to participate in this emerging science.
Market segmentation methodology: attacking the 'Too Hard' basket. Businesses embrace market segmentation to identify and target clients. However, poor segmentation analysis leads to poor segment choice. This project will develop tools to improve segmentation analysis, test the resulting tools in tourism, foster care and climate change mitigation behaviours, and produce usable, transferable recommendations.
Discovery Early Career Researcher Award - Grant ID: DE170101134
Funder
Australian Research Council
Funding Amount
$360,000.00
Summary
Feasible algorithms for big inference. This project aims to develop algorithms for computationally intensive statistical tools to analyse Big Data. Big Data is ubiquitous in science, engineering, industry and finance, but requires specialised machine learning to conduct correct inferential analysis. Computational bottlenecks render many tried-and-true tools of statistical inference inadequate. This project will develop tools including false discovery rate control, heteroscedastic and robust regression, and mixture models, via Big Data-appropriate optimisation and composite-likelihood estimation. It will make open, well-documented, and accessible software available for the scalable and distributable analysis of Big Data. The expected outcome is a suite of scalable algorithms for analysing Big Data.