A Novel Approach to Semi-Supervised Statistical Machine Learning. Recent successes in the construction of classifiers for making diagnoses and predictions are due in part to their use of large amounts of data labelled with respect to class of origin. In practice, however, labelled data are typically scarce while unlabelled data are plentiful. The goal of semi-supervised learning (SSL) is to leverage large amounts of unlabelled data to improve the performance obtainable from small labelled datasets alone, so SSL is of paramount importance to applications where it is expensive or impractical to obtain much labelled data. The project is to develop a novel SSL approach that adopts a missingness mechanism for the missing labels to build a classifier whose accuracy is not only improved but can exceed that achievable if the missing labels had been known.
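As a minimal illustration of the likelihood-based idea behind such an approach (though not of the project's missingness-mechanism modelling, which is not detailed here), the sketch below fits a two-class Gaussian classifier by EM, treating the missing labels as latent variables. All data and parameter names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes; only ~5% of points keep their labels.
n = 500
y_true = rng.integers(0, 2, n)
X = rng.normal(loc=y_true[:, None] * 3.0, scale=1.0, size=(n, 2))
labelled = rng.random(n) < 0.05
y_obs = np.where(labelled, y_true, -1)     # -1 marks a missing label

# Initialise from the labelled points alone.
mu = np.array([X[y_obs == g].mean(axis=0) for g in (0, 1)])
cov = [np.cov(X[y_obs == g].T) + 1e-6 * np.eye(2) for g in (0, 1)]
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: soft class posteriors for unlabelled points,
    # hard (known) memberships for labelled ones.
    dens = np.column_stack([
        pi[g] * multivariate_normal.pdf(X, mu[g], cov[g]) for g in (0, 1)
    ])
    tau = dens / dens.sum(axis=1, keepdims=True)
    for g in (0, 1):
        tau[y_obs == g] = np.eye(2)[g]
    # M-step: weighted Gaussian parameter updates.
    for g in (0, 1):
        w = tau[:, g]
        mu[g] = (w[:, None] * X).sum(axis=0) / w.sum()
        d = X - mu[g]
        cov[g] = (w[:, None] * d).T @ d / w.sum() + 1e-6 * np.eye(2)
    pi = tau.mean(axis=0)

accuracy = (tau.argmax(axis=1) == y_true).mean()
print(f"accuracy using labelled + unlabelled data: {accuracy:.3f}")
```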
Advanced Mixture Models for the Analysis of Modern-Day Data. Extracting key information from huge data sets is critical to the scientific successes of the future. This project will develop novel mixture models that can be used directly to analyse complex and high-dimensional data sets that may consist of thousands of variables observed on only a limited number of entities. To handle the challenging problems arising in this situation, the project develops mixtures of factor models with options for skew distributions that can be used to effectively analyse such data. Key applications include the domains of bioinformatics, biostatistics, business, data mining, economics, finance, image analysis, marketing, and personalised medicine, among many others.
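The building block of such a mixture of factor models is a single factor-analysis model, whose covariance is low-rank-plus-diagonal; the sketch below, assuming scikit-learn's FactorAnalysis, shows that building block on high-dimensional toy data. A mixture combines several such models via EM, which is not shown here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# High-dimensional toy data: p = 200 variables, n = 60 entities,
# generated from q = 3 latent factors plus diagonal noise.
n, p, q = 60, 200, 3
Z = rng.normal(size=(n, q))                 # latent factor scores
Lambda = rng.normal(size=(p, q))            # factor loadings
X = Z @ Lambda.T + rng.normal(scale=0.5, size=(n, p))

fa = FactorAnalysis(n_components=q).fit(X)

# The fitted covariance is low-rank-plus-diagonal: LL' + Psi.
# A mixture of factor analyzers fits one such model per cluster.
L_hat = fa.components_.T                    # (p, q) estimated loadings
print("loadings shape:", L_hat.shape)
print("noise variances (first 5):", np.round(fa.noise_variance_[:5], 3))
```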
Joint clustering and matching of multivariate samples across objects. The project will provide a novel and very effective approach to the clustering of multivariate samples on objects, say patients, that automatically matches the sample clusters across the objects. A key application is the matching of biologically relevant cell subtypes across patients for use in the study and the clinical diagnosis and prognosis of cancer.
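The project's joint model is not detailed here, but one standard ingredient of matching cluster labels across objects can be sketched in isolation: optimal assignment between the cluster centres of two patients via the Hungarian algorithm (SciPy's linear_sum_assignment). The data and cluster counts are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Two "patients", each with samples from the same three cell subtypes
# (cluster labels come out in arbitrary order for each patient).
centres = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X1 = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in centres])
X2 = np.vstack([rng.normal(c, 0.5, size=(80, 2)) for c in centres])

km1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X1)
km2 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X2)

# Match patient 2's clusters to patient 1's by minimising the total
# distance between cluster centres (Hungarian algorithm).
cost = cdist(km1.cluster_centers_, km2.cluster_centers_)
row, col = linear_sum_assignment(cost)
print("patient-1 cluster -> patient-2 cluster:", dict(zip(row, col)))
```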
Expanding the role of mixture models in statistical analyses of big data. This project aims to develop theoretical procedures to scale inference and learning algorithms to analyse big data sets. It will develop analytic tools and algorithms to analyse big data sets that classical methods of inference cannot handle directly due to the data's complexity or size. This will accelerate the progress of scientific discovery and innovation, leading, for example, to new fields of inquiry; to an increase in understanding from studies of human and social processes and interactions; and to the promotion of economic growth and improved health and quality of life. Such applications should lead to breakthrough discoveries and innovation in science, engineering, medicine, commerce, education and national security.
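One standard device for scaling mixture-model fitting, sketched below under illustrative settings, is stochastic (mini-batch) EM in the style of Cappé and Moulines: running sufficient statistics are updated from small batches rather than full passes over the data. This is a generic technique, not necessarily the project's own method.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# A data "stream" too large to sweep repeatedly: two Gaussian components.
stream = np.concatenate([rng.normal(-2, 1, 500_000), rng.normal(3, 1, 500_000)])
rng.shuffle(stream)

K, batch = 2, 256
pi, mu, sigma = np.full(K, 0.5), np.array([-1.0, 1.0]), np.ones(K)
# Running sufficient statistics: weight, sum x, sum x^2 per component.
s0, s1, s2 = pi.copy(), pi * mu, pi * (mu**2 + sigma**2)

for t, start in enumerate(range(0, len(stream), batch)):
    x = stream[start:start + batch]
    # E-step on the mini-batch only.
    dens = np.column_stack([pi[k] * norm.pdf(x, mu[k], sigma[k]) for k in range(K)])
    tau = dens / dens.sum(axis=1, keepdims=True)
    # Stochastic-approximation update of the sufficient statistics.
    rho = (t + 2) ** -0.6                     # decaying step size
    s0 = (1 - rho) * s0 + rho * tau.mean(axis=0)
    s1 = (1 - rho) * s1 + rho * (tau * x[:, None]).mean(axis=0)
    s2 = (1 - rho) * s2 + rho * (tau * x[:, None] ** 2).mean(axis=0)
    # M-step from the running statistics.
    pi, mu = s0, s1 / s0
    sigma = np.sqrt(np.maximum(s2 / s0 - mu**2, 1e-6))

print("weights:", np.round(pi, 3), "means:", np.round(mu, 3))
```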
A new approach to fast matrix factorization for the statistical analysis of high-dimensional data. Some form of dimension reduction is essential in order to extract meaningful information from huge data sets. For this purpose we provide a novel and very fast approach to the factorization of the data matrix. It has wide applicability for improving the quality and validity of research in science and medicine and in most industries in Australia.
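The project's own factorization method is not spelled out here; as a stand-in, the sketch below implements the well-known randomized range-finder approach to fast truncated SVD (Halko, Martinsson and Tropp), which illustrates the kind of speed gain such methods target. Matrix sizes and ranks are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def randomized_svd(A, rank, oversample=10, power_iters=2):
    """Approximate truncated SVD via a randomized range finder."""
    m, n = A.shape
    # Sample the column space of A with a Gaussian test matrix.
    Omega = rng.normal(size=(n, rank + oversample))
    Y = A @ Omega
    for _ in range(power_iters):          # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Project to the small subspace and do an exact SVD there.
    B = Q.T @ A
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# Low-rank-plus-noise test matrix.
A = rng.normal(size=(2000, 50)) @ rng.normal(size=(50, 1000))
A += 0.01 * rng.normal(size=A.shape)

U, s, Vt = randomized_svd(A, rank=50)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.2e}")
```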
Large-Scale Statistical Inference: Multiple Testing. Multiple testing procedures are among the most important statistical tools for the analysis of modern data. This project aims to develop new methods for providing more powerful simultaneous tests while controlling the proportion of false positive conclusions. They are proposed to be derived by the novel pooling of information in individual attribute-based contrasts to produce a Weighted Individual attribute-Specific Contrast (WISC) based statistic. They will also exploit contextual information. They are expected to be of direct application to the problem of testing for no differences between two or more classes, as in the detection of differential expression in bioinformatics. Other key applications are expected to include biomedicine, economics, finance, genetics, and neuroscience.
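The WISC statistic itself is specific to the project, but the false-discovery-rate backbone that such statistics plug into is the classical Benjamini-Hochberg step-up procedure, sketched here on simulated p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejections controlling FDR at level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k / m) * alpha; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

rng = np.random.default_rng(5)
# 9000 true nulls (uniform p-values) and 1000 signals (small p-values).
pvals = np.concatenate([rng.uniform(size=9000), rng.beta(0.1, 10, size=1000)])
rej = benjamini_hochberg(pvals, alpha=0.05)
print("rejections:", rej.sum(), "of which false:", rej[:9000].sum())
```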
Stochastic majorisation-minimisation algorithms for data science. The changing nature of data acquisition and storage has made the process of drawing inference infeasible with traditional statistical and machine learning methods. Modern data are often acquired in real time, arrive incrementally, and are often too voluminous to process on conventional machinery. The project proposes to study the family of stochastic majorisation-minimisation algorithms for computing inferential quantities in an incremental manner. The proposed stochastic algorithms encompass and extend a wide variety of current algorithmic frameworks for fitting statistical and machine learning models, and can be used to produce feasible and practical algorithms for complex models, both current and future.
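As background for the stochastic variants, the sketch below shows the full-batch MM idea: each step minimises a quadratic surrogate that majorises the objective, here via the Böhning-Lindsay curvature bound for logistic regression. Stochastic versions would replace the full-data surrogate with mini-batch approximations, which is not shown; the data and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy logistic-regression data.
n, p = 1000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = rng.normal(size=p)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def neg_loglik(beta):
    eta = X @ beta
    return np.sum(np.logaddexp(0, eta) - y * eta)

# MM: the Hessian of the negative log-likelihood is dominated by
# X'X / 4 (Bohning-Lindsay bound), so minimising the resulting quadratic
# surrogate gives a monotone, step-size-free update.
H_inv = np.linalg.inv(X.T @ X / 4)
beta = np.zeros(p)
for _ in range(200):
    grad = X.T @ (1 / (1 + np.exp(-(X @ beta))) - y)
    beta = beta - H_inv @ grad   # guaranteed not to increase neg_loglik

print("final negative log-likelihood:", round(neg_loglik(beta), 2))
print("estimate:", np.round(beta, 2))
print("truth:   ", np.round(beta_true, 2))
```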
Large Markov decision processes and combinatorial optimisation. Markov decision processes continue to gain in popularity for modelling a wide range of applications, from the analysis of supply chains and queueing networks to cognitive science and the control of autonomous vehicles. Nonetheless, they quickly become numerically intractable as the size of the model grows. Recent works use machine learning techniques to overcome this crucial issue, but with no convergence guarantee. This project aims to provide theoretically sound frameworks for solving large Markov decision processes, and to exploit them to solve important combinatorial optimisation problems. This timely project can promote Australia's position in the development of such novel frameworks for many scientific and industrial applications.
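The exact dynamic-programming baseline that such approximate frameworks are measured against is value iteration on a tabular MDP, sketched below on a small random model; the transition and reward arrays are purely illustrative, and the intractability the project targets arises when the state space is far too large for this table-based approach.

```python
import numpy as np

rng = np.random.default_rng(7)

# A small random MDP: S states, A actions, transition tensor P[a, s, s']
# and reward matrix R[s, a].
S, A, gamma = 20, 4, 0.95
P = rng.dirichlet(np.ones(S), size=(A, S))   # rows sum to 1
R = rng.normal(size=(S, A))

# Value iteration: V <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
V = np.zeros(S)
for _ in range(1000):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal value of state 0:", round(V[0], 3))
print("greedy policy:", policy)
```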
Discovery Early Career Researcher Award - Grant ID: DE170101134
Funder: Australian Research Council
Funding Amount: $360,000.00
Summary:
Feasible algorithms for big inference. This project aims to develop algorithms for computationally intensive statistical tools to analyse Big Data. Big Data is ubiquitous in science, engineering, industry and finance, but requires specialised machine learning methods to conduct correct inferential analysis. Computational bottlenecks make many tried-and-true tools of statistical inference inadequate. This project will develop tools including false discovery rate control, heteroscedastic and robust regression, and mixture models, via Big Data-appropriate optimisation and composite-likelihood estimation. It will make open, well-documented, and accessible software available for the scalable and distributable analysis of Big Data. The expected outcome is a suite of scalable algorithms to analyse Big Data.
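Of the tools listed, robust regression admits the most compact sketch: a Huber M-estimator fitted by iteratively reweighted least squares, shown below on contaminated toy data. A Big Data version would stream or distribute the weighted solves; this sketch is only the in-memory baseline.

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear data with a handful of gross outliers.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[:20] += 15.0                                   # contaminate 4% of responses

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimate via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745    # robust scale (MAD)
        u = np.abs(r) / scale
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights
        Xw = w[:, None] * X                      # rows of X scaled by weights
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

print("OLS:  ", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 2))
print("Huber:", np.round(huber_irls(X, y), 2))
```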
Improving Productivity and Efficiency of Australian Airports – A Real Time Analytics and Statistical Approach. Aviation is a major economic driver both within Australia and overseas, but the aviation industry faces growing challenges from the increase in passengers and changing regulations. To meet these challenges, airports, airlines, government agencies and others need to maximise their efficiency and productivity; however, complex dependencies and differing operational objectives complicate this task. This project aims to develop a real-time, whole-of-system operational performance framework that can help operators find and evaluate solutions to maximise throughput, reduce wait times and mitigate flow-on effects. Innovative video analytics and Bayesian network-based tools will be integrated to address the challenges of adaptability and uncertainty.