Australian Laureate Fellowships - Grant ID: FL110100281
Funder
Australian Research Council
Funding Amount
$2,777,066.00
Summary
Large-scale statistical machine learning. This research program aims to develop the science behind statistical decision problems as varied as web retrieval, genomic data analysis and financial portfolio optimisation. Advances will have a very significant practical impact in the many areas of science and technology that need to make sense of large, complex data streams.
A Novel Approach to Semi-Supervised Statistical Machine Learning. Recent successes in the construction of classifiers for making diagnoses and predictions are due in part to their use of large amounts of data labelled with respect to class of origin. Typically, however, labelled data are scarce while unlabelled data are plentiful. The goal of semi-supervised learning (SSL) is to leverage large amounts of unlabelled data to improve the performance obtainable from only a small labelled dataset, so SSL is of paramount importance in applications where it is expensive or impractical to obtain much labelled data. The project will develop a novel SSL approach that adopts a missingness mechanism for the missing labels to build a classifier whose accuracy is not only improved, but can even exceed that obtained if the missing labels had been known.
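As a minimal illustration of the SSL idea (a generic mixture-model sketch, not the project's missingness-mechanism method), the EM algorithm below fits a two-component Gaussian classifier using a handful of labelled points plus many unlabelled ones: labelled points keep hard assignments, unlabelled points contribute through their posterior probabilities.

```python
import numpy as np

def ssl_gmm_1d(x_lab, y_lab, x_unl, n_iter=50):
    """EM for a two-class 1-D Gaussian mixture classifier trained on
    both labelled (x_lab, y_lab in {0,1}) and unlabelled (x_unl) data.
    Illustrative sketch of semi-supervised learning only."""
    x = np.concatenate([x_lab, x_unl])
    n_lab = x_lab.size
    # initialise parameters from the labelled data alone
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    sd = np.array([x_lab[y_lab == k].std() + 1e-6 for k in (0, 1)])
    pi = np.array([np.mean(y_lab == 0), np.mean(y_lab == 1)])
    r = np.zeros((x.size, 2))
    for _ in range(n_iter):
        # E-step: posterior responsibilities for every point...
        for k in (0, 1):
            r[:, k] = pi[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
        r /= r.sum(axis=1, keepdims=True)
        # ...but known labels stay as hard (one-hot) assignments
        r[:n_lab] = np.eye(2)[y_lab]
        # M-step: weighted updates of proportions, means, and spreads
        nk = r.sum(axis=0)
        pi = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mu, sd, pi

# 10 labelled points, 400 unlabelled points from the same two classes
rng = np.random.default_rng(0)
x_lab = np.concatenate([rng.normal(-2, 1, 5), rng.normal(2, 1, 5)])
y_lab = np.array([0] * 5 + [1] * 5)
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
mu, sd, pi = ssl_gmm_1d(x_lab, y_lab, x_unl)
```

With only 10 labelled points the class means are poorly estimated; the 400 unlabelled points pull them close to the true values of -2 and 2.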
Advanced Mixture Models for the Analysis of Modern-Day Data. Extracting key information from huge data sets is critical to the scientific successes of the future. This project will develop novel mixture models that can be applied directly to complex and high-dimensional data sets that may consist of thousands of variables observed on only a limited number of entities. To handle the challenging problems arising in this situation, the project develops mixtures of factor models, with options for skew distributions, that can be used to analyse such data effectively. Key applications include the domains of bioinformatics, biostatistics, business, data mining, economics, finance, image analysis, marketing, and personalised medicine, among many others.
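The factor model is what makes this feasible when the number of variables p is large and the number of entities n small: a p x p covariance matrix is replaced by a low-rank-plus-diagonal form ΛΛ' + diag(Ψ), needing only p(q+1) parameters. The sketch below fits a single factor analyser by EM (the building block of a mixture of factor analysers; an illustrative sketch, not the project's skew extensions).

```python
import numpy as np

def fit_factor_analyser(X, q, n_iter=300):
    """EM for a single factor analyser x = mu + Lambda z + e, with
    z ~ N(0, I_q) and e ~ N(0, diag(Psi)). Illustrative sketch."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                # sample covariance
    rng = np.random.default_rng(0)
    Lam = 0.1 * rng.normal(size=(p, q))              # loadings (p x q)
    Psi = np.diag(S).copy()                          # specific variances
    for _ in range(n_iter):
        # E-step: posterior regression of factors on data
        Sigma = Lam @ Lam.T + np.diag(Psi)
        beta = Lam.T @ np.linalg.inv(Sigma)          # q x p
        Ezz = np.eye(q) - beta @ Lam + beta @ S @ beta.T
        # M-step: update loadings and specific variances
        Lam = S @ beta.T @ np.linalg.inv(Ezz)
        Psi = np.maximum(np.diag(S - Lam @ beta @ S), 1e-8)
    return Lam, Psi

# simulate 500 observations on p = 5 variables from a q = 2 factor model
rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 2))
L_true = rng.normal(size=(5, 2))
X = Z @ L_true.T + rng.normal(scale=0.3, size=(500, 5))
Lam, Psi = fit_factor_analyser(X, 2)
Sigma_hat = Lam @ Lam.T + np.diag(Psi)
```

The fitted low-rank covariance Sigma_hat closely matches the sample covariance while using far fewer parameters.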
Joint clustering and matching of multivariate samples across objects. The project will provide a novel and very effective approach to the clustering of multivariate samples on objects, say patients, that automatically matches the sample clusters across the objects. A key application is the matching of biologically relevant cell subtypes across patients for use in the study and the clinical diagnosis and prognosis of cancer.
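The matching step can be pictured with a toy sketch: given cluster centroids estimated separately for two patients, find the permutation pairing each cluster in one patient with its counterpart in the other. This brute-force version (hypothetical illustration only; the project's approach performs clustering and matching jointly) is adequate for a small number of clusters.

```python
import numpy as np
from itertools import permutations

def match_clusters(centroids_a, centroids_b):
    """Match the k cluster centroids of one sample (e.g. patient) to
    those of another by minimising total squared distance over all
    permutations. Brute force; fine for small k."""
    k = len(centroids_a)
    best, best_cost = None, np.inf
    for perm in permutations(range(k)):
        cost = sum(np.sum((centroids_a[i] - centroids_b[j]) ** 2)
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = cluster in b matched to cluster i in a

# three cell-subtype centroids per patient, listed in different orders
ca = np.array([[0.0, 0.0], [5.0, 5.0], [-3.0, 4.0]])
cb = np.array([[5.1, 4.9], [-3.2, 4.1], [0.2, -0.1]])
perm = match_clusters(ca, cb)
```

Here the matching correctly pairs each subtype across the two patients despite the permuted cluster labels.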
Expanding the role of mixture models in statistical analyses of big data. This project aims to develop theoretical procedures to scale inference and learning algorithms to big data sets. It will develop analytic tools and algorithms for big data sets that classical methods of inference cannot analyse directly because of the data's complexity or size. This will accelerate the progress of scientific discovery and innovation, leading, for example, to new fields of inquiry; to an increased understanding from studies of human and social processes and interactions; and to the promotion of economic growth and improved health and quality of life. Such applications should lead to breakthrough discoveries and innovation in science, engineering, medicine, commerce, education and national security.
A new approach to fast matrix factorization for the statistical analysis of high-dimensional data. Some form of dimension reduction is essential in order to extract meaningful information from huge data sets. For this purpose we provide a novel and very fast approach to the factorization of the data matrix. It has wide applicability for improving the quality and validity of research in science and medicine and in most industries in Australia.
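A standard baseline for such factorisations (not the project's novel method) is the truncated singular value decomposition, which by the Eckart-Young theorem gives the best rank-r approximation of the data matrix in Frobenius norm:

```python
import numpy as np

def truncated_svd(X, r):
    """Rank-r factorisation X ~ A @ B via the SVD: A holds the n x r
    scores, B the r x p loadings. Baseline illustration only."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :r] * s[:r]     # scale left singular vectors by singular values
    B = Vt[:r]               # top-r right singular vectors as loadings
    return A, B

# a 20 x 8 data matrix of exact rank 3 is recovered perfectly at r = 3
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 8))
A, B = truncated_svd(X, 3)
```

For a data matrix of rank at most r, the factorisation reconstructs X exactly; for higher-rank data it yields the optimal rank-r compression.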
Large-Scale Statistical Inference: Multiple Testing. Multiple testing procedures are among the most important statistical tools for the analysis of modern data. This project aims to develop new methods that provide more powerful simultaneous tests while controlling the proportion of false positive conclusions. These methods are to be derived by the novel pooling of information across individual attribute-based contrasts to produce a Weighted Individual attribute-Specific Contrast (WISC) statistic; they will also exploit contextual information. They are expected to apply directly to the problem of testing for no differences between two or more classes, as in the detection of differential expression in bioinformatics. Other key applications are expected to include biomedicine, economics, finance, genetics, and neuroscience.
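The classical baseline that such weighted, context-aware procedures aim to improve upon is the Benjamini-Hochberg step-up procedure, which controls the false discovery rate (the expected proportion of false positives among the rejections) at a chosen level:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses
    with the k smallest p-values, where k is the largest i such that
    p_(i) <= alpha * i / m. Controls the FDR at level alpha for
    independent tests."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing the step-up
        reject[order[:k + 1]] = True
    return reject

# three small p-values are rejected; the two large ones are not
rej = benjamini_hochberg([0.01, 0.02, 0.03, 0.5, 0.9], alpha=0.05)
```

Note that the third hypothesis (p = 0.03) is rejected even though 0.03 exceeds the Bonferroni cut-off of 0.01, illustrating the power gain of FDR control over family-wise error control.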
Stochastic majorisation-minimisation algorithms for data science. The changing nature of data acquisition and storage has made drawing inference with traditional statistical and machine learning methods infeasible. Modern data are often acquired in real time, in an incremental fashion, and in volumes too large to process on conventional machinery. The project proposes to study the family of stochastic majorisation-minimisation algorithms for computing inferential quantities in an incremental manner. The proposed stochastic algorithms encompass and extend a wide variety of current algorithmic frameworks for fitting statistical and machine learning models, and can be used to produce feasible and practical algorithms for complex models, both current and future.
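A textbook instance of the idea (an illustrative sketch under simple assumptions, not the project's algorithms) is estimating a median, where |x - m| is majorised at the current iterate by a quadratic whose minimiser is a weighted mean; the stochastic variant applies each surrogate minimisation to a random mini-batch and blends the result in with a decaying step size:

```python
import numpy as np

def stochastic_mm_median(x, batch=32, n_iter=500, seed=0):
    """Stochastic majorisation-minimisation sketch for the median.
    Each iteration majorises sum |x_i - m| on a mini-batch by a
    quadratic surrogate, minimises it (a weighted mean), and blends
    the result in with a Robbins-Monro style decaying step size."""
    rng = np.random.default_rng(seed)
    m = float(x[rng.integers(x.size)])           # start at a random point
    for t in range(1, n_iter + 1):
        xb = x[rng.integers(0, x.size, size=batch)]
        w = 1.0 / (np.abs(xb - m) + 1e-8)        # MM weights from the majoriser
        m_batch = np.sum(w * xb) / np.sum(w)     # minimiser of the surrogate
        gamma = 1.0 / t ** 0.6                   # decaying step size
        m = (1 - gamma) * m + gamma * m_batch
    return m

# 5000 skewed observations, processed 32 at a time
rng = np.random.default_rng(3)
x = rng.exponential(1.0, size=5000)
m_hat = stochastic_mm_median(x)
```

No iteration ever touches more than 32 of the 5000 observations, yet the estimate settles near the sample median, which is the appeal for data too large to process at once.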
System to synapse. Biological tissue is studied at the cellular and organ level with ever increasing clarity and sensitivity, but there are limitations in understanding how microscopic changes are manifested in the organ and vice versa. This project will develop new methods to bridge this gap and allow next generation correlative imaging.
Large Markov decision processes and combinatorial optimisation. Markov decision processes continue to gain popularity for modelling a wide range of applications, from the analysis of supply chains and queueing networks to cognitive science and the control of autonomous vehicles. Nonetheless, they quickly become numerically intractable as the size of the model grows. Recent work uses machine learning techniques to overcome this crucial issue, but with no convergence guarantees. This project aims to provide theoretically sound frameworks for solving large Markov decision processes, and to exploit them to solve important combinatorial optimisation problems. This timely project can promote Australia's position in the development of such novel frameworks for many scientific and industrial applications.
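The intractability is easiest to see from the exact method: value iteration sweeps every state on every pass, so its cost grows with the state space, which is precisely what motivates approximate, learning-based solvers. A minimal sketch on a two-state toy MDP (illustrative only):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Exact value iteration for a small MDP with transition tensor
    P[a, s, s'] and rewards R[a, s]. Each sweep updates the value of
    every state, which is what stops scaling as the state space grows."""
    n_a, n_s, _ = P.shape
    V = np.zeros(n_s)
    while True:
        Q = R + gamma * P @ V          # Q[a, s]: action values
        V_new = Q.max(axis=0)          # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy MDP: action 0 stays put, action 1 switches state;
# every action taken in state 1 earns reward 1
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])
V, policy = value_iteration(P, R)
```

The optimal policy switches from state 0 into state 1 and then stays, with values V = (9, 10) from the geometric series of discounted rewards.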