Stochastic majorization–minimization algorithms for data science. The changing nature of data acquisition and storage has made drawing inference infeasible with traditional statistical and machine learning methods. Modern data are often acquired in real time, arrive incrementally, and are often available in too large a volume to process on conventional machinery. The project proposes to study the family of stochastic majorisation–minimisation algorithms for computing inferential quantities in an incremental manner. The proposed stochastic algorithms encompass and extend a wide variety of current algorithmic frameworks for fitting statistical and machine learning models, and can be used to produce feasible and practical algorithms for complex models, both current and future.
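The incremental surrogate-minimisation idea behind this family of algorithms can be sketched on a toy problem. This is an illustrative sketch only, in the style of averaged-surrogate stochastic MM applied to logistic regression; the data, sizes, and variable names are hypothetical assumptions, not taken from the project.

```python
import numpy as np

# Synthetic logistic-regression data (illustrative, not from the project).
rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each per-sample loss f_i(w) = log(1 + exp(x_i.w)) - y_i x_i.w is majorised
# at the current iterate by a quadratic surrogate with curvature L (the
# logistic Hessian is bounded by ||x_i||^2 / 4). At each step we draw one
# sample, form its surrogate, fold it into a running average of past
# surrogates, and move to the averaged surrogate's exact minimiser.
L = np.sum(X**2, axis=1).max() / 4.0
w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)
    grad = (sigmoid(X[i] @ w) - y[i]) * X[i]  # gradient of f_i at w
    c = w - grad / L                          # minimiser of the new surrogate
    rho = 1.0 / np.sqrt(t)                    # surrogate-averaging weight
    w = (1.0 - rho) * w + rho * c             # minimise the averaged surrogate
```

With a common curvature L, the update collapses to w - (rho / L) * grad, i.e. stochastic gradient descent with a decaying step, which illustrates how stochastic MM schemes recover and generalise familiar first-order algorithms as special cases.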
Discovery Early Career Researcher Award - Grant ID: DE180100923
Funder
Australian Research Council
Funding Amount
$348,575.00
Summary
Efficient second-order optimisation algorithms for learning from big data. This project aims to apply a diverse range of scientific computing techniques to design and implement new second-order methods that can surpass first-order alternatives in the next generation of optimisation methods for large-scale machine learning (ML). Scalable optimisation methods are now an integral part of ML in the presence of “big data”. While the development of efficient first-order methods has flourished in the ML community, second-order alternatives have largely been ignored. The project expects to facilitate the development of more effective ML algorithms for extracting knowledge from large data sets.
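One way second-order information can be made affordable at scale is to estimate the Hessian from a random subsample while keeping the full gradient. The following is a minimal sketch of that subsampled-Newton flavour of method; the specific scheme, data, and sizes are illustrative assumptions, not drawn from the project.

```python
import numpy as np

# Synthetic logistic-regression problem (illustrative, not from the project).
rng = np.random.default_rng(1)
n, d = 5000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def loss_grad(w):
    """Full-data logistic loss (numerically stable) and its gradient."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    loss = np.mean(np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z)
    grad = X.T @ (p - y) / n
    return loss, grad

lam = 1e-4  # small ridge term keeps the Hessian estimate invertible
w = np.zeros(d)
for t in range(15):
    _, g = loss_grad(w)
    g = g + lam * w
    # Hessian estimated on a random subsample: second-order curvature
    # information at a fraction of the full O(n d^2) cost.
    idx = rng.choice(n, size=500, replace=False)
    Xs = X[idx]
    ps = 1.0 / (1.0 + np.exp(-Xs @ w))
    W = ps * (1.0 - ps)
    H = (Xs * W[:, None]).T @ Xs / len(idx) + lam * np.eye(d)
    w = w - np.linalg.solve(H, g)  # Newton step with the subsampled Hessian
```

A handful of such curvature-aware steps typically reaches an accurate solution where a plain first-order method would need many thousands of iterations, which is the trade-off this line of work targets.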