Stochastic majorisation-minimisation algorithms for data science. The changing nature of data acquisition and storage has made drawing inference with traditional statistical and machine learning methods infeasible. Modern data are often acquired in real time, arrive incrementally, and are often available in too large a volume to process on conventional machinery. The project proposes to study the family of stochastic majorisation-minimisation algorithms for computing inferential quantities in an incremental manner. The proposed stochastic algorithms encompass and extend a wide variety of current algorithmic frameworks for fitting statistical and machine learning models, and can be used to produce feasible and practical algorithms for complex models, both current and future.
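As a purely illustrative sketch (not the project's own method), the core stochastic majorisation-minimisation idea can be shown on one-dimensional ridge-regularised logistic regression: each sample's loss is replacedced by nothing more than a quadratic majorising surrogate, and a running average of surrogates is minimised in closed form at every step. All function names, data and constants below are invented for the example.

```python
import math
import random

def smm_logistic(data, passes=50, lam=0.1, seed=0):
    """Stochastic MM for 1-D ridge-regularised logistic regression.

    Each sample's loss l(t) = log(1 + exp(-y*x*t)) + lam*t^2/2 is majorised
    at the current iterate by a quadratic with curvature x^2/4 + lam
    (the logistic curvature is bounded by x^2/4), and the minimiser of the
    running average of surrogates is taken as the next iterate.
    """
    rng = random.Random(seed)
    theta, A, B, t = 0.0, 0.0, 0.0, 0
    for _ in range(passes):
        for x, y in rng.sample(data, len(data)):  # one pass, shuffled
            t += 1
            rho = 1.0 / t                       # averaging weight
            margin = y * x * theta
            grad = -y * x / (1.0 + math.exp(margin)) + lam * theta
            c = x * x / 4.0 + lam               # surrogate curvature bound
            z = theta - grad / c                # minimiser of this surrogate
            A = (1 - rho) * A + rho * c         # averaged quadratic term
            B = (1 - rho) * B + rho * c * z     # averaged linear term
            theta = B / A                       # minimise averaged surrogate
    return theta

# toy separable data: label is the sign of x
data = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]
theta = smm_logistic(data)
```

Because each update minimises an averaged surrogate rather than taking a raw gradient step, no step-size tuning is needed; the curvature bound plays that role.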
Large scale nonsmooth, nonconvex optimisation. This project aims to develop, analyse, test and apply (sub)gradient-based methods for solving large-scale nonsmooth, nonconvex optimisation problems. Large-scale problems with complex nonconvex objective and/or constraint functions are among the most difficult in optimisation. This project will generate new knowledge in numerical optimisation and machine learning. Exploiting the structure and sparsity of large-scale problems will lead to the development of better models, and to more accurate and robust methods. The expected outcomes of the project are ready-to-implement numerical methods for solving large-scale, nonsmooth, nonconvex optimisation problems, as well as problems in machine learning and regression analysis.
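As a toy illustration of the (sub)gradient-based schemes mentioned above (a hypothetical example, not the project's methods), here is the basic subgradient method with diminishing step sizes applied to a simple nonsmooth function; the objective and all names below are invented:

```python
import math

def f(x):
    # nonsmooth objective with kinks at x = 1 and x = -2; minimised at x = 1
    return abs(x - 1) + 0.5 * abs(x + 2)

def subgrad(x):
    # one valid subgradient of f at x (sign(0) taken as 0)
    sign = lambda u: (u > 0) - (u < 0)
    return sign(x - 1) + 0.5 * sign(x + 2)

def subgradient_method(x0, steps=5000):
    x, best = x0, x0
    for t in range(1, steps + 1):
        x = x - subgrad(x) / math.sqrt(t)   # diminishing step a_t = 1/sqrt(t)
        if f(x) < f(best):                  # track the best iterate:
            best = x                        # f need not decrease monotonically
    return best

x_star = subgradient_method(x0=10.0)
```

The best-iterate bookkeeping is the hallmark of subgradient methods: unlike gradient descent on smooth functions, individual steps can increase the objective, so convergence guarantees are stated for the best (or averaged) iterate.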
Discovery Early Career Researcher Award - Grant ID: DE200100063
Funder
Australian Research Council
Funding Amount
$394,398.00
Summary
Nonmonotone Algorithms in Operator Splitting, Optimisation and Data Science. This project aims to develop the mathematical foundations for the analysis and development of optimisation algorithms used in data science. Despite their now ubiquitous use, machine learning software packages routinely rely on a number of algorithms from mathematical optimisation which are not properly understood. By moving beyond the traditional realms of Fejér monotone algorithms, this project expects to develop the mathematical theory required to rigorously justify the use of such algorithms and thereby ensure the integrity of the decision tools they produce. This mathematical framework is also expected to produce new algorithms for optimisation which benefit consumers of data science such as the health-care and cybersecurity sectors.
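To make "operator splitting" concrete: a minimal sketch of the classical Douglas-Rachford splitting iteration, the Fejér monotone setting that this project proposes to move beyond, on the toy problem min |x| + (x - 3)^2/2. Everything below is illustrative and not part of the project.

```python
def prox_abs(z, gamma):
    # proximal operator of gamma*|x| (soft-thresholding)
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def prox_quad(z, gamma, a=3.0):
    # proximal operator of gamma*(x - a)^2/2
    return (z + gamma * a) / (1.0 + gamma)

def douglas_rachford(z=0.0, gamma=1.0, iters=200):
    """Douglas-Rachford splitting for min |x| + (x-3)^2/2.

    Only the two proximal maps are used; the sum is never touched directly.
    The true minimiser is x = 2 (where the quadratic's pull of 1 balances
    the unit slope of |x|).
    """
    for _ in range(iters):
        x = prox_abs(z, gamma)
        y = prox_quad(2 * x - z, gamma)
        z = z + y - x          # the governing (Fejér monotone) sequence
    return prox_abs(z, gamma)

x_star = douglas_rachford()    # approximately 2.0
```

The sequence z is the one with the Fejér monotonicity property (its distance to every fixed point is non-increasing); the project's "nonmonotone" setting concerns algorithms where no such sequence is available.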
Discovery Early Career Researcher Award - Grant ID: DE180100923
Funder
Australian Research Council
Funding Amount
$348,575.00
Summary
Efficient second-order optimisation algorithms for learning from big data. This project aims to apply a diverse range of scientific computing techniques to design and implement new, second-order methods that can surpass first-order alternatives in the next generation of optimisation methods for large-scale machine learning (ML). Scalable optimisation methods are now an integral part of ML in the presence of “big data”. While the development of efficient first-order methods has grown in the ML community, second-order alternatives have largely been ignored. The project expects to facilitate the development of more effective ML algorithms for extraction of knowledge from large data sets.
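One simple way second-order information can be made scalable, sketched here only as a hypothetical illustration (not this project's algorithms), is to pair the full gradient with a Hessian estimated from a random mini-batch, i.e. a subsampled Newton step. The data, names and constants below are invented.

```python
import math
import random

def subsampled_newton(data, iters=15, batch=8, lam=0.1, seed=1):
    """Newton's method with a subsampled Hessian for 1-D ridge-regularised
    logistic regression: exact gradient, cheap Hessian estimate."""
    rng = random.Random(seed)
    theta = 0.0
    n = len(data)
    for _ in range(iters):
        # full gradient of the regularised logistic loss
        g = lam * theta
        for x, y in data:
            g += -y * x / (1.0 + math.exp(y * x * theta)) / n
        # Hessian estimated from a random mini-batch
        # (per-sample curvature x^2 * p * (1 - p) with p = sigmoid(x*theta))
        h = lam
        for x, y in rng.choices(data, k=batch):
            p = 1.0 / (1.0 + math.exp(-x * theta))
            h += x * x * p * (1 - p) / batch
        theta -= g / h      # Newton step with the estimated curvature
    return theta

data = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]
theta = subsampled_newton(data)
```

The design point: the gradient costs a full pass over the data per iteration, but the expensive curvature information is amortised over a small sample, which is what makes second-order steps viable at scale.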
Beyond black-box models: interaction in eXplainable Artificial Intelligence. This project addresses a key issue in automated decision making: explaining to its users how a decision was reached by a computer system. Its aim is to progress towards a new generation of explainable decision models which match the performance of current black-box systems while allowing for transparency and detailed interpretation of the underlying logic. This project expects to generate new knowledge in modelling interdependencies of decision criteria using recent advances in the theory of capacities. The expected outcomes are sophisticated but tractable models in which mutual dependencies of decision rules and criteria are treated explicitly and can be thoroughly evaluated.
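The "theory of capacities" here refers to non-additive set functions over the criteria, and the standard aggregation tool built on them is the Choquet integral. A minimal sketch, with a made-up capacity modelling two partially redundant criteria (the specific numbers are illustrative only):

```python
def choquet(scores, capacity):
    """Choquet integral of criterion scores with respect to a capacity.

    `capacity` maps frozensets of criterion names to weights, with
    capacity(empty set) = 0 and capacity(all criteria) = 1.  Unlike a
    weighted mean, the weight of a criterion depends on which coalition
    of criteria it appears in, which is how interaction is modelled.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # increasing score
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (name, value) in enumerate(items):
        coalition = frozenset(names[i:])   # criteria scoring at least `value`
        total += (value - prev) * capacity[coalition]
        prev = value
    return total

# Two redundant criteria A and B: each alone carries weight 0.6, but
# together only 1.0 (< 0.6 + 0.6), expressing their overlap.
cap = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.6,
    frozenset({"B"}): 0.6,
    frozenset({"A", "B"}): 1.0,
}
v = choquet({"A": 0.8, "B": 0.5}, cap)   # 0.5*1.0 + (0.8-0.5)*0.6 = 0.68
```

With an additive capacity this reduces exactly to a weighted mean; the sub- or super-additive cases are what encode redundancy or synergy between criteria.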
Discovery Early Career Researcher Award - Grant ID: DE240100674
Funder
Australian Research Council
Funding Amount
$370,237.00
Summary
New Frontiers in Large-Scale Polynomial Optimisation. Polynomial optimisation is ubiquitous in many areas of engineering and applied mathematics. The mathematical methods and algorithms used for polynomial problems of large size are not sufficiently developed, limiting their applicability to real-world problems. This project aims to develop a mathematical foundation and computational methods for large-scale polynomial optimisation. By using an innovative combination of a novel theory of algebraic geometry and convex optimisation, this project expects to generate new knowledge and tools for solving these problems. Anticipated outcomes include a new generation of large-scale optimisation technologies, providing significant benefit to Australia's industries and international research standing.
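For intuition only, a toy stand-in for polynomial optimisation in one variable: the global minimum on an interval can be found by bracketing the sign changes of the derivative. The large-scale multivariate methods this project targets (e.g. convex relaxations) are entirely different; everything below is invented for illustration.

```python
def bisect_root(f, a, b):
    # bisection for a root of f in [a, b], assuming a sign change on [a, b]
    fa = f(a)
    for _ in range(200):
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return (a + b) / 2

def minimise_poly(coeffs, lo=-10.0, hi=10.0, grid=1000):
    """Global minimum of a univariate polynomial on [lo, hi].

    `coeffs` lists coefficients in increasing degree.  Critical points are
    found as roots of the derivative, bracketed on a uniform grid.
    """
    def p(x):
        return sum(c * x ** i for i, c in enumerate(coeffs))

    def dp(x):
        return sum(i * c * x ** (i - 1) for i, c in enumerate(coeffs) if i > 0)

    candidates = [lo, hi]
    xs = [lo + (hi - lo) * k / grid for k in range(grid + 1)]
    for a, b in zip(xs, xs[1:]):
        if dp(a) * dp(b) <= 0:                 # derivative changes sign here
            candidates.append(bisect_root(dp, a, b))
    best = min(candidates, key=p)
    return best, p(best)

# p(x) = x^4 - 3x^2 + x, whose global minimum lies near x = -1.30
x_min, p_min = minimise_poly([0.0, 1.0, -3.0, 0.0, 1.0])
```

Even this toy already shows why the problem is hard: a quartic has multiple local minima, and exhaustively bracketing critical points does not scale beyond one variable, which is precisely the gap large-scale methods address.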