Discovery Early Career Researcher Award - Grant ID: DE200100063
Funder
Australian Research Council
Funding Amount
$394,398.00
Summary
Nonmonotone Algorithms in Operator Splitting, Optimisation and Data Science. This project aims to develop the mathematical foundations for the analysis and development of optimisation algorithms used in data science. Despite their now ubiquitous use, machine learning software packages routinely rely on a number of algorithms from mathematical optimisation which are not properly understood. By moving beyond the traditional realms of Fejér monotone algorithms, this project expects to develop the mathematical theory required to rigorously justify the use of such algorithms and thereby ensure the integrity of the decision tools they produce. This mathematical framework is also expected to produce new algorithms for optimisation which benefit consumers of data science such as the health-care and cybersecurity sectors.
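For context on the property the project moves beyond: Fejér monotonicity is a standard notion from fixed-point theory, and its usual textbook definition can be sketched as follows (the symbols here are generic notation, not taken from the project itself):

```latex
% A sequence (x_k) in a Hilbert space H is Fejér monotone with respect to
% a nonempty set C \subseteq H if every point of C is approached monotonically:
\[
  \|x_{k+1} - c\| \le \|x_k - c\|
  \quad \text{for all } c \in C \text{ and all } k \in \mathbb{N}.
\]
% Classical operator-splitting schemes generate iterates that are Fejér
% monotone with respect to their fixed-point sets, and this property is the
% backbone of their convergence proofs; "nonmonotone" algorithms are those
% whose iterates lack it, which is why new analytical tools are needed.
```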
Beyond black-box models: interaction in eXplainable Artificial Intelligence. This project addresses a key issue in automated decision making: explaining how a decision was reached by a computer system to its users. Its aim is to progress towards a new generation of explainable decision models, which would match the performance of current black-box systems while at the same time allowing for transparency and detailed interpretation of the underlying logic. This project expects to generate new knowledge in modelling interdependencies of decision criteria using recent advances in the theory of capacities. The expected outcomes are sophisticated but tractable models in which mutual dependencies of decision rules and criteria are treated explicitly and can be thoroughly evaluated.
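As background for the "theory of capacities" mentioned above: a capacity (also called a fuzzy measure) has a standard definition in multicriteria decision analysis, sketched below with generic notation not drawn from the project itself:

```latex
% Given a finite set of criteria N = \{1, \dots, n\}, a capacity is a
% set function \mu : 2^N \to [0, 1] that is normalised and monotone:
\[
  \mu(\varnothing) = 0, \qquad \mu(N) = 1, \qquad
  A \subseteq B \subseteq N \implies \mu(A) \le \mu(B).
\]
% Unlike an additive measure, \mu(A \cup B) need not equal
% \mu(A) + \mu(B) for disjoint A and B; this non-additivity is what
% allows capacities to model interdependencies (redundancy or synergy)
% among decision criteria, rather than treating them as independent.
```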