Build competency-aware and assuring machine learning systems. Recent developments in machine learning (ML) have produced models with extremely high prediction accuracy. However, to support human-machine partnership in decision-making in complex environments, it is essential that ML systems go beyond accuracy: they must be competency-aware and reliable, while at the same time remaining exploratory. This project aims to develop novel techniques that equip an ML system with the ability to identify its own competency, to justify its competency and decisions, to explore unknown situations, and to fully utilise existing expertise to deal with unknowns. The expected outcomes of the project will enable ML systems to become truly intelligent and reliable machine partners for human decision makers in a wide range of applications.
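To make the idea of a system identifying its own competency concrete, here is a minimal illustrative sketch (not the project's method): a classifier that estimates its confidence via predictive entropy and abstains, deferring to a human partner, when that confidence is too low. The threshold value and the abstention convention (`-1`) are assumptions chosen for the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_competency(logits, entropy_threshold=0.5):
    """Return class predictions, abstaining (-1) when the predictive
    entropy exceeds the threshold -- a crude competency signal.

    `entropy_threshold` is an illustrative value, not a tuned one."""
    probs = softmax(logits)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    preds = probs.argmax(axis=-1)
    preds[entropy > entropy_threshold] = -1  # defer to a human decision maker
    return preds, entropy
```

For example, a sharply peaked logit vector yields a confident prediction, while a near-uniform one triggers abstention; richer competency models (e.g. ensembles or density estimates over inputs) follow the same abstain-or-act pattern.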
Learning to Reason in Reinforcement Learning. Deep Reinforcement Learning (RL) uses deep neural networks to represent and learn optimal decision-making policies for intelligent agents in complex environments. However, most RL approaches require millions of episodes to converge to good policies, which makes applying RL in real-world scenarios resource-intensive and often impractical. This project aims to equip RL with capabilities such as counterfactual reasoning and outcome anticipation to significantly reduce the number of interactions required, improve generalisation, and enable the agent to reason about cause and effect. These improvements would narrow the gap between AI and human capabilities and broaden the adoption of RL in real-world applications.
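As a rough illustration of how imagined "what-if" rollouts can reduce the real interactions an agent needs, here is a classic Dyna-Q sketch on a toy chain environment. This is standard model-based RL, not the project's proposed technique; the environment, hyperparameters, and function name are all assumptions made for the example.

```python
import random

def dyna_q(n_states=5, episodes=30, planning_steps=10,
           alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Dyna-Q on a deterministic chain: states 0..n_states-1,
    actions 0 (left) / 1 (right); reward 1 for reaching the last state.
    After each real step, the agent replays imagined transitions from
    its learned model, cutting the real interactions required."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}
    model = {}  # learned model: (state, action) -> (reward, next_state)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # learning update from the real transition
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
            model[(s, a)] = (r, s2)
            # imagined updates replayed from the learned model
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
            s = s2
    return Q
```

With planning enabled, the value of moving right propagates back through the chain in far fewer real episodes than pure Q-learning would need; counterfactual reasoning aims at a similar effect by asking what outcomes alternative actions would have produced.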