Improving human reasoning with causal Bayes networks: a multimodal approach. This project aims to improve human causal and probabilistic reasoning about complex systems by taking a user-centric, multimodal, interactive approach. The project will explore new integrated visual and verbal ways of explaining a causal probabilistic model and its reasoning, to reduce known human reasoning difficulties, and investigate how to reduce cognitive load by prioritising the most useful user- and context-specific information. Expected outcomes include novel AI methods that empower users to drive the reasoning process and strengthen trust in the system's reasoning. Performance will be assessed in medical and legal domains, with significant potential benefits to end users from better, more transparent reasoning and decision making.
Explaining the outcomes of complex computational models. This project aims to develop new algorithms that automatically generate explanations for the results produced by complex computational models. In recent years, these models have become increasingly accurate, and hence pervasive. However, the reasoning of Deep Neural Networks, Bayesian Networks, complex Regression models and Decision Trees is often unclear, impairing effective decision making by practitioners who use these models' results or investigate the decisions the systems make. Practical benefits of clear reasoning by complex computational models include reduced risk, increased productivity and revenue, appropriate adoption of technologies, improved education for practitioners, and improved outcomes for end users. Significant benefits will be demonstrated through evaluations with practitioners in healthcare and energy.
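As a minimal sketch of what "automatically generating an explanation" can mean for one of the model classes named above, the snippet below walks a tiny hand-built decision tree and renders its decision path as a plain-language sentence. The tree, feature names, and thresholds are assumed examples for an energy-use setting, not the project's algorithms.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None     # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None     # branch taken when value <= threshold
    right: Optional["Node"] = None    # branch taken when value > threshold
    label: str = ""                   # prediction stored at a leaf

def explain(node: Node, sample: dict) -> str:
    """Walk the tree for `sample`, recording each test as a readable clause."""
    clauses = []
    while node.feature is not None:
        value = sample[node.feature]
        if value <= node.threshold:
            clauses.append(f"{node.feature} = {value} (<= {node.threshold})")
            node = node.left
        else:
            clauses.append(f"{node.feature} = {value} (> {node.threshold})")
            node = node.right
    return f"Predicted '{node.label}' because " + " and ".join(clauses) + "."

# An assumed two-level tree for an illustrative energy-demand example.
tree = Node("temperature", 18.0,
            left=Node(label="low demand"),
            right=Node("occupancy", 50.0,
                       left=Node(label="medium demand"),
                       right=Node(label="high demand")))

if __name__ == "__main__":
    print(explain(tree, {"temperature": 23.5, "occupancy": 80}))
```

For the sample shown, the sentence traces both tests on the path to the leaf. Decision trees are the easiest of the four model classes to explain this way; extending comparable transparency to Deep Neural Networks and complex Regression models is where the project's new algorithms are needed.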