Making human place knowledge digestible by computers. This project aims to develop the tools that will enable people to interact intuitively with computers about places and the relations between places. People understand their environment in a different way to computers; they think of places and their relations, while computers use coordinates and maps. People’s interaction with maps is cognitively costly and error-prone, which is becoming untenable in situations needing time-critical decision making. The project will revolutionise the design of information services where computers deal with humans and location in time-critical or stressful situations, including emergency calls, disaster response and local search queries. The uptake of this design by industry will lead to economic benefits as well as a safer society living in a smarter environment.
Beyond the grammar checker: automated copy-editing assistance. In the traditional publishing process, copy-editors correct and polish what authors write, but financial pressures mean that copy-editing is often considered a luxury. This project uses natural language processing and artificial intelligence techniques to develop technology that automates a significant proportion of the copy-editing task.
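At the simplest end of what such a system might automate is detecting accidentally doubled words ("the the"), a classic copy-editing fix. The sketch below is purely illustrative of the task shape, not the project's actual method:

```python
import re

# Illustrative sketch only: flag accidentally doubled words such as
# "the the". A real copy-editing system would use far richer NLP
# models; this shows only the simplest class of automatable check.
def find_doubled_words(text):
    """Return (position, word) pairs for immediately repeated words."""
    pattern = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)
    return [(m.start(), m.group(1)) for m in pattern.finditer(text)]

hits = find_doubled_words("The editor fixed the the draft before it shipped.")
```

The backreference `\1` with `re.IGNORECASE` also catches case-varying repeats such as "The the" across a sentence boundary.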
Learning Deep Semantics for Automatic Translation between Human Languages. This project seeks to integrate deep linguistics and deep learning to improve translation quality. The modern world relies increasingly on automatic translation of human languages to deal with billions of documents. Current translation systems struggle with complex texts and often produce misleading or incoherent outputs. Furthermore, they translate sentences independently and ignore their overall document-wide context. This project seeks to address these issues by developing a new approach using semantics – the underlying meaning of the text – to drive translation, both as discrete structures and continuous representations learned via deep learning. This approach is expected to improve translation quality, and hence the usefulness of automatic translation for end-users.
Incremental syntactic parsing and coreference resolution. As computers become smaller, keyboards and screens become increasingly impractical. We'd like to be able to talk to our computers, but they'd have to understand what we say. This project will develop a computational model that tracks which things are talked about and identifies 'who did what to whom' in text or speech.
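A crude baseline for the coreference half of this task is the recency heuristic: link each pronoun to the most recently mentioned entity, processing the input left to right as an incremental system would. The sketch below is a naive illustration (not the project's model) and assumes the input is already tagged with coarse part-of-speech labels:

```python
# Naive recency-baseline coreference sketch (illustration only; the
# project's incremental model would be far more sophisticated).
# Input: (word, tag) pairs; each pronoun is linked to the most
# recent preceding noun, in a single left-to-right pass.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def resolve_pronouns(tagged_tokens):
    links = {}          # pronoun position -> antecedent word
    last_noun = None
    for i, (word, tag) in enumerate(tagged_tokens):
        if tag == "NOUN":
            last_noun = word
        elif word.lower() in PRONOUNS and last_noun is not None:
            links[i] = last_noun
    return links

sentence = [("Kim", "NOUN"), ("fed", "VERB"), ("the", "DET"),
            ("cat", "NOUN"), ("because", "CONJ"), ("it", "PRON"),
            ("meowed", "VERB")]
links = resolve_pronouns(sentence)
```

The recency baseline fails on exactly the hard cases that motivate the project ("Kim fed the cat because she was worried"), which is why joint syntactic and semantic modelling is needed.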
Improved syntactic and semantic analysis for natural language processing. This project aims to improve the accuracy of syntactic and semantic analysis of natural language for automatic extraction of meaning from text. Many data mining and information extraction applications rely on syntactic and semantic analysis. Current analysis approaches are limited because they require expensive manually-labelled data. The project plans to develop new indirectly-supervised approaches to overcome this labelled data bottleneck. By integrating information from large text corpora and structured databases, the project aims to minimise the reliance on manually-labelled data for training natural language processing systems. Automatic methods for syntactic and semantic analysis would have a wide range of applications in extracting information from large collections of unstructured data, such as hospital patient records or social media.
Language engineering in the field: preserving 100 endangered languages in New Guinea. Efforts to preserve the world's endangered linguistic heritage are labour-intensive, and unable to keep up with the pace of language loss. This project investigates a new approach to language preservation, using techniques from language engineering, and leveraging the labour of mother-tongue speakers.
Computational models of synergies in human language acquisition. How do children learn language? Do they first learn to recognise words and then associate words with meanings, or do they use the meanings to figure out what the words are, or do they do both at the same time, and if so, how? This project will investigate questions like these using advanced computational models of the way children learn from their environment.
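One family of strategies such models can compare is cross-situational learning, where the child accumulates word–meaning co-occurrence statistics across many individually ambiguous scenes and the correct mappings emerge from the aggregate. A minimal sketch of that idea, with invented toy data:

```python
from collections import defaultdict

# Toy cross-situational word learner (illustrative sketch, not any
# project model). Each learning episode pairs the words of an
# utterance with the candidate meanings visible in the scene; the
# learner counts co-occurrences and maps each word to the meaning
# it co-occurred with most often.
def learn_word_meanings(episodes):
    counts = defaultdict(lambda: defaultdict(int))
    for words, meanings in episodes:
        for w in words:
            for m in meanings:
                counts[w][m] += 1
    return {w: max(ms, key=ms.get) for w, ms in counts.items()}

episodes = [
    (["the", "dog", "runs"], ["DOG", "RUN"]),
    (["the", "dog", "barks"], ["DOG", "BARK"]),
    (["the", "cat", "runs"], ["RUN", "CAT"]),
]
lexicon = learn_word_meanings(episodes)
```

No single episode disambiguates "dog", but across episodes DOG co-occurs with it more often than any rival meaning; the interesting scientific questions are how children combine this statistical evidence with the synergies the project names, such as using word segmentation and meaning inference to constrain each other.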
Responding to requests and situations in assistive computer systems - a decision-theoretic approach. This project aims to enable computer agents to respond appropriately to people's spoken requests and circumstances (e.g., ask questions or perform actions). This project will investigate computational models for response generation, which will be implemented in assistive computer systems, thus enabling people to interact more easily with these systems.
Towards realistic verbal interactions between people and computers - a probabilistic approach. This project aims to facilitate natural spoken interactions between people and computer systems, addressing obstacles to the acceptance of these systems. We will investigate computational models for relevant aspects of spoken dialogue, which will be implemented in computer systems for diverse tasks (for example, home devices and phone-enabled services).
Explaining the outcomes of complex computational models. This project aims to develop new algorithms that automatically generate explanations for the results produced by complex computational models. In recent times, these models have become increasingly accurate, and hence pervasive. However, the reasoning of Deep Neural Networks and Bayesian Networks, and of complex Regression models and Decision Trees, is often unclear, impairing effective decision making by practitioners who use the results of these models or investigate the decisions made by these systems. Making the reasoning of these models clear offers practical benefits, including reduced risk, increased productivity and revenue, appropriate adoption of technologies, improved education for practitioners, and improved outcomes for end users. Significant benefits will be demonstrated through evaluations with practitioners in the areas of healthcare and energy.
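A simple, model-agnostic flavour of such explanations is feature ablation: score each input feature by how much the model's output changes when that feature is replaced with a neutral baseline value. The sketch below uses a hypothetical black-box model and invented values; the project's algorithms would go well beyond this:

```python
# Feature-ablation explanation sketch (illustrative only). Scores
# each feature of input x by the drop in model output when that
# feature is replaced by a baseline value; large-magnitude scores
# mark the features the prediction depends on most.
def ablation_explanation(model, x, baseline):
    base_out = model(x)
    scores = {}
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        scores[i] = base_out - model(perturbed)
    return scores

# Hypothetical black-box model: here just a weighted sum, so the
# scores recover the weights exactly.
model = lambda x: 3.0 * x[0] + 0.5 * x[1] - 2.0 * x[2]
scores = ablation_explanation(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

For a linear model the scores recover the weights, which makes the sketch easy to sanity-check; the open research problem the project targets is producing explanations this legible for models whose reasoning is genuinely opaque.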