A scalable and portable question-answering system. The current availability of large volumes of digitally stored free text demands the development of methodologies that can automatically find specific answers to user questions about this "unstructured" information. The goal of this project is to develop a scalable, portable and domain-independent real-time natural-language question-answering system that explores the logical content of the text. To achieve this we will fuse current approaches to question answering with approaches that look at the logical content of the questions and answer candidates. A central part of the project will be the characterisation of the optimal logical forms, the determination of efficient methods to create and store sentence logical forms for potentially large volumes of text, and the treatment of difficult questions by incorporating summarisation and text generation techniques.
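To make the idea of matching question and sentence logical forms concrete, here is a purely illustrative sketch; the tuple representation, the predicate name born_in and the matching routine are assumptions for exposition, not the project's actual formalism.

    # Toy predicate-argument "logical forms" and a matcher (illustrative only).
    def matches(question_lf, sentence_lf):
        """Return bindings for question variables (strings starting with '?')
        if the sentence logical form satisfies the question, else None."""
        if question_lf[0] != sentence_lf[0] or len(question_lf) != len(sentence_lf):
            return None
        bindings = {}
        for q_arg, s_arg in zip(question_lf[1:], sentence_lf[1:]):
            if q_arg.startswith("?"):
                bindings[q_arg] = s_arg      # bind the question variable
            elif q_arg != s_arg:
                return None                  # constant mismatch
        return bindings

    # "Mozart was born in Salzburg."  ->  ("born_in", "Mozart", "Salzburg")
    sentence_lf = ("born_in", "Mozart", "Salzburg")
    # "Where was Mozart born?"        ->  ("born_in", "Mozart", "?place")
    question_lf = ("born_in", "Mozart", "?place")

    print(matches(question_lf, sentence_lf))  # {'?place': 'Salzburg'}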
Achieving higher availability of storage subsystems through application of a self-learning expert system. In today's global business environment the management, storage and security of enterprise data (data unavailability, data loss and corruption, systems performance) have become the heart of so-called Enterprise computing. Storage subsystems have increasingly become the critical subcomponent and a single point of failure. Discovering the cause of failure in complex environments involving multiple vendors, machines, software products, topologies and cultures (languages) is in many cases time-consuming and difficult, resulting in unacceptable system downtime and high maintenance costs. A more sophisticated tool is needed that allows the accumulation of knowledge, can deal with complexity and change, can interface with unlike knowledge bases, and can predict solution probability based on experience and feedback. Multi-lingual support, delivered through the development of a natural-language interface, would provide a functional capability suited to managing enterprise data in today's global businesses.
A Layered Controlled Natural Language for Knowledge Representation. In this research project we will develop a controlled natural language for knowledge representation that has the potential to bridge the gap between fragments of natural language and formal languages. The controlled language will be built from a series of increasingly sophisticated layers, each building on those below it by adding expressive power. Sentences of the controlled language will be unambiguously translatable into a corresponding formal language. Anyone who can read and write English can immediately use the controlled language with the help of an intelligent text editor. This technology will make it possible for non-specialists to write problem specifications in terms of the application domain without the need to formally encode the information.
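As an illustration of what "unambiguously translatable into a corresponding formal language" can mean in practice, the sketch below maps one restricted sentence pattern into first-order logic; the pattern, the output notation and the function translate are invented for exposition and are not the project's controlled language.

    # Minimal sketch: one controlled-English pattern mapped to first-order logic.
    import re

    def translate(sentence):
        m = re.fullmatch(r"Every (\w+) owns a (\w+)\.", sentence)
        if m:
            noun1, noun2 = m.groups()
            return f"forall x ({noun1}(x) -> exists y ({noun2}(y) & owns(x, y)))"
        raise ValueError("sentence is outside the controlled fragment")

    print(translate("Every student owns a computer."))
    # forall x (student(x) -> exists y (computer(y) & owns(x, y)))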
Incremental Knowledge Acquisition for Machine Translation from Multiple Experts. With increasing globalisation and a growing volume of electronically available documents, the need for machine translation is growing dramatically. The state of the art in machine translation is still far from satisfactory: substantial post-editing is necessary for most non-technical texts, and even for many technical documents, to make the translation genuinely understandable. This project will develop a new approach for building machine translation systems by extending the unorthodox approach of Ripple-Down Rules, which has proved very successful for building expert systems in the medical domain. It is intended to build a machine translation system by integrating the knowledge of many experts.
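For readers unfamiliar with Ripple-Down Rules, the sketch below shows the core mechanism in miniature: each rule carries an exception chain, and a new rule is added in the context of the case that exposed an error, so earlier behaviour is preserved. The word-sense example (choosing a German translation for "bank") and the class RDRNode are hypothetical and only illustrate the knowledge structure, not the project's system.

    # Miniature Ripple-Down Rules structure (illustrative assumption, not the project's code).
    class RDRNode:
        def __init__(self, condition, conclusion, parent_conclusion=None):
            self.condition = condition       # predicate over a case (dict)
            self.conclusion = conclusion     # conclusion if this rule fires
            self.except_child = None         # tried when this rule fires but is wrong
            self.else_child = None           # tried when this rule does not fire
            self.default = parent_conclusion

        def classify(self, case):
            if self.condition(case):
                if self.except_child:
                    refined = self.except_child.classify(case)
                    if refined is not None:
                        return refined       # an exception overrides this rule
                return self.conclusion
            if self.else_child:
                return self.else_child.classify(case)
            return self.default

    # Default rule: translate "bank" as "Bank" (financial institution).
    root = RDRNode(lambda case: True, "Bank")
    # Expert sees an error on "the bank of the river" and adds an exception rule.
    root.except_child = RDRNode(lambda case: "river" in case["context"], "Ufer")

    print(root.classify({"context": ["river", "sat"]}))   # Ufer
    print(root.classify({"context": ["money", "loan"]}))  # Bank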
Parsing the web: Exploiting redundancy to understand language. This project will automatically learn the grammatical structure of language by exploiting the redundancy of facts, like 'Mozart was born in 1756', across a trillion words of web text. These facts will be used to understand more complex sentences. For the first time, this will enable grammatical information to be used for smart, large-scale access to information in text. The project will strengthen Australia's world-class expertise, providing opportunities for future researchers in this area. Our expanded C&C tools and trillion-word corpus will be used by academics, companies and governments, in Australia and internationally, aiding applications including financial surveillance and fraud detection.
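A toy sketch of how redundancy can be exploited: a fact that is already known appears in many surface forms on the web, and the recurring word patterns between its arguments provide evidence for how the relation is expressed. The three-sentence corpus and the simple pattern counting below are illustrative assumptions, not the project's method.

    # Collect the word patterns that connect a known fact's arguments (toy data).
    from collections import Counter

    known_fact = ("Mozart", "1756")
    corpus = [
        "Mozart was born in 1756 in Salzburg",
        "Born in 1756 , Mozart composed from an early age",
        "The composer Mozart was born in 1756",
    ]

    patterns = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if known_fact[0] in tokens and known_fact[1] in tokens:
            i, j = tokens.index(known_fact[0]), tokens.index(known_fact[1])
            lo, hi = sorted((i, j))
            patterns[" ".join(tokens[lo + 1:hi])] += 1   # words between the arguments

    print(patterns.most_common(1))  # [('was born in', 2)]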
Natural Language Generation for Aboriginal Languages. Australian Aboriginal languages have a number of interesting characteristics that make them a challenge for language technology applications; as yet there are no such applications, unlike for the indigenous Inuit peoples of Canada and the Maori of New Zealand. We will carry out a large-scale computational linguistic investigation of an Aboriginal language in order to create a data-to-text natural language generation system. The system will use data from Australian Rules Football to automatically construct articles based on that data. The study will have further national benefits through engagement of the owners of the language in the language survey, as well as through generated articles that encourage literacy and language maintenance.
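As a minimal illustration of data-to-text generation (in English, with invented match data, since the real target is an Aboriginal language with far richer morphology than any template can capture), a template-based realiser might look like this:

    # Tiny data-to-text sketch with made-up Australian Rules match data.
    match = {"home": "Essendon", "away": "Carlton", "home_goals": 14, "away_goals": 11}

    def realise(m):
        winner, loser = (("home", "away") if m["home_goals"] > m["away_goals"]
                         else ("away", "home"))
        margin = abs(m["home_goals"] - m["away_goals"])
        return f"{m[winner]} defeated {m[loser]} by {margin} goals."

    print(realise(match))  # Essendon defeated Carlton by 3 goals.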
A knowledge-based approach to multi-document text summarisation for automated meta-analysis of the scientific literature. The biomedical sciences produce literature at an exponential rate, and the size of this knowledge base far exceeds the capacity of humans to keep up with the growth in new knowledge. This project will develop computational text summarisation methods to abstract the content of scientific journal articles reporting clinical trials, and multi-document summarisation methods to synthesise these abstracts using automated statistical meta-analysis. These methods have broad potential to improve text-summarisation technologies in general, to profoundly enhance our ability to integrate published knowledge, and to make a highly significant and specific contribution to improving the quality of evidence used in health decision-making.
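The statistical step alone can be sketched as a fixed-effect, inverse-variance meta-analysis over effect sizes that the summarisation component would first have to extract from each trial report; the numbers below are invented and the pooling formula is the standard textbook one, not necessarily the method the project will adopt.

    # Fixed-effect inverse-variance pooling of extracted trial results (invented data).
    import math

    trials = [  # (effect size, standard error) per extracted trial
        (0.30, 0.10),
        (0.18, 0.15),
        (0.25, 0.08),
    ]

    weights = [1.0 / se**2 for _, se in trials]
    pooled = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")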
Efficient data manipulation in document classification. Document classification is enormously relevant in an era when large amounts of textual information are available. It is based on statistical and machine learning techniques that model documents as points in a multidimensional space. The Computer Engineering Laboratory (CEL) has ongoing projects using neural networks and other techniques for document classification, and we are building a development environment for large classification tasks. Prof. Lee's work will focus on managing large amounts of data for these tasks. Drawing on his experience in data compression, databases and web applications, he will produce a set of tools for handling gigabytes of textual data in our classification environment.
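A toy sketch of the "documents as points in a multidimensional space" view: bag-of-words vectors compared by cosine similarity against one labelled example per class. The data and the nearest-prototype classifier are invented for illustration; the laboratory's actual systems use neural networks and far larger feature spaces.

    # Bag-of-words vectors and cosine similarity for a two-class toy example.
    from collections import Counter
    import math

    def vectorise(text):
        return Counter(text.lower().split())   # term -> count

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    training = {
        "sport":   vectorise("the team won the match after a strong final quarter"),
        "finance": vectorise("the bank reported strong quarterly profit growth"),
    }

    doc = vectorise("the match was won in the final minutes")
    label = max(training, key=lambda c: cosine(doc, training[c]))
    print(label)  # sport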