Dynamic Load Balancing for Systems under Heavy Traffic Demand and High Task Size Variation. Current computer systems cannot cope with extremely heavy traffic demands. A solution to this difficult problem is to dynamically balance the load across the system's servers. Several solutions have been proposed and demonstrate advances under certain limited conditions (e.g. uniform distribution). However, fundamental research must be undertaken beyond the current way of dealing with the core issues of load balancing. Accounting for realistic conditions is both a theoretical and a practical challenge. This project aims to develop theoretical and computational models for dynamic task distribution in the studied systems. The benefits include substantial improvement in system response time.
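As a minimal sketch of the kind of policy this line of work builds on (not the project's own method), the following implements "least-loaded" dispatch with heavy-tailed task sizes to model high size variation. All names and parameters here are illustrative assumptions.

```python
import random

class Server:
    """Illustrative server tracking the total size of its queued tasks."""
    def __init__(self, name):
        self.name = name
        self.pending_work = 0.0

    def assign(self, task_size):
        self.pending_work += task_size

def dispatch(servers, task_size):
    """Send the task to the currently least-loaded server (ties -> first)."""
    target = min(servers, key=lambda s: s.pending_work)
    target.assign(task_size)
    return target

servers = [Server(f"s{i}") for i in range(4)]
random.seed(1)
# Pareto-distributed task sizes model heavy-tailed, high-variation workloads.
for _ in range(1000):
    dispatch(servers, random.paretovariate(1.5))
```

Even this simple dynamic policy spreads heavy-tailed work far more evenly than static (e.g. round-robin) assignment, which is the gap the project targets under realistic conditions.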
New approaches to interactive sessional search for complex tasks. This project aims to develop new tools and techniques to improve the accuracy and speed of search and data analytics for complex information tasks. There are currently no publicly available search engines that support users engaged in complex interactive search, or that allow searchers to fully control their own data and privacy. Fundamental research advances, based on understanding real user behaviour and search needs, will have an impact on important academic, industrial, and government domains, including virtual assistants, health care (clinical decision support), precision medicine, eDiscovery, crime prevention, and detailed socio-economic evaluations.
Efficient Algorithms for In-memory Sorting, Searching and Indexing on Modern Multi-core Cache-based and Graphics Processor Architectures. This project clearly belongs to one of the national research priority goals, Smart Information Use. The copy-based techniques and work on sorting and searching will considerably impact the development of in-memory algorithms on cutting-edge computer architectures. Efficient suffix trees and suffix sorting have myriad applications in string processing and will be of high interest to bioinformatics companies. The sortdex project will develop novel algorithms that will be used by enterprise search engine companies to develop applications for libraries and organisations dealing with large databases. Algorithms using the graphics processor as a co-processor have important applications in the high-growth field of computer graphics and games.
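To make the suffix-sorting idea concrete, here is a deliberately naive sketch of a suffix array (the sorted structure that suffix sorting produces) and pattern lookup over it. The project concerns far faster, cache- and GPU-aware constructions; this O(n² log n) version only illustrates the data structure.

```python
def suffix_array(s):
    """Naive suffix array: start positions of s's suffixes in sorted order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s, sa, pattern):
    """Binary-search the suffix array for any suffix starting with pattern."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:].startswith(pattern)
```

Because all suffixes are sorted, any substring query reduces to binary search, which is what makes suffix sorting so useful for string processing and bioinformatics.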
Development and Application of Techniques for Detecting Equivalent Documents. The web is a vast collection of data, such as text and images, but contains large numbers of duplicates - the same document or picture may be present many times. Even personal collections of information, such as the documents and digital photos people keep on their home computers, often have many versions of the same item. However, detecting such duplicates is not straightforward, as they may have been edited or may be shown in different forms; for example, the quality of a photo may be reduced for display on a mobile phone. In this project we plan to detect such duplicates, and use the results to improve search and management of data.
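A standard baseline for catching edited near-duplicates (a sketch of the general idea, not this project's specific technique) is word shingling with Jaccard similarity; the threshold below is an illustrative assumption.

```python
def shingles(text, k=3):
    """Set of k-word shingles (overlapping word windows) from a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(doc1, doc2, threshold=0.8):
    """Flag two documents as near-duplicates above a similarity threshold."""
    return jaccard(shingles(doc1), shingles(doc2)) >= threshold

original = "the quick brown fox jumps over the lazy dog near the river"
edited   = "the quick brown fox jumps over the lazy dog near the creek"
```

A one-word edit leaves most shingles intact, so the pair still scores above the threshold, whereas an unrelated document shares almost none.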
Dynamic Index Maintenance for Text Search Engines. Text retrieval systems such as internet search engines use high-performance indexes to rapidly locate documents that match user queries. In recent years there have been major improvements in query evaluation and index construction techniques. As the data changes, it is necessary to keep the index up to date, but current methods for maintaining indexes are slow and costly. The aim of this project is to develop methods that provide on-the-fly update at much lower cost, thereby improving the performance of text retrieval systems. This work involves both practical development and innovation in fundamental algorithms.
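One classic strategy in this space (sketched here under illustrative names, not necessarily the project's approach) is to buffer new documents in a small in-memory "delta" index and merge it into the main index in batches, so individual updates avoid rewriting the large postings lists; queries consult both structures.

```python
from collections import defaultdict

class BufferedIndex:
    """Inverted index with a delta buffer merged in batches (toy, in-memory)."""
    def __init__(self, merge_threshold=2):
        self.main = defaultdict(list)   # term -> sorted doc ids (the big index)
        self.delta = defaultdict(list)  # recent, not-yet-merged postings
        self.merge_threshold = merge_threshold
        self.buffered_docs = 0

    def add(self, doc_id, text):
        for term in set(text.lower().split()):
            self.delta[term].append(doc_id)
        self.buffered_docs += 1
        if self.buffered_docs >= self.merge_threshold:
            self._merge()

    def _merge(self):
        """Batch-merge buffered postings into the main index."""
        for term, postings in self.delta.items():
            self.main[term] = sorted(self.main[term] + postings)
        self.delta.clear()
        self.buffered_docs = 0

    def search(self, term):
        term = term.lower()
        return sorted(self.main[term] + self.delta[term])
```

In a real engine the main index lives on disk, so amortising the expensive merge over many additions is exactly the update-cost trade-off the project aims to improve.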