Dynamic Load Balancing for Systems under Heavy Traffic Demand and High Task Size Variation. Current computer systems cannot cope with extremely heavy traffic demands. A solution to this difficult problem is to dynamically balance the load across the system's servers. Several solutions have been proposed and demonstrate advances under certain limited conditions (e.g. uniformly distributed task sizes). However, fundamental research must move beyond the current way of dealing with the core issues of load balancing: accounting for realistic conditions is both a theoretical and a practical challenge. This project aims to develop theoretical and computational models for dynamic task distribution in the studied systems. The benefits include a substantial improvement in system response time.
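As a concrete illustration of the kind of baseline such a project would improve on, the sketch below dispatches each incoming task to the currently least-loaded server (join-shortest-queue). This is a minimal, hypothetical example, not the project's proposed model; note that the heavy-tailed (Pareto) task sizes used here stress a balancer far more than uniform ones do.

```python
import random

def dispatch_least_loaded(task_size, server_loads):
    """Assign a task to the currently least-loaded server.

    A minimal join-shortest-queue sketch; real systems must also cope
    with stale load information and heavy-tailed task-size distributions.
    """
    target = min(range(len(server_loads)), key=lambda i: server_loads[i])
    server_loads[target] += task_size
    return target

def simulate(task_sizes, n_servers):
    """Dispatch a whole workload and return the final per-server loads."""
    loads = [0.0] * n_servers
    for size in task_sizes:
        dispatch_least_loaded(size, loads)
    return loads

# Heavy-tailed task sizes (Pareto, shape 1.5) model high size variation.
random.seed(1)
tasks = [random.paretovariate(1.5) for _ in range(1000)]
loads = simulate(tasks, 4)
```

Even this greedy policy keeps the spread between the busiest and idlest server bounded by the largest single task, which hints at why task-size variation, not just total demand, drives the difficulty.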
Discovery Early Career Researcher Award - Grant ID: DE140100275
Funder
Australian Research Council
Funding Amount
$392,979.00
Summary
Beyond keyword search for ranked document retrieval. This project will develop novel approaches to efficient and effective ranked text retrieval using a new class of rank-aware algorithms derived from self-indexes. These algorithms can support complex statistical calculations on the fly. Efficient algorithm design for big data is an increasingly important problem as energy costs continue to soar and can now exceed hardware costs for big data consumers such as Google. In this project, two important problems in web search are explored: real-time indexing and long-form query answering. Using self-index algorithms, this project presents a road map to move beyond simple keyword-based ranked document retrieval, thus allowing us to efficiently meet more demanding information needs of users in the next decade.
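For context, the classical keyword-based baseline this project aims to move beyond can be sketched as an inverted index with tf-idf ranking. This is a hypothetical minimal example, not the project's rank-aware self-index machinery, which is far more compact and supports richer statistics.

```python
import math
from collections import Counter, defaultdict

def build_index(docs):
    """Classical inverted index: term -> {doc_id: term frequency}."""
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        for term, tf in Counter(text.lower().split()).items():
            index[term][doc_id] = tf
    return index

def search(index, query, n_docs, k=3):
    """Rank documents by a simple tf-idf score and return the top k ids."""
    scores = defaultdict(float)
    for term in query.lower().split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return sorted(scores, key=scores.get, reverse=True)[:k]

docs = [
    "compressed self indexes support fast pattern search",
    "ranked retrieval scores documents against a query",
    "energy costs of big data now rival hardware costs",
]
top = search(build_index(docs), "ranked retrieval query", len(docs))
```

A self-index, by contrast, stores the text and the index in one compressed structure, which is what makes real-time indexing and on-the-fly statistics plausible.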
Using Past Queries for Fast and Accurate Web Searching. Searching the entire Internet, or a company web site, has become a vital task for modern organisations. While there has been significant research into improving search engines through using web pages themselves, very little attention has been paid to improving web search by exploiting the vast numbers of queries that users submit to search engines each day. This project will use state-of-the-art compression and algorithmic techniques to improve the speed and accuracy of web search using data gleaned from millions of Internet queries (provided under agreement by Microsoft). Improving search engines will have a direct benefit to many Australian industries, and support the government's priority area of "smart information use".
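One simple way a query log can improve search, sketched here under the assumption of a plain list of past queries (the helper names are illustrative, not from the project), is frequency-ranked query completion:

```python
from collections import Counter

def suggest(query_log, prefix, k=2):
    """Suggest completions: the most frequent past queries with this prefix.

    A toy illustration of mining a query log; production systems use
    compressed trie-like structures over millions of queries.
    """
    counts = Counter(q for q in query_log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

log = ["weather sydney", "weather melbourne", "weather sydney",
       "web search algorithms", "weather sydney"]
suggestions = suggest(log, "weather")
```

The interesting engineering problems begin exactly where this sketch stops: compressing the log so millions of queries fit in memory, and answering prefix lookups without scanning it.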
Identifying and Tracking Influential Events in Large Social Networks. This project aims to invent a novel model and techniques for identifying and tracking influential events in large and dynamic social networks in real time. The proposed model would take into account the structure and content of social networks, and the influence of events. The project also plans to develop efficient strategies for identifying and tracking events in large and dynamic social network environments based on the model. In particular, the project plans to investigate flexible social network query methods to make users' event search easy. Finally, the project plans to build an evaluation system to demonstrate the efficiency of the algorithms and effectiveness of the model.
Efficient and effective algorithms for searching strings in secondary storage. Pattern searching is fundamental to a wide range of computing applications, including web search and bioinformatics. In this project we will develop compression algorithms and hybrid memory-disk search structures that allow fast pattern matching on sequences of textual and numeric data, including when approximate search is required.
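A minimal sketch of the kind of pattern-search structure involved: a naive suffix array with binary search over the sorted suffixes. This example only illustrates the core idea; practical indexes use linear-time construction, compression, and the hybrid memory-disk layouts the project targets.

```python
import bisect

def suffix_array(text):
    """Naive suffix array: starting positions of all suffixes, sorted.

    O(n^2 log n) construction is fine for illustration; practical
    systems use linear-time algorithms and compressed representations.
    """
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """Binary search the suffix array for every occurrence of pattern."""
    suffixes = [text[i:] for i in sa]  # materialised only for clarity
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\xff")  # assumes ASCII text
    return sorted(sa[lo:hi])

text = "abracadabra"
sa = suffix_array(text)
hits = find_occurrences(text, sa, "abra")
```

Because all occurrences of a pattern are contiguous in the sorted suffix order, two binary searches locate them all, which is the property disk-resident and compressed variants preserve.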
Development and Application of Techniques for Detecting Equivalent Documents. The web is a vast collection of data, such as text and images, but contains large numbers of duplicates - the same document or picture may be present many times. Even personal collections of information, such as the documents and digital photos people keep on their home computers, often have many versions of the same item. However, detecting such duplicates is not straightforward, as they may have been edited or be shown in different forms; for example, the quality of a photo may be reduced for display on a mobile phone. In this project we plan to detect such duplicates, and use the results to improve search and management of data.
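A standard technique for detecting near-duplicate text (a sketch of the general approach, not necessarily this project's method) is to compare documents by the overlap of their word k-gram "shingle" sets, so that lightly edited copies still score highly:

```python
def shingles(text, k=3):
    """Set of k-word shingles; near-duplicates share most of their shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Resemblance of two documents as shingle-set overlap (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

original  = "the quick brown fox jumps over the lazy dog"
edited    = "the quick brown fox leaps over the lazy dog"
unrelated = "compression algorithms reduce index size"
```

At web scale, the shingle sets themselves are too large to compare pairwise, so systems estimate this resemblance with compact sketches such as MinHash signatures.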
Dynamic Index Maintenance for Text Search Engines. Text retrieval systems such as internet search engines use high-performance indexes to rapidly locate documents that match user queries. In recent years there have been major improvements in query evaluation and index construction techniques. As the data changes, it is necessary to keep the index up to date, but current methods for maintaining indexes are slow and costly. The aim of this project is to develop methods that provide on-the-fly update at much lower cost, thereby improving the performance of text retrieval systems. This work involves both practical development and innovation in fundamental algorithms.
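The merge-based update strategy common in this area can be sketched as follows: recent documents go into a small in-memory index that queries also consult, and it is periodically merged into the main index. This is a simplified, hypothetical illustration; real engines use on-disk segments and geometric merge schedules.

```python
from collections import defaultdict

class DynamicIndex:
    """Inverted index with on-the-fly update.

    New documents land in a small in-memory buffer that is merged into
    the main index once it fills, amortising the cost of rebuilding.
    """

    def __init__(self, buffer_limit=2):
        self.main = defaultdict(set)    # term -> doc ids (merged index)
        self.buffer = defaultdict(set)  # term -> doc ids (recent updates)
        self.buffer_docs = 0
        self.buffer_limit = buffer_limit

    def add(self, doc_id, text):
        for term in set(text.lower().split()):
            self.buffer[term].add(doc_id)
        self.buffer_docs += 1
        if self.buffer_docs >= self.buffer_limit:
            self._merge()

    def _merge(self):
        """Fold the buffer into the main index and reset it."""
        for term, ids in self.buffer.items():
            self.main[term] |= ids
        self.buffer.clear()
        self.buffer_docs = 0

    def lookup(self, term):
        # Queries consult both structures, so updates are visible at once.
        return self.main[term] | self.buffer[term]

idx = DynamicIndex()
idx.add(0, "fast index update")
idx.add(1, "index construction techniques")
idx.add(2, "query evaluation speed")
```

The trade-off being tuned is exactly the one the abstract names: a larger buffer makes updates cheaper but merges lumpier, while merging on every document reduces to the slow, costly in-place update the project wants to avoid.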