Approximate structures for efficient processing of data streams. This project aims to increase the volume of streamed data that can be handled on a low-powered device with limited memory. In finance, health, and transport, data arrives at enormous rates, and data-driven decisions must be made quickly. Likewise, to keep Australia secure, national agencies monitor and gather vast data sets. Increasingly, devices and monitors that have limited resources are making these decisions and they require computational techniques that run extremely efficiently. The project expects to develop and improve approximate data structures that operate in tight resource bounds. Anticipated outcomes are improved event recognition and dramatic speedup in analysis of streams in areas such as finance, health, transport, and urban data.
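The abstract does not name a specific structure, but a classic example of an approximate data structure with tight, fixed memory bounds is the Count-Min sketch, which estimates item frequencies in a stream using a small 2-D counter table. The sketch below is an illustrative minimal implementation, not the project's actual method:

```python
import hashlib


class CountMinSketch:
    """Approximate stream frequency counts in fixed O(width * depth) memory."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        # One independent-ish hash per row, derived by salting blake2b.
        for row in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=row.to_bytes(8, "little"))
            yield row, int.from_bytes(h.digest()[:8], "little") % self.width

    def add(self, item):
        for row, col in self._cells(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # Never underestimates; overestimates only on hash collisions.
        return min(self.table[row][col] for row, col in self._cells(item))
```

Memory use is independent of the number of distinct items, which is what makes such structures attractive on low-powered devices.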
Data retrieval from massive information structures. Information search is an essential tool. But most current services regard the data as unstructured collections of independent documents, free of context. Next-generation search applications, such as over social networks, or corporate websites, or XML data sets, must account for the inherent relationships between data items, and must allow the efficient inclusion of search context. Queries should favour semantically local data, giving results that depend on the perceived state of the querier. This project will develop indexing and search techniques for massive structured data sets. The new search methods will incorporate theoretical advances and will be experimentally validated using industry-standard open-source distributed systems.
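One simple way to see what "favour semantically local data" could mean: discount a document's text-match score by its graph distance from the querier's position in the network. The sketch below is a toy illustration under that assumption (function names and the scoring formula are invented for this example, not taken from the project):

```python
from collections import deque


def bfs_distances(graph, source):
    """Hop distance from source to every reachable node in an undirected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


def context_aware_search(graph, texts, query_terms, context_node):
    """Rank documents by term overlap, discounted by distance from the querier."""
    dist = bfs_distances(graph, context_node)
    scores = {}
    for node, text in texts.items():
        overlap = len(query_terms & set(text.lower().split()))
        if overlap and node in dist:
            scores[node] = overlap / (1 + dist[node])
    return sorted(scores, key=scores.get, reverse=True)
```

With two equally matching documents, the one fewer hops from the querier ranks first, so results depend on the querier's context as the abstract describes.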
Next-generation techniques for analysing massive data sets. To process enormous amounts of data, leading computing companies are turning to modern computing frameworks, for which little theory of efficient computational techniques has been developed. This project will resolve key theoretical questions and provide fast techniques for poorly understood pattern recognition and bioinformatics problems.
On effectively modelling and efficiently discovering communities from large networks. Finding and maintaining close communities in very large-scale, dynamically changing networks is interesting and challenging. This project aims to develop new techniques to identify such communities as quickly as possible by exploiting the rich semantics and individual relationships within the communities.
Scalable biocomputing on networks: design and mathematical foundations. This project aims to develop technology with the potential to disrupt computation by providing a way to solve combinatorial mathematical problems in an efficient manner. Electronic computers have revolutionised our lives over the last half-century, but there are tasks they cannot do, usually those requiring multi-tasking, much as our brains do. This project aims to overcome some of these problems by physically using molecular parts of living things, moving within mathematically designed networks, to solve, in parallel, "combinatorial" mathematical problems that vex traditional computers, while using far less energy than electronic devices. This project expects to develop this nascent field into a practically useful, disruptive technology based in Australia.
Homomorphic cryptography: computing on encrypted data. This project is driven by the groundbreaking applications of a new cryptographic technology that allows analysis of encrypted (scrambled) data without needing to decrypt (unscramble) it first. The results of this project can be used to enable secure remote data storage, electronic auctions and voting, and protecting medical records.
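The project does not specify a scheme, but the core idea of computing on encrypted data can be seen in miniature with textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. The parameters below are tiny and the scheme is unpadded, so this is an illustration of the property only, not a secure construction:

```python
# Toy textbook-RSA demo of the multiplicative homomorphic property.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)


def encrypt(m):
    return pow(m, e, n)


def decrypt(c):
    return pow(c, d, n)


a, b = 7, 6
product_cipher = (encrypt(a) * encrypt(b)) % n   # multiply ciphertexts only
assert decrypt(product_cipher) == (a * b) % n    # the server never saw 7 or 6
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what enables arbitrary analysis of encrypted records as described in the abstract.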
Visual analytics for massive multivariate networks. This project aims to create methods to visually analyse massive multivariate networks. The amount of network data available has exploded in recent years: software systems, social networks and biological systems have millions of nodes and billions of edges with multivariate attributes. Their size and complexity make these data sets hard to exploit. More efficient ways to understand the data are needed. This project will design, implement and evaluate visualisation methods for massive multivariate network data sets. This research is expected to be used by Australian software development, biotechnology and security companies to exploit their data.
Algorithmic engineering and complexity analysis of protocols for consensus. Opinions, rankings, observations, votes, gene sequences, sensor networks in security systems, climate models: massive data sets and the ability to share information at unprecedented speeds make finding the most central representative (the Consensus Problem) extremely complex. This research delivers new insights and new, efficient algorithms.
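A concrete instance of "finding the most central representative" is Kemeny consensus over rankings: pick the ranking minimising total Kendall-tau distance to all input rankings. The brute-force sketch below (not the project's algorithm) shows the objective; the problem is NP-hard in general, which is exactly why efficient algorithms are the research target:

```python
from itertools import permutations


def kendall_tau(r1, r2):
    """Count pairwise order disagreements between two rankings of the same items."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    items = list(r1)
    return sum(
        1
        for i in range(len(items))
        for j in range(i + 1, len(items))
        if (pos1[items[i]] - pos1[items[j]]) * (pos2[items[i]] - pos2[items[j]]) < 0
    )


def kemeny_consensus(rankings):
    """Brute-force the ranking minimising total Kendall-tau distance to the inputs."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_tau(cand, r) for r in rankings),
    )
```

Brute force enumerates all k! candidate rankings, so it is only feasible for a handful of items; practical consensus algorithms must approximate or exploit structure.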
Efficient structure search over large graphs. The project aims to develop advanced search technology to support large-scale graph applications. The success of the project will not only deliver a breakthrough in technology development but also provide training for high-quality personnel in this important and growing area, bringing considerable economic and social benefits to Australia.
Visual analytics for high-volume, multi-attribute financial data streams. While our ability to accumulate data (such as financial data) is increasing, our capability to analyse it remains inadequate despite technological improvements. The new Visual Analytics methods will allow processing of massive, time-varying data so that time-critical decisions can be made with minimum effort.