Leveraging 3D computer vision for camera-based precise geo-localisation. This project aims to develop advanced 3D computer vision and image processing technology that can turn regular cameras into high-precision location-sensing devices. Spatial location is a fundamental type of information about our physical world, and determining the precise location of people, vehicles, and mobile devices is essential for many critical applications. Outcomes of the project will enable a wide range of novel applications of significant social, environmental, and economic value, such as location-aware services, environmental monitoring, augmented reality, autonomous vehicles, and rapid emergency response. The project will enhance Australia's international competitive advantage at the forefront of ICT research and technology innovation.
Deep visual understanding: learning to see in an unruly world. Deep learning has recently achieved incredible success at an astonishing variety of computer vision tasks. This project aims to carry this success into the challenging domain of high-level image-based reasoning. It will extend deep learning to achieve flexible semantic reasoning about the content of images, based on information gleaned from the huge volumes of data available on the Internet. The project expects to overcome one of the primary limitations of deep learning and to greatly increase its practical application across a range of industrial, cultural, and health settings.
Discovery Early Career Researcher Award - Grant ID: DE190100539
Funder
Australian Research Council
Funding Amount
$408,000.00
Summary
Towards conversational vision-based Artificial Intelligence. This project aims to develop a novel learning framework, Vision-Ask-Answer-Act (V3A). The framework will allow a machine to perform a sequence of actions via a conversation with human users, based on intricate processing not just of visual input but of human-computer verbal exchanges. Artificial intelligence has great potential as a tool for economic productivity and daily tasks, but applications in cars and assistant robots, still in their early days, typically require significant expertise to use effectively. The outcomes of this project will push the boundary of vision-language research to produce a conversational intelligent agent that can be easily used in common situations across industry, transport, the medical sector, and at home.
Developing key vision technology for automation of aquaculture factories. This project aims to investigate structural, colour-textural, and hyperspectral analysis approaches to achieve automated lobster molt-cycle staging and classification to the level required for commercial production. High labour costs, water contamination, and disease transmission are major barriers inhibiting large-scale production in Australian bay lobster aquaculture. Automating the production process and reducing human contact with the animals are high priorities in the development of this Australian-led emerging industry. The project aims to develop technology to bring this world-first aquaculture factory to large-scale production and to create new export opportunities for lobsters and production systems.
Assistive micro-navigation for vision-impaired people. This project aims to develop novel algorithms that transform a simple camera into a smart sensor, enabling a vision-impaired person to navigate freely and without additional aids in a crowded area. Such a smart sensor will be endowed with the capability to detect and locate obstacles, identify the walking path, recognise objects and traffic signs, and convey step-by-step instructions to the user. The project outcomes are expected to improve the well-being of vision-impaired people, widen their access to public areas, and reduce physical access disparities for this disadvantaged and vulnerable group. Furthermore, technologies developed in this project can potentially be adapted for related navigation applications such as road safety, self-driving vehicles, and autonomous robots.
Making Meta-learning Generalised. This project aims to develop novel machine learning techniques, termed generalised meta-learning, that help machines better utilise past experience to solve new tasks from little data. It expects to reduce the undesirable dependence of current machine learning on labelled data and significantly expand its application scope. Expected outcomes consist of new theoretical results on meta-learning and a set of innovative algorithms that can support the building of the next generation of computer vision systems operating in open and dynamic environments. The application of these advanced intelligent systems should deliver solid benefits to the science, society, and economy of Australia.
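To make "utilising past experience to solve new tasks from little data" concrete, the sketch below shows one well-known meta-learning scheme, a first-order MAML-style update on a toy linear regression model. The task distribution, learning rates, and model are illustrative assumptions, not the project's actual algorithms.

```python
# Minimal first-order MAML sketch: learn an initialisation that adapts
# quickly to new tasks y = a * x with unknown slope a. NumPy only.
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Gradient of the mean squared error for the model y_hat = w * x."""
    return np.mean(2 * (w * x - y) * x)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.01):
    """One meta-update: adapt to each task, then update the initialisation."""
    meta_grad = 0.0
    for x, y in tasks:
        w_adapted = w - inner_lr * loss_grad(w, x, y)  # inner adaptation
        meta_grad += loss_grad(w_adapted, x, y)        # first-order approx.
    return w - outer_lr * meta_grad / len(tasks)

def sample_task():
    """A regression task: 20 points from y = a * x, slope a in [1, 3]."""
    a = rng.uniform(1.0, 3.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

w = 0.0
for _ in range(2000):
    w = maml_step(w, [sample_task() for _ in range(4)])
# The learned initialisation settles near the centre of the slope range
# (about 2.0), so a few inner gradient steps adapt it to any new task.
```

The outer loop never fits any single task; it only moves the shared starting point so that one inner gradient step works well across the whole task distribution, which is the core idea behind learning from few examples.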
Robust and Explainable 3D Computer Vision. Computer vision increasingly relies on deep learning, which is fragile, opaque, and fails catastrophically without warning. This project aims to address these problems by developing new theory in graph representation of 3D geometric and image data, hierarchical graph simplification, and novel modules designed specifically for deep learning over geometric graphs. Using these modules, it aims to design graph convolutional network architectures for self-supervised learning that are robust to failures and provide explainable decisions for object detection and scene segmentation. The outcomes are expected to advance the theory of robust deep learning and benefit the 3D mapping, surveying, infrastructure monitoring, transport, and robotics industries.
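For readers unfamiliar with "deep learning over geometric graphs", the sketch below shows the standard graph convolution operation that such architectures build on, H' = relu(D^-1/2 (A + I) D^-1/2 H W) in the common symmetric-normalisation form. The graph, features, and weights are toy illustrations, not the project's modules.

```python
# One graph convolution layer over a small toy graph, using NumPy.
import numpy as np

def gcn_layer(A, H, W):
    """Propagate node features H over adjacency A, then a ReLU."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# A 4-node path graph, 2-D node features, and a 2x3 weight matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.arange(8, dtype=float).reshape(4, 2)
W = np.full((2, 3), 0.1)
out = gcn_layer(A, H, W)   # shape (4, 3): new 3-D feature per node
```

Stacking such layers lets each node aggregate information from progressively larger neighbourhoods, which is what makes the representation natural for 3D point clouds and meshes once they are expressed as graphs.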
Declarative Networks: Towards Robust and Explainable Deep Learning. The aim of this project is to develop declarative machine learning techniques that exploit inherent structure and models of the world. Deep learning has become the dominant approach to machine learning, with many products and promises built on this technology, but it is expensive, opaque, brittle, and relies solely on human-labelled data. This project intends to make deep learning more reliable by establishing theory and algorithms that allow physical and mathematical models to be embedded within a deep learning framework, providing performance guarantees and interpretability. This would likely benefit machine-learning-based products that understand the world and interact with humans naturally through vision and language.
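A common way to embed a mathematical model inside a network is a layer defined by the solution of an optimisation problem, differentiated via the implicit function theorem rather than by unrolling the solver. The quadratic inner problem below is an illustrative assumption chosen so the answer can be checked in closed form; it is not the project's actual formulation.

```python
# A declarative layer sketch: forward pass solves y*(x) = argmin_y f(x, y)
# with f(x, y) = (y - x)^2 + LAM * y^2; backward pass uses implicit
# differentiation, dy*/dx = -f_xy / f_yy, instead of backprop through the
# solver iterations.
LAM = 0.5  # regularisation weight of the inner problem (assumed)

def solve_layer(x, steps=200, lr=0.1):
    """Forward pass: minimise f(x, y) in y by plain gradient descent."""
    y = 0.0
    for _ in range(steps):
        y -= lr * (2 * (y - x) + 2 * LAM * y)  # df/dy
    return y

def implicit_grad(x, y):
    """Backward pass at the optimum; constant here because f is quadratic."""
    f_yy = 2 + 2 * LAM   # second derivative of f in y
    f_xy = -2            # mixed derivative of f in x and y
    return -f_xy / f_yy

x = 3.0
y = solve_layer(x)
print(round(y, 3), round(implicit_grad(x, y), 3))  # prints "2.0 0.667"
```

The closed form y*(x) = x / (1 + LAM) confirms both numbers, and the key point stands regardless of solver: the gradient depends only on the optimality conditions, which is what yields the guarantees and interpretability the summary describes.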
Automatic video annotation by learning from web data. This project aims to study next-generation video annotation technologies that automatically tag raw videos with a huge set of semantic concepts. The project will study new domain adaptation schemes and frameworks to substantially improve video annotation performance. The resulting prototype system can be used directly by ordinary users worldwide to search their personal videos with textual queries. The system is also applicable to video surveillance, which can enhance Australia's homeland security.
Automatic Training Data Search and Model Evaluation by Measuring Domain Gap. This project aims to investigate computer vision training and test data, using automatically generated data sets for facial expression recognition and object re-identification. It expects to quantify and understand the domain gap: the distribution difference between training and test data sets. Expected outcomes are insights into measuring the domain gap, the ability to estimate model performance without accessing expensive test labels, and improvements to system generalisation. This should provide significant benefits for computer vision applications that currently require expensive labelling, as well as commercial and economic benefits across sectors such as transportation, security, and manufacturing.
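The summary does not specify how the domain gap would be measured; one standard stand-in is the maximum mean discrepancy (MMD) between feature distributions, sketched below on synthetic 2-D "features" purely for illustration.

```python
# Squared MMD with an RBF kernel: a larger value indicates a wider gap
# between the training and test feature distributions.
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between sample sets X and Y."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 2))      # "training domain"
test_near = rng.normal(0.1, 1.0, size=(200, 2))  # slight distribution shift
test_far = rng.normal(2.0, 1.0, size=(200, 2))   # large distribution shift

# The shifted test set yields the larger MMD, i.e. the wider domain gap.
print(rbf_mmd2(train, test_near) < rbf_mmd2(train, test_far))  # True
```

A statistic of this kind needs only unlabelled samples from both domains, which is why gap measures can estimate model degradation without accessing expensive test labels, as the summary envisages.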