Deep visual understanding: learning to see in an unruly world. Deep Learning has achieved incredible success at an astonishing variety of Computer Vision tasks recently. This project aims to carry this success into the challenging domain of high-level image-based reasoning. It will extend deep learning to achieve flexible semantic reasoning about the content of images, based on information gleaned from the huge volumes of data available on the Internet. The project expects to overcome one of the primary limitations of deep learning and to greatly increase its practical application across a range of industrial, cultural, and health settings.
Added depth: automated high-level image interpretation. Humans are very good at understanding the world through imagery, but computers lack this fundamental capacity because they lack experience of what they might see. This project will provide this experience by combining the large volumes of imagery on the Internet with three-dimensional information generated by humans for other purposes.
Discovery Early Career Researcher Award - Grant ID: DE190100539
Funder
Australian Research Council
Funding Amount
$408,000.00
Summary
Towards conversational vision-based Artificial Intelligence. This project aims to develop a novel learning framework, Vision-Ask-Answer-Act (V3A). This framework will allow a machine to perform a sequence of actions via a conversation with human users, based on intricate processing of not just visual input but also human-computer verbal exchanges. Artificial intelligence has great potential as a tool for economic productivity and daily tasks. Applications in cars and assistant robots, still in their early days, typically require significant expertise to use effectively. The outcomes of this project will push the boundary of vision-language research to produce a conversational intelligent agent that can be easily used in common situations across industry, transport, the medical sector, and at home.
Making Meta-learning Generalised. This project aims to develop novel machine learning techniques, termed generalised meta-learning, that let machines better utilise past experience to solve new tasks from few data. It expects to reduce the undesirable dependence of current machine learning on labelled data and significantly expand its application scope. Expected outcomes of the project include new theoretical results on meta-learning and a set of innovative algorithms that can support the building of the next generation of computer vision systems able to work in open and dynamic environments. This should deliver solid benefits to the science, society, and economy of Australia through the application of these advanced intelligent systems.
Robust and Explainable 3D Computer Vision. Computer vision increasingly relies on deep learning, which is fragile, opaque, and fails catastrophically without warning. This project aims to address these problems by developing new theory in graph representation of 3D geometric and image data, hierarchical graph simplification, and novel modules designed specifically for deep learning over geometric graphs. Using these modules, it aims to design graph convolutional network architectures for self-supervised learning that are robust to failures and provide explainable decisions for object detection and scene segmentation. The outcomes are expected to advance the theory of robust deep learning and benefit the 3D mapping, surveying, infrastructure monitoring, transport, and robotics industries.
Visual tracking with environmental constraints. By incorporating high level scene understanding into visual tracking, this project will improve the capacity to monitor and analyse complex patterns of activity in video. This has many applications in public safety and security, but the project will demonstrate it on the challenging task of tracking players during an Australian Football League (AFL) game to gather statistics on their performance.
Leveraging 3D computer vision for camera-based precise geo-localisation. This project aims to develop advanced 3D computer vision and image processing technology that can turn regular cameras into high-precision location-sensing devices. Spatial location is a fundamental type of information about our physical world, and determining the precise location of people, vehicles, and mobile devices is essential for many critical applications. Outcomes of the project will enable a wide range of novel applications of significant social, environmental, and economic value, such as location-aware services, environmental monitoring, augmented reality, autonomous vehicles, and rapid emergency response. The project will enhance Australia's international competitive advantage at the forefront of ICT research and technology innovation.
Automatic video annotation by learning from web data. This project aims to study next-generation video annotation technologies that automatically tag raw videos with a huge set of semantic concepts. The project will study new domain adaptation schemes and frameworks in order to substantially improve video annotation performance. The resulting prototype system can be used directly by ordinary users worldwide to search their personal videos using textual queries. The system is also applicable to video surveillance applications, which can enhance Australia's homeland security.
Automatic Training Data Search and Model Evaluation by Measuring Domain Gap. We aim to investigate computer vision training data and test data, using automatically generated data sets for facial expression recognition and object re-identification. This project expects to quantify and understand the domain gap, the distribution difference between training and test data sets. Expected outcomes of this project are insights into measuring the domain gap, the ability to estimate model performance without accessing expensive test labels, and improvements to system generalisation. This should provide significant benefits for computer vision applications that currently require expensive labelling, and commercial and economic benefits across sectors such as transportation, security, and manufacturing.
Australian Laureate Fellowships - Grant ID: FL170100117
Funder
Australian Research Council
Funding Amount
$3,208,192.00
Summary
On snapping up semantics of dynamic pixels from moving cameras. The project aims to develop a suite of original models and algorithms for processing and understanding videos captured by moving cameras, and to establish the mathematical foundations of deep learning-based computer vision to provide theoretical underpinnings. The project expects to generate new knowledge that will transform moving-camera computer vision, with step changes in visual quality enhancement, compression, and acceleration technologies, and solutions for fundamental computer vision tasks. A new concept of feature complexity, for measuring the discriminative power and learnability of features from deep models, will also be defined. The outcomes of the project will be critical for enabling autonomous machines to perceive and interact with the environment.