Fault detection and identification in nonlinear complex systems. Complex systems usually comprise a large number of inter-dependent subsystems linked together to perform a certain task. Examples of such systems include power systems, irrigation systems, and air traffic control systems, to name a few. Such systems are subject to component failure or malfunction. Total failure can cause unacceptable financial losses and/or danger to personnel. It is therefore essential, from both economic and safety viewpoints, that a way be found to ensure reliable and viable operation of complex plants. A first step in achieving this goal is to detect faults on-line and in real time when they occur, and to identify their location and characteristics, which is the aim of this project.
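The on-line detection step described above can be illustrated with a minimal residual-based sketch: compare each measurement against a nominal model prediction and flag a fault when the residual exceeds a threshold. All names, the model, and the threshold here are illustrative assumptions, not the project's actual method.

```python
# Toy residual-based fault detector: compare each measurement with a model
# prediction and flag a fault when the residual exceeds a threshold.
def detect_faults(measurements, model, threshold=1.0):
    faults = []
    for t, y in enumerate(measurements):
        residual = abs(y - model(t))  # discrepancy between plant and model
        if residual > threshold:
            faults.append(t)  # time index where the fault appears
    return faults

# Nominal behaviour: y(t) = 0.5*t; a bias fault is injected from t = 3 onward.
model = lambda t: 0.5 * t
readings = [0.1, 0.6, 1.1, 4.0, 4.6, 5.2]
print(detect_faults(readings, model))  # → [3, 4, 5]
```

Real schemes for nonlinear complex systems use model-based observers rather than a fixed threshold, but the residual-comparison principle is the same.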
Linkage Infrastructure, Equipment And Facilities - Grant ID: LE200100175
Funder
Australian Research Council
Funding Amount
$475,000.00
Summary
A high-payload, high-fidelity haptically-enabled motion simulation facility. An Australian-first motion simulation facility consisting of a high-payload, high-fidelity Stewart platform mounted on a dual-axis linear track is proposed. The facility will allow high-acceleration and high-vibration manoeuvres, and large displacements through an eight-degrees-of-freedom range of motion. It can carry the entire control compartment of a heavy vehicle, a truck, an ambulance, a train, or a multi-operator cockpit of a mining vehicle for simulation. The outcome will provide significant benefits for virtual vehicle prototyping and testing, driver training and behaviour modelling, and motion perception and motion sickness research, thereby advancing Australia as a global leader in motion simulation and vehicular technologies.
Linkage Infrastructure, Equipment And Facilities - Grant ID: LE150100079
Funder
Australian Research Council
Funding Amount
$320,000.00
Summary
A haptic-based immersive motion platform for human performance evaluation. This project aims to establish a motion platform capable of combining continuous centrifugal rotation and large linear displacement with an additional five degrees of motion. The system will house a human subject at the end of a large serial robot similar to a human arm, which can rotate continuously about its base. The robot arm will be installed on a large linear axis, enabling the simulation of movements and accelerations along a straight path as well as rotation provided by the other axes of the robot. The motion platform will comprise audio and visual devices, and haptic-based control mechanisms, for example a steering wheel and pedals or a helicopter cyclic, to provide a number of immersive scenarios for driving/flying training and human perception evaluation.
Statistical Methods of Model Fitting and Segmentation in Computer Vision. Electronic sensors such as cameras and lasers can provide a rich source of information about the position, shape, and motion of objects around us. However, to extract this information in a reliable, automatic, and accurate way requires a sophisticated statistical theory of the process. Example applications include: video surveillance (better automatic detection of moving people and vehicles and of characterising what those people and vehicles are doing), industrial prototyping and inspection (measuring the size and shape of objects), urban planning (laser scanning streetscapes to create computer models of cities), entertainment industry (movie special effects and games), etc.
Learning to see in 3D. The project aims to endow machine vision with an ability we, as humans, use almost constantly: to judge 3D properties from a 2D image. This extremely useful ability will be applied to digital images to obtain 3D measurements and aid in automating tasks such as mining, surveying, medical diagnosis, and visual effects in movies.
Recognising and reconstructing objects in real time from a moving camera. This project will use a moving camera to estimate the three-dimensional shape and identity of objects and surfaces it can see. This ability, which we humans use all the time, has wide application in automation including driver assistance, exploring hazardous environments, robotics, remote collaboration, and the creation of three-dimensional models for entertainment.
Optimal Robust Fitting under the Framework of LP-Type Problems. The project aims to develop algorithms to support the development of robust and accurate computer vision systems. Real-world visual data (images, videos) is inherently noisy and outlier prone. To build computer vision systems that work reliably in the real world, it is necessary to ensure that the underlying algorithms are robust and efficient. The project aims to devise novel algorithms that can compute the best possible result given the input data in a short amount of time. The expected outcomes would support the construction of reliable and accurate computer vision-based systems, such as large-scale 3-D reconstruction from photo collections, self-driving cars and domestic robots.
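The robust-fitting problem this project targets can be illustrated with a classic consensus-maximisation sketch: fit a line to data contaminated by gross outliers by keeping the hypothesis that agrees with the most points (a RANSAC-style baseline, shown here purely for illustration; the project's algorithms are not reproduced here).

```python
import random

def fit_line(p1, p2):
    # Exact line through two points: y = a*x + b.
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def ransac_line(points, n_iters=200, tol=0.5, seed=0):
    # Consensus maximisation: keep the line agreeing with the most points.
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        p1, p2 = rng.sample(points, 2)
        if p1[0] == p2[0]:
            continue  # vertical pair: cannot fit y = a*x + b
        a, b = fit_line(p1, p2)
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) <= tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten inliers on y = 2x + 1, plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -25), (5, 60)]
(a, b), n_in = ransac_line(pts)
print(round(a, 2), round(b, 2), n_in)  # → 2.0 1.0 10
```

An ordinary least-squares fit on the same data would be dragged far off the true line by the three outliers; the consensus criterion ignores them. Optimal methods, unlike this randomised baseline, guarantee the globally best consensus.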
Australian Laureate Fellowships - Grant ID: FL130100102
Funder
Australian Research Council
Funding Amount
$3,179,946.00
Summary
Lifelong computer vision systems. This project will create a computer vision system that can produce a detailed environmental map in real time, turning standard video cameras into sensors that 'understand' a scene with basic semantic tools. This high-level sensing will unlock a wide range of applications for autonomous systems.
Whole image understanding by convolutions on graphs. This project seeks to develop technologies that will help computer vision interpret the whole visible scene, rather than just some of the objects therein. Existing automated methods for understanding images perform well at recognising specific objects in canonical poses, but the problem of whole image interpretation is far more challenging. Convolutional neural networks (CNN) have underpinned recent progress in object recognition, but whole-image understanding cannot be tackled similarly because the number of possible combinations of objects is too large. The project thus proposes a graph-based generalisation of the CNN approach which allows scene structure to be learned explicitly. This would represent an important step towards providing computers with robust vision, allowing them to interact with their environment.
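The core idea of generalising convolution from image grids to graphs can be sketched in a few lines: instead of a fixed pixel neighbourhood, each node aggregates features from its graph neighbours. This toy averaging step is only an illustration of the principle; the project's actual learned graph convolutions are not shown here.

```python
# A toy "convolution on a graph": each node's new feature is the mean of
# its own feature and its neighbours' features (one message-passing step).
def graph_conv_step(features, adjacency):
    new_features = []
    for node, feat in enumerate(features):
        neighbourhood = [feat] + [features[j] for j in adjacency[node]]
        new_features.append(sum(neighbourhood) / len(neighbourhood))
    return new_features

# Triangle graph (nodes 0-1-2) plus a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = [1.0, 0.0, 0.0, 4.0]
print(graph_conv_step(feats, adj))
```

In a learned version, the average is replaced by a weighted sum with trainable parameters and a nonlinearity, and several such steps are stacked so that information about scene structure propagates across the graph.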
Discovery Early Career Researcher Award - Grant ID: DE170101259
Funder
Australian Research Council
Funding Amount
$360,000.00
Summary
Zero-shot and few-shot learning with deep knowledge transfer. This project aims to develop few-shot and zero-shot learning: visual recognition techniques that can learn a visual concept from few or no visual examples. Visual recognition is a major component of Artificial Intelligence and is used in cybernetic security, robotic vision and medical image analysis. This project will use deep learning to enable zero/few-shot learning to use and model previously unexplored information, making zero/few-shot learning more practical, scalable and flexible. The project is expected to advance the applicability of visual recognition in many challenging scenarios and provide effective tools to analyse online visual data in support of Australia's cybernetic security.
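One common way zero-shot recognition transfers knowledge is through shared attribute descriptions: a class with no training images can still be matched if its attributes are known. The sketch below is a deliberately simplified illustration of that idea; the attribute names, the nearest-neighbour rule, and the "predicted" vector are all hypothetical, not the project's method.

```python
# Toy zero-shot classification: classes are described by attribute vectors
# (e.g. derived from text), so a class with no training images ("zebra")
# can still be recognised by matching predicted attributes to descriptions.
def nearest_class(image_attrs, class_attrs):
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(class_attrs, key=lambda c: dist(image_attrs, class_attrs[c]))

# Attribute space: (has_stripes, has_four_legs, can_fly).
classes = {
    "zebra": (1.0, 1.0, 0.0),  # unseen at training time
    "horse": (0.0, 1.0, 0.0),
    "eagle": (0.0, 0.0, 1.0),
}
# Attributes predicted from an image by a (hypothetical) visual model.
predicted = (0.9, 0.8, 0.1)
print(nearest_class(predicted, classes))  # → zebra
```

Deep knowledge transfer replaces the hand-written attribute table and nearest-neighbour rule with learned embeddings, but the transfer mechanism is analogous.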