ORCID Profile
0000-0002-8515-6324
Current Organisation
Queensland University of Technology
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Computer Vision | Artificial Intelligence and Image Processing | Pattern Recognition and Data Mining | Image Processing | Simulation and Modelling | Signal Processing | Ore Deposit Petrology | Design Innovation | Geochemistry | Statistics | Infrastructure Engineering and Asset Management | Civil Engineering | Exploration Geochemistry | Decision Support and Group Support Systems | Knowledge Representation and Machine Learning | Geochronology | Calculus of Variations, Systems Theory and Control Theory | Applied Statistics | Interdisciplinary Engineering Not Elsewhere Classified | Interdisciplinary Engineering | Biomechanics
Expanding Knowledge in the Information and Computing Sciences | National Security | Information and Communication Services not elsewhere classified | Air Terminal Infrastructure and Management | Management and productivity issues not elsewhere classified | Computer Software and Services not elsewhere classified | Intelligence | Integrated systems | Air transport | Expanding Knowledge in Technology | Application Software Packages (excl. Computer Games) | Evaluation of Health Outcomes | Copper Ore Exploration | Expanding Knowledge in the Mathematical Sciences
Publisher: IEEE
Date: 03-2017
DOI: 10.1109/WACV.2017.62
Publisher: IEEE
Date: 05-2010
Publisher: Elsevier BV
Date: 12-2015
Publisher: Association for Computing Machinery (ACM)
Date: 13-07-2023
DOI: 10.1145/3587931
Abstract: With advances in data-driven machine learning research, a wide variety of prediction models have been proposed to capture spatio-temporal features for the analysis of video streams. Recognising actions and detecting action transitions within an input video are challenging but necessary tasks for applications that require real-time human-machine interaction. By reviewing a large body of recent related work in the literature, we thoroughly analyse, explain, and compare action segmentation methods and provide details on the feature extraction and learning strategies used in most state-of-the-art methods. We cover the impact of the performance of object detection and tracking techniques on human action segmentation methodologies. We investigate the application of such models to real-world scenarios and discuss several limitations and key research directions towards improving interpretability, generalisation, optimisation, and deployment.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2023
Publisher: IEEE
Date: 08-2011
Publisher: IEEE
Date: 11-2013
Publisher: IEEE
Date: 05-2010
Publisher: IEEE
Date: 11-2013
Publisher: SPIE
Date: 04-05-2004
DOI: 10.1117/12.536984
Publisher: Trans Tech Publications, Ltd.
Date: 06-2014
DOI: 10.4028/WWW.SCIENTIFIC.NET/AMM.568-570.1859
Abstract: Passenger experience has become a major factor that influences the success of an airport. In this context, passenger flow simulation has been used in designing and managing airports. However, most passenger flow simulations fail to consider group dynamics when developing passenger flow models. In this paper, an agent-based model is presented to simulate passenger behaviour during the airport check-in and evacuation processes. The simulation results show that passenger behaviour can have a significant influence on the performance and utilisation of services in airport terminals. The model was created using AnyLogic software and its parameters were initialised using recent research data published in the literature.
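The agent-based approach described in this abstract can be illustrated in a few lines. Below is a minimal, hypothetical event-driven check-in model, not the authors' AnyLogic implementation; the desk count, service time, and group-size handling are illustrative assumptions.

```python
import random

class Passenger:
    """A minimal agent: arrives at the terminal, queues, is served at check-in."""
    def __init__(self, arrival_time, group_size=1):
        self.arrival_time = arrival_time
        self.group_size = group_size  # groups check in together (assumed)

def simulate_checkin(passengers, n_desks=3, service_time=2.0):
    """Event-driven check-in: each desk has a known time at which it becomes
    free; each passenger (or group) takes the earliest-free desk."""
    desk_free = [0.0] * n_desks
    finish_times = []
    for p in sorted(passengers, key=lambda p: p.arrival_time):
        desk = min(range(n_desks), key=lambda d: desk_free[d])
        start = max(p.arrival_time, desk_free[desk])
        # assumption: service time scales with group size
        desk_free[desk] = start + service_time * p.group_size
        finish_times.append(desk_free[desk])
    return max(finish_times)  # time at which the last passenger clears check-in

random.seed(0)
passengers = [Passenger(arrival_time=random.uniform(0, 60),
                        group_size=random.choice([1, 1, 1, 2, 4]))
              for _ in range(100)]
print(simulate_checkin(passengers, n_desks=3))
```

Varying `group_size` against a groups-ignored baseline (all agents independent) is the kind of comparison the abstract describes.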
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2021
Publisher: Elsevier BV
Date: 09-2022
Publisher: Trans Tech Publications, Ltd.
Date: 03-2010
DOI: 10.4028/WWW.SCIENTIFIC.NET/AMR.97-101.2940
Abstract: This paper demonstrates the validity of a Gabor filter bank for feature extraction from solder joint images on Printed Circuit Boards (PCBs). A distance measure based on the Mahalanobis Cosine metric is also presented for classification of five different types of solder joints. The experimental results show that this methodology achieves high accuracy and well-generalised performance. This can be an effective method to reduce cost and improve quality in the production of PCBs in the manufacturing industry.
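The two ingredients named here, a Gabor filter bank and a Mahalanobis-cosine distance, can be sketched as follows. The filter parameters and the mean-energy feature are illustrative assumptions; the paper's exact bank design and five-class training procedure are not reproduced.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5):
    """Real part of a 2-D Gabor kernel (standard parameterisation)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lambd))

def gabor_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """One energy feature per orientation: mean magnitude of the filter
    response (circular convolution via FFT, a border-handling simplification)."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(15, sigma=3.0, theta=theta, lambd=8.0)
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        feats.append(resp.mean())
    return np.array(feats)

def mahalanobis_cosine(u, v, cov):
    """Cosine distance measured in the whitened (Mahalanobis) space."""
    w = np.linalg.inv(np.linalg.cholesky(cov))  # whitening transform, cov = L L^T
    uw, vw = w @ u, w @ v
    return 1.0 - uw @ vw / (np.linalg.norm(uw) * np.linalg.norm(vw))
```

With an identity covariance, `mahalanobis_cosine` reduces to the ordinary cosine distance; the covariance would be estimated from training features in practice.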
Publisher: IEEE
Date: 10-2011
Publisher: ISCA
Date: 08-09-2016
Publisher: IEEE
Date: 11-2016
Publisher: IEEE
Date: 12-2011
Publisher: Elsevier BV
Date: 08-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2019
Publisher: IEEE
Date: 12-2010
Publisher: Elsevier BV
Date: 07-2008
Publisher: Springer International Publishing
Date: 03-11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Elsevier BV
Date: 07-2023
Publisher: World Scientific Pub Co Pte Lt
Date: 11-2003
DOI: 10.1142/S0218001403002800
Abstract: Image registration plays a crucial role in the computer vision and medical imaging field where it is used to develop a spatial mapping between different sets of data. These transformations can range from simple rigid registrations to complex nonrigid deformations. Mutual information (MI) is a popular entropy-based similarity measure which has recently experienced a prolific expansion in a number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications as it only assumes a statistical dependence between images. This paper provides a thorough introduction to the MI measure and its use in rigid medical image registration. A look at the extensions proposed to the original measure will also be provided. These were developed to improve the robustness of the measure and to avoid certain cases when maximizing MI does not lead to the correct spatial alignment.
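The MI measure surveyed here is compact to compute from a joint intensity histogram. The sketch below pairs it with a brute-force integer-shift search as a toy rigid registration; the bin count, shift range, and exhaustive search are illustrative simplifications, not the paper's method.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two equally-sized images, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(fixed, moving, max_shift=3):
    """Exhaustive search for the integer translation maximising MI."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best
```

Because MI only assumes a statistical dependence between intensities, the same search works when `moving` is an arbitrary monotonic (or even non-monotonic) remapping of `fixed`, which is the multimodal case the abstract highlights.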
Publisher: Hindawi Limited
Date: 12-12-2013
DOI: 10.1155/2013/261956
Abstract: Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and an improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation: only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time), than a localised sparse bundle adjustment approach.
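The linear triangulation step added at the intersection stage is the standard DLT construction: stack the cross-product constraints from each view and take the null vector. A two-view sketch (the camera matrices and points here are illustrative assumptions, not the paper's data):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # homogeneous least squares: null vector = right singular vector
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise
```

In a resection-intersection loop, this closed-form step replaces iterative refinement of each point given the current cameras, which is what yields the improved convergence the abstract reports.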
Publisher: Elsevier BV
Date: 12-2015
Publisher: IEEE
Date: 08-2014
Publisher: Elsevier BV
Date: 10-2011
Publisher: IEEE
Date: 11-2013
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2020
Publisher: IEEE
Date: 03-2014
Publisher: IEEE
Date: 10-2017
Publisher: IEEE
Date: 03-2018
Publisher: Elsevier BV
Date: 06-2018
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: ISCA
Date: 08-09-2016
Publisher: IEEE
Date: 12-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2019
Publisher: Cambridge University Press (CUP)
Date: 07-07-2011
DOI: 10.1017/S0033291711001073
Abstract: It is not known whether first-episode psychosis is characterized by the same prefrontal cortex functional imaging abnormalities as chronic schizophrenia. Thirty patients with a first episode of non-affective functional psychosis and 28 healthy controls underwent functional magnetic resonance imaging (fMRI) during performance of the n-back working memory task. Voxel-based analyses of brain activations and deactivations were carried out and compared between groups. The connectivity of regions of significant difference between the patients and controls was also examined. The first-episode patients did not show significant prefrontal hypo- or hyperactivation compared to controls. However, they showed failure of deactivation in the medial frontal cortex. This area showed high levels of connectivity with the posterior cingulate gyrus/precuneus and parts of the parietal cortex bilaterally. Failure of deactivation was significantly greater in first-episode patients who had or went on to acquire a DSM-IV diagnosis of schizophrenia than in those who did not, and in those who met RDC criteria for schizophrenia compared to those who did not. First-episode psychosis is not characterized by hypo- or hyperfrontality but instead by a failure of deactivation in the medial frontal cortex. The location and connectivity of this area suggest that it is part of the default mode network. The failure of deactivation seems to be particularly marked in first-episode patients who have, or progress to, schizophrenia.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2020
Publisher: Elsevier BV
Date: 07-2014
Publisher: IEEE
Date: 11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2015
Publisher: IEEE
Date: 04-2015
Publisher: Elsevier BV
Date: 10-2013
Publisher: IEEE
Date: 09-2009
DOI: 10.1109/AVSS.2009.32
Publisher: IGI Global
Date: 2010
DOI: 10.4018/978-1-60566-725-6.CH012
Abstract: This chapter describes the use of visual attention characteristics as a biometric for authentication or identification of individual viewers. The visual attention characteristics of a person can be easily monitored by tracking the gaze of a viewer during the presentation of a known or unknown visual scene. The positions and sequences of gaze locations during viewing may be determined by overt (conscious) or covert (subconscious) viewing behaviour. Methods to quantify the spatial and temporal patterns established by the viewer for both overt and covert behaviours are proposed. The former behaviour entails a simple PIN-like approach to develop an independent signature, while the latter behaviour is captured through three proposed techniques: a principal component analysis technique (‘eigenGaze’), a linear discriminant analysis technique, and a fusion of distance measures. Experimental results suggest that both types of gaze behaviours can provide simple and effective biometrics for this application.
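The ‘eigenGaze’ technique follows the classic eigen-decomposition recipe: PCA over fixed-length gaze vectors, then nearest-neighbour matching in the subspace. A sketch on synthetic data; the feature encoding, component count, and gallery structure are assumptions for illustration.

```python
import numpy as np

def fit_eigengaze(gaze_matrix, n_components=4):
    """PCA over fixed-length gaze vectors (one row per viewing session)."""
    mean = gaze_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(gaze_matrix - mean, full_matrices=False)
    return mean, vt[:n_components]  # mean gaze + principal 'eigenGaze' axes

def project(vec, mean, components):
    """Coordinates of one gaze vector in eigenGaze space."""
    return components @ (vec - mean)

def identify(query, gallery, mean, components):
    """Nearest neighbour in eigenGaze space; gallery maps viewer -> enrolled vector."""
    q = project(query, mean, components)
    return min(gallery, key=lambda v: np.linalg.norm(
        project(gallery[v], mean, components) - q))

rng = np.random.RandomState(0)
a_base, b_base = np.zeros(10), np.full(10, 5.0)  # two synthetic viewers
train = np.vstack([a_base + 0.1 * rng.randn(10) for _ in range(3)]
                  + [b_base + 0.1 * rng.randn(10) for _ in range(3)])
mean, comps = fit_eigengaze(train, n_components=2)
```

Identification then reduces to projecting a fresh gaze trajectory and reporting the closest enrolled viewer.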
Publisher: IEEE
Date: 08-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2021
Publisher: IEEE
Date: 11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2022
Publisher: IEEE
Date: 2004
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 2003
Publisher: Elsevier BV
Date: 07-2020
Publisher: Elsevier BV
Date: 07-2016
Publisher: IEEE
Date: 04-2013
Publisher: Elsevier BV
Date: 06-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2016
Publisher: IEEE
Date: 12-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2023
Publisher: IEEE
Date: 06-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2019
Publisher: IEEE
Date: 08-2010
DOI: 10.1109/AVSS.2010.16
Publisher: IEEE
Date: 2008
Publisher: IEEE
Date: 05-2010
Publisher: IEEE
Date: 2009
Publisher: IEEE
Date: 11-2006
DOI: 10.1109/AVSS.2006.7
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 2009
Publisher: ACM
Date: 17-10-2012
Publisher: IEEE
Date: 2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2015
Publisher: IEEE
Date: 09-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 09-2016
Publisher: Association for Computing Machinery (ACM)
Date: 23-05-2016
DOI: 10.1145/2906148
Abstract: With recent advances in consumer electronics and the increasingly urgent need for public security, camera networks have evolved from their early role of providing simple and static monitoring to current complex systems capable of obtaining extensive video information for intelligent processing, such as target localization, identification, and tracking. In all cases, it is of vital importance that the optimal camera configuration (i.e., optimal location, orientation, etc.) is determined before cameras are deployed, as a suboptimal placement solution will adversely affect intelligent video surveillance and video analytic algorithms. The optimal configuration may also provide substantial savings on the total number of cameras required to achieve the same level of utility. In this article, we examine most, if not all, of the recent approaches (post 2000) addressing camera placement in a structured manner. We believe that our work can serve as a first point of entry for readers wishing to start researching this area or engineers who need to design a camera system in practice. To this end, we attempt to provide a complete study of relevant formulation strategies and brief introductions to the optimization techniques most commonly used by researchers in this field. We hope our work will inspire new ideas in the field.
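One common formulation of camera placement surveyed in this line of work casts it as a coverage problem: each candidate pose covers a set of targets, and a budget of k cameras should cover as many targets as possible. A minimal greedy sketch (the candidate/target encoding is an illustrative assumption; visibility sets would come from a geometric model in practice):

```python
def greedy_camera_placement(candidates, k):
    """Choose up to k candidate poses greedily to maximise target coverage.
    candidates: {pose_id: set of target ids that pose covers}. Greedy
    selection carries the classic (1 - 1/e) guarantee for this
    submodular coverage objective."""
    chosen, covered = [], set()
    for _ in range(k):
        # pick the pose with the largest marginal gain in coverage
        pose = max(candidates, key=lambda p: len(candidates[p] - covered))
        gain = candidates[pose] - covered
        if not gain:
            break  # no remaining pose adds coverage
        chosen.append(pose)
        covered |= gain
    return chosen, covered
```

Richer formulations in the survey add orientation, resolution, and occlusion constraints, but they typically reduce to (weighted) variants of this coverage objective.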
Publisher: SPIE
Date: 12-05-2004
DOI: 10.1117/12.536828
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 05-2010
Publisher: IEEE
Date: 06-2014
Publisher: IEEE
Date: 11-2006
DOI: 10.1109/AVSS.2006.60
Publisher: No publisher found
Date: 2011
Publisher: IEEE
Date: 09-2012
DOI: 10.1109/AVSS.2012.6
Publisher: IEEE
Date: 05-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: IEEE
Date: 03-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 2009
Publisher: Elsevier BV
Date: 06-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: IEEE
Date: 03-2016
Publisher: IEEE
Date: 11-2010
Publisher: IEEE
Date: 08-2009
Publisher: IEEE
Date: 11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Trans Tech Publications, Ltd.
Date: 06-2014
DOI: 10.4028/WWW.SCIENTIFIC.NET/AMM.568-570.1893
Abstract: In this paper we implemented six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By Letter) in order to minimize boarding time and turnaround time for Boeing 777 and Airbus 380 aircraft using an agent-based modelling approach. In the simulation, we divided passengers into six different categories: groups of more than 5 people, passengers with a child, gold members, first class passengers, business class passengers and economy class passengers. Results from the simulation demonstrate that the Reverse Pyramid method is the best boarding method for the Boeing 777 and the Steffen method is the best boarding method for the Airbus 380.
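Each boarding strategy is ultimately just an ordering over seats that the simulation feeds to its agents. A simplified sketch of the Wilma (window-middle-aisle) ordering on an assumed single-aisle, six-abreast cabin; the paper's twin-aisle 777/A380 layouts and the other five strategies are not reproduced here.

```python
def wilma_order(rows, seats_per_side=3):
    """Wilma: window seats first, then middle, then aisle, back to front
    within each wave. Seats are (row, letter) with letters A..F and the
    outermost letters on the windows -- a simplified single-aisle sketch."""
    letters = "ABCDEF"[:2 * seats_per_side]
    waves = [(letters[i], letters[-1 - i]) for i in range(seats_per_side)]
    order = []
    for pair in waves:                      # window wave, then middle, then aisle
        for row in range(rows, 0, -1):      # back to front within each wave
            for letter in pair:
                order.append((row, letter))
    return order
```

Plugging different orderings (random, block-wise, Steffen's alternating scheme) into the same agent model is what allows the head-to-head boarding-time comparison the abstract describes.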
Publisher: Wiley
Date: 09-10-2017
DOI: 10.1111/EPI.13907
Abstract: Epilepsy is one of the most prevalent neurological disorders, affecting approximately 50 million people worldwide, and with almost 30-40% of patients with partial epilepsy being nonresponsive to medication, epilepsy surgery is widely accepted as an effective therapeutic option. Presurgical evaluation has advanced significantly using noninvasive techniques based on video monitoring, neuroimaging, and electrophysiological and neuropsychological tests; however, certain clinical settings call for invasive intracranial recordings such as stereoelectroencephalography (SEEG), aiming to accurately map the eloquent brain networks involved during a seizure. Most current presurgical evaluation procedures focus on semiautomatic techniques, where surgical diagnosis relies heavily on neurologists' experience and their time-consuming subjective interpretation of semiology (the manifestations of epilepsy) and its correlation with the brain's electrical activity. Because surgical misdiagnosis reaches a rate of 30%, and more than one-third of all epilepsies are poorly understood, there is keen interest in improving diagnostic precision using computer-based methodologies that in the past few years have shown near-human performance. Among them, deep learning has excelled in many biological and medical applications but has advanced insufficiently in epilepsy evaluation and automated understanding of the neural bases of semiology. In this paper, we systematically review automatic applications in epilepsy for human motion analysis, brain electrical activity, and the anatomoelectroclinical correlation used to attribute anatomical localization of the epileptogenic network to distinctive epilepsy patterns. Notably, recent advances in deep learning techniques are investigated in the context of epilepsy to address the challenges exhibited by traditional machine learning techniques.
Finally, we discuss and propose future research on epilepsy surgery assessment that can jointly learn across visually observed semiologic patterns and recorded brain electrical activity.
Publisher: Elsevier BV
Date: 07-2010
Publisher: IEEE
Date: 03-2017
DOI: 10.1109/WACV.2017.27
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2012
Publisher: IEEE
Date: 04-2015
Publisher: Elsevier BV
Date: 10-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2021
Publisher: IEEE
Date: 09-2015
Publisher: No publisher found
Date: 2011
Publisher: IEEE
Date: 2009
Publisher: Springer Science and Business Media LLC
Date: 28-09-2012
Publisher: Institution of Engineering and Technology (IET)
Date: 12-03-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2022
Publisher: IEEE
Date: 11-2010
Publisher: IEEE
Date: 11-2016
Publisher: IEEE
Date: 09-2009
DOI: 10.1109/AVSS.2009.8
Publisher: IEEE
Date: 12-2011
Publisher: IEEE
Date: 12-2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Elsevier BV
Date: 2015
Publisher: SAGE Publications
Date: 09-2013
Abstract: Collisions between pedestrians and vehicles continue to be a major problem throughout the world. Pedestrians trying to cross roads and railway tracks without any caution are often highly susceptible to collisions with vehicles and trains. Continuous financial, human and other losses have prompted transport-related organizations to come up with various solutions addressing this issue. However, the quest for new and significant improvements in this area is still ongoing. This work addresses the issue by building a general framework using computer vision techniques to automatically monitor pedestrian movements in such high-risk areas, enabling better analysis of activity and the creation of future alerting strategies. As a result of rapid development in the electronics and semiconductor industry, there is extensive deployment of CCTV cameras in public places to capture video footage. This footage can then be used to analyse crowd activities in those particular places. This work seeks to identify the abnormal behaviour of individuals in video footage. We propose using a Semi-2D Hidden Markov Model (HMM), Full-2D HMM and Spatial HMM to model the normal activities of people. The outliers of the model (i.e. those observations with insufficient likelihood) are identified as abnormal activities. Location features, flow features and optical flow textures are used as the features for the model. The proposed approaches are evaluated using the publicly available UCSD datasets, and we demonstrate improved performance using a Semi-2D Hidden Markov Model compared to other state-of-the-art methods. Further, we illustrate how our proposed methods can be applied to detect anomalous events at rail level crossings.
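The likelihood-based outlier test described here, i.e. flag observations the normal-activity model assigns insufficient likelihood, can be illustrated with an ordinary discrete-observation HMM and the scaled forward algorithm. The model sizes, symbols, and threshold below are illustrative; the paper's Semi-2D/Full-2D/Spatial HMMs over flow features are not reproduced.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling.
    pi: initial state probs; A: transition matrix; B[s, o]: emission probs."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def is_abnormal(obs, pi, A, B, threshold):
    """Flag sequences whose per-frame log-likelihood falls below threshold."""
    return forward_loglik(obs, pi, A, B) / len(obs) < threshold

# Toy 2-state model: each state strongly prefers its own symbol and persists.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
```

A smooth sequence (long runs of one symbol) scores well under this model, while a rapidly alternating one scores poorly and would be flagged as anomalous.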
Publisher: Institution of Engineering and Technology (IET)
Date: 2011
Publisher: IEEE
Date: 12-2012
Publisher: IEEE
Date: 12-2012
Publisher: Elsevier BV
Date: 2012
Publisher: IEEE
Date: 2004
Publisher: ACM
Date: 30-11-2011
Publisher: IEEE
Date: 03-2018
Publisher: IEEE
Date: 12-2008
Publisher: IEEE
Date: 12-2008
Publisher: ACM Press
Date: 2015
Publisher: IEEE
Date: 12-2008
Publisher: IEEE
Date: 03-2017
DOI: 10.1109/WACV.2017.14
Publisher: IEEE
Date: 10-2017
Publisher: IEEE
Date: 08-2010
DOI: 10.1109/AVSS.2010.30
Publisher: IEEE
Date: 08-2011
Publisher: Elsevier BV
Date: 03-2015
Publisher: IEEE
Date: 11-2006
Publisher: Springer International Publishing
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2014
Publisher: IEEE
Date: 11-2014
Publisher: IEEE
Date: 11-2015
Publisher: Elsevier BV
Date: 12-2013
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2021
Publisher: Springer Science and Business Media LLC
Date: 16-02-2017
Publisher: IEEE
Date: 04-2013
Publisher: IEEE
Date: 12-2012
Publisher: Elsevier BV
Date: 05-2018
DOI: 10.1016/J.YEBEH.2018.02.010
Abstract: Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, where facial movements have largely been ignored. This is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset has been collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) and is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when a frontal view of the face is available. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average AUC of 0.98 for the ROC curve. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in accuracy for the model as it is affected by data limitations, achieving an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features.
The computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery.
Publisher: IEEE
Date: 12-2011
Publisher: IEEE
Date: 03-2017
Publisher: IEEE
Date: 09-2012
DOI: 10.1109/AVSS.2012.80
Publisher: IEEE
Date: 12-2011
Publisher: IEEE
Date: 06-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2022
Publisher: IEEE
Date: 12-2011
Publisher: IEEE
Date: 06-2013
Publisher: Elsevier BV
Date: 07-2014
Publisher: IEEE
Date: 11-2006
DOI: 10.1109/AVSS.2006.78
Publisher: IEEE
Date: 12-2012
Publisher: IEEE
Date: 03-2018
Publisher: IEEE
Date: 2005
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2021
Publisher: IEEE
Date: 12-2012
Publisher: Elsevier BV
Date: 10-2015
Publisher: IEEE
Date: 12-2012
Publisher: IEEE
Date: 08-2014
Publisher: Oxford University Press (OUP)
Date: 15-03-2011
Publisher: IEEE
Date: 08-2014
Publisher: IEEE
Date: 07-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: ACM Press
Date: 2010
Publisher: IEEE
Date: 12-2012
Publisher: IGI Global
Date: 2012
DOI: 10.4018/978-1-4666-2660-7.CH011
Abstract: The time-consuming and labour-intensive task of identifying individuals in surveillance video is often challenged by poor resolution and the sheer volume of stored video. Faces or identifying marks such as tattoos are often too coarse for direct matching by machine or human vision. Object tracking and super-resolution can then be combined to facilitate the automated detection and enhancement of areas of interest. The object tracking process enables the automatic detection of people of interest, greatly reducing the amount of data for super-resolution. Smaller regions such as faces can also be tracked. A number of instances of such regions can then be utilized to obtain a super-resolved version for matching. Performance improvement from super-resolution is demonstrated using a face verification task. It is shown that there is a consistent improvement of approximately 7% in verification accuracy, using both Eigenface and Elastic Bunch Graph Matching approaches for automatic face verification, starting from faces with an eye-to-eye distance of 14 pixels. Visual improvement in image fidelity from super-resolved images over low-resolution and interpolated images is demonstrated on a small database. Current research and future directions in this area are also summarized.
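The core fusion step behind multi-frame super-resolution can be illustrated with a shift-and-add sketch: registered low-resolution frames are interleaved onto a high-resolution grid and averaged. This is a minimal sketch under the assumption of known integer sub-pixel shifts; the chapter's pipeline estimates shifts from the tracker and would add deblurring/interpolation.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Fuse registered low-res frames onto a high-res grid and average.
    shifts: known offsets on the high-res grid, each in 0..scale-1
    (assumed; real systems estimate them via registration)."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale] += frame   # place samples at their offsets
        cnt[dy::scale, dx::scale] += 1
    cnt[cnt == 0] = 1  # leave unobserved high-res cells at zero
    return acc / cnt
```

With four frames whose shifts tile the 2x2 sub-pixel grid, every high-resolution cell receives at least one sample, which is the ideal case the averaging relies on.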
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: IEEE
Date: 12-2011
Publisher: Elsevier BV
Date: 12-2017
Publisher: IEEE
Date: 10-2017
Publisher: IEEE
Date: 12-2011
Start Date: 2014
End Date: 2016
Funder: Australian Research Council
Start Date: 2017
End Date: 2019
Funder: Australian Research Council
Start Date: 2011
End Date: 2011
Funder: Australian Research Council
Start Date: 2009
End Date: 2013
Funder: Australian Research Council
Start Date: 2014
End Date: 2017
Funder: Australian Research Council
Start Date: Start date not available
End Date: End date not available
Funder: Australian Research Council
Start Date: 2011
End Date: 12-2014
Amount: $255,000.00
Funder: Australian Research Council
Start Date: 2021
End Date: 12-2024
Amount: $440,000.00
Funder: Australian Research Council
Start Date: 2014
End Date: 12-2018
Amount: $270,000.00
Funder: Australian Research Council
Start Date: 06-2015
End Date: 06-2020
Amount: $420,000.00
Funder: Australian Research Council
Start Date: 09-2017
End Date: 12-2021
Amount: $410,500.00
Funder: Australian Research Council
Start Date: 2011
End Date: 12-2015
Amount: $500,000.00
Funder: Australian Research Council
Start Date: 06-2015
End Date: 09-2018
Amount: $660,000.00
Funder: Australian Research Council
Start Date: 10-2022
End Date: 10-2025
Amount: $797,827.00
Funder: Australian Research Council
Start Date: 07-2009
End Date: 12-2013
Amount: $2,400,000.00
Funder: Australian Research Council