ORCID Profile
0000-0003-3631-256X
Current Organisation
University of Adelaide
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Computer Vision | Artificial Intelligence and Image Processing | Adaptive Agents and Intelligent Robotics | Plant Production and Plant Primary Products not elsewhere classified | Computer Software and Services not elsewhere classified | Manufacturing not elsewhere classified | Information Processing Services (incl. Data Entry and Capture)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2019
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer International Publishing
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Association for Computing Machinery (ACM)
Date: 25-08-2023
DOI: 10.1145/3605781
Abstract: Visual relational reasoning is the basis of many vision-and-language tasks (e.g., visual question answering and referring expression comprehension). In this article, we regard the complex referring expression comprehension (c-REF) task as the reasoning basis, in which c-REF seeks to localise a target object in an image guided by a complex query. Such queries often contain complex logic and thus impose two critical challenges for reasoning: (i) comprehending the complex queries is difficult, since they usually refer to multiple objects and their relationships; (ii) reasoning among multiple objects guided by the queries and then correctly localising the target is non-trivial. To address these challenges, we propose a Transformer-based Relational Inference Network (Trans-RINet). Specifically, to comprehend the queries, we mimic the language-comprehension mechanism of humans and devise a language decomposition module that decomposes the queries into four types, i.e., basic attributes, absolute location, visual relationship and relative location. We further devise four modules to process the corresponding information. In each module, we consider the intra-modality (i.e., between the objects) and inter-modality (i.e., between the queries and objects) relationships to improve the reasoning ability. Moreover, we construct a relational graph to represent the objects and their relationships, and devise a multi-step reasoning method to progressively understand the complex logic. Since the four query types are closely related, we let the modules interact with one another before making a decision. Extensive experiments on the CLEVR-Ref+, Ref-Reasoning, and CLEVR-CoGenT datasets demonstrate the superior reasoning performance of our Trans-RINet.
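The multi-step reasoning over a relational graph described in the abstract can be illustrated as a simple score-propagation loop: each object holds a query-match score, and relational evidence from neighbouring objects is mixed in over several steps. This is only a toy sketch under assumed representations (the function name, the soft adjacency matrix, and the per-step mixing weights are all illustrative inventions); the actual Trans-RINet uses Transformer modules, not this arithmetic.

```python
def multi_step_reasoning(scores, relations, step_weights):
    """Progressively refine per-object grounding scores.

    scores:       initial match score between each object and the query.
    relations:    relations[i][j] is the (assumed) strength of the
                  query-mentioned relationship from object i to object j.
    step_weights: per-step mixing weight between an object's own score
                  and the evidence gathered from its neighbours.
    """
    s = list(scores)
    n = len(s)
    for w in step_weights:
        # Gather relational support for each object from its neighbours.
        evidence = [sum(relations[i][j] * s[j] for j in range(n)) for i in range(n)]
        # Mix each object's own score with the propagated evidence.
        s = [(1.0 - w) * s[i] + w * evidence[i] for i in range(n)]
        # Keep scores bounded so repeated steps stay stable.
        m = max(abs(v) for v in s) or 1.0
        s = [v / m for v in s]
    return s
```

With this sketch, a target object with weak direct evidence can overtake a distractor once relational support flows in over the reasoning steps, which mirrors how a complex query ("the cube left of the red sphere") is resolved progressively rather than in one shot.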
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Association for Computing Machinery (ACM)
Date: 26-10-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Wiley
Date: 06-10-2017
DOI: 10.1002/brb3.850
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2018
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2018
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Elsevier BV
Date: 09-2019
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2011
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 05-2019
DOI: 10.1016/j.media.2019.02.010
Abstract: The classification of medical images is an essential task in computer-aided diagnosis, medical image retrieval and mining. Although deep learning has shown proven advantages over traditional methods that rely on handcrafted features, it remains challenging due to the significant intra-class variation and inter-class similarity caused by the diversity of imaging modalities and clinical pathologies. In this paper, we propose a synergic deep learning (SDL) model to address this issue by using multiple deep convolutional neural networks (DCNNs) simultaneously and enabling them to mutually learn from each other. Each pair of DCNNs has its learned image representations concatenated as the input of a synergic network, which has a fully connected structure that predicts whether the pair of input images belong to the same class. Thus, if one DCNN makes a correct classification, a mistake made by the other DCNN leads to a synergic error that serves as an extra force to update the model. This model can be trained end-to-end under the supervision of classification errors from the DCNNs and synergic errors from each pair of DCNNs. Our experimental results on the ImageCLEF-2015, ImageCLEF-2016, ISIC-2016, and ISIC-2017 datasets indicate that the proposed SDL model achieves state-of-the-art performance in these medical image classification tasks.
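The "synergic error" idea in the abstract can be illustrated with a minimal sketch of the supervision signal: the synergic network predicts whether an image pair shares a class, and its training target is derived from the pair's labels. The function names and the use of plain binary cross-entropy are assumptions for illustration only; the actual SDL model trains the DCNN pairs and the synergic network end-to-end.

```python
import math

def synergic_target(label_a, label_b):
    """Target for the synergic network: 1.0 if the image pair shares a class."""
    return 1.0 if label_a == label_b else 0.0

def binary_cross_entropy(p, t):
    """Standard binary cross-entropy; eps guards against log(0)."""
    eps = 1e-12
    return -(t * math.log(p + eps) + (1.0 - t) * math.log(1.0 - p + eps))

def synergic_loss(pred_same_prob, label_a, label_b):
    """Loss on the synergic network's same-class prediction.

    If one DCNN classifies correctly and the other errs, the concatenated
    representations tend to produce a wrong same-class prediction; this
    term then supplies the extra corrective gradient the abstract calls a
    'synergic error'."""
    return binary_cross_entropy(pred_same_prob, synergic_target(label_a, label_b))
```

A confident same-class prediction is cheap when the labels agree and expensive when they do not, which is exactly the "extra force" the abstract describes layered on top of each DCNN's own classification loss.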
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2020
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 2019
End Date: 12-2023
Amount: $408,000.00
Funder: Australian Research Council
View Funded Activity
Start Date: 07-2014
End Date: 03-2021
Amount: $19,000,000.00
Funder: Australian Research Council
View Funded Activity