ORCID Profile
0000-0003-0794-527X
Current Organisation
Griffith University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Pattern Recognition and Data Mining | Artificial Intelligence and Image Processing
Expanding Knowledge in the Information and Computing Sciences | Information Processing Services (incl. Data Entry and Capture) | Expanding Knowledge in Technology
Publisher: IEEE
Date: 11-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2020
Publisher: ACM
Date: 06-11-2017
Publisher: Springer Science and Business Media LLC
Date: 19-03-2017
Publisher: Hindawi Limited
Date: 30-03-2020
DOI: 10.1155/2020/8975078
Abstract: The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. In order to alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and transfer learning. The proposed approach is verified by conducting experiments on the lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on the ten-fold cross-validation method. Compared with the conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%, while the accuracy improved by 1.75% and 2.42%, respectively, and the false positive rate decreased by 2.07% and 2.22%, respectively, in contrast to the VGG19 model and InceptionV3 convolutional neural networks. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.
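The ten-fold cross-validation protocol behind the reported average accuracy can be sketched as follows; this is a generic illustration of fold splitting and score averaging, not the paper's implementation:

```python
def kfold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder over early folds
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validated_accuracy(per_fold_acc):
    """Average the per-fold accuracies, as in k-fold evaluation."""
    return sum(per_fold_acc) / len(per_fold_acc)
```

Each fold serves once as the held-out test set while the remaining folds train the model; the reported figure is the mean over the ten held-out scores.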
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2022
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 07-2018
Abstract: Most current network representation models are learned in an unsupervised fashion, and they usually lack the capability of discrimination when applied to network analysis tasks, such as node classification. It is worth noting that label information is valuable for learning discriminative network representations. However, labels for all training nodes are always difficult or expensive to obtain, and manually labeling all nodes for training is impractical. Different sets of labeled nodes for model learning lead to different network representation results. In this paper, we propose a novel method, termed ANRMAB, to learn active discriminative network representations with a multi-armed bandit mechanism in an active learning setting. Specifically, based on the networking data and the learned network representations, we design three active learning query strategies. By deriving an effective reward scheme that is closely related to the estimated performance measure of interest, ANRMAB uses a multi-armed bandit mechanism for adaptive decision making to select the most informative nodes for labeling. The updated labeled nodes are then used for further discriminative network representation learning. Experiments are conducted on three public data sets to verify the effectiveness of ANRMAB.
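The multi-armed bandit mechanism for choosing among query strategies can be illustrated with an EXP3-style sketch; the reward scheme and three-strategy setup here are placeholders for illustration, not ANRMAB's actual design:

```python
import math
import random

def exp3_select(weights, gamma=0.1, rng=random.random):
    """Sample an arm (query strategy) from the EXP3 mixture distribution."""
    total = sum(weights)
    k = len(weights)
    probs = [(1 - gamma) * w / total + gamma / k for w in weights]
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return k - 1, probs

def exp3_update(weights, arm, reward, prob, gamma=0.1):
    """Boost the chosen arm's weight by its importance-weighted reward."""
    k = len(weights)
    weights[arm] *= math.exp(gamma * (reward / prob) / k)
    return weights
```

After each labeling round, the strategy that proposed the chosen node receives a reward tied to the observed performance gain, so better strategies are sampled more often over time.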
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Association for Computing Machinery (ACM)
Date: 23-12-2023
DOI: 10.1145/3545176
Abstract: Long documents such as academic articles and business reports have been the standard format to detail important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts to encapsulate the most important information would thus be significant in aiding the reader’s comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, and numerous studies on the challenges of extending these systems to the long document domain have emerged. In this survey, we provide a comprehensive overview of the research on long document summarization and a systematic evaluation across the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study on the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of the summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: ACM
Date: 30-04-2023
Publisher: Elsevier BV
Date: 04-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2017
Publisher: Springer Science and Business Media LLC
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 08-2019
Abstract: Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system. Existing approaches mostly capture the spatial dependency on a fixed graph structure, assuming that the underlying relation between entities is pre-determined. However, the explicit graph structure (relation) does not necessarily reflect the true dependency, and genuine relations may be missing due to the incomplete connections in the data. Furthermore, existing methods are ineffective at capturing the temporal trends, as the RNNs or CNNs employed in these methods cannot capture long-range temporal sequences. To overcome these limitations, we propose in this paper a novel graph neural network architecture, Graph WaveNet, for spatial-temporal graph modeling. By developing a novel adaptive dependency matrix and learning it through node embedding, our model can precisely capture the hidden spatial dependency in the data. With a stacked dilated 1D convolution component whose receptive field grows exponentially as the number of layers increases, Graph WaveNet is able to handle very long sequences. These two components are integrated seamlessly in a unified framework and the whole framework is learned in an end-to-end manner. Experimental results on two public traffic network datasets, METR-LA and PEMS-BAY, demonstrate the superior performance of our algorithm.
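The claim that the receptive field of stacked dilated convolutions grows exponentially with depth can be checked with a short sketch (the kernel size and dilation schedule here are illustrative, not Graph WaveNet's exact configuration):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked dilated 1D convolutions:
    each layer extends coverage by (kernel_size - 1) * dilation steps."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

With kernel size 2 and dilations doubling per layer (1, 2, 4, 8), four layers already cover 16 time steps, versus only 5 steps for the same depth without dilation.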
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2022
Publisher: Cold Spring Harbor Laboratory
Date: 19-09-2023
Publisher: Springer Science and Business Media LLC
Date: 2015
Publisher: Hindawi Limited
Date: 09-09-2018
DOI: 10.1155/2018/6157249
Abstract: Early detection and treatment are regarded as the most effective ways to prevent suicidal ideation and potential suicide attempts—two critical risk factors resulting in successful suicides. Online communication channels are becoming a new way for people to express their suicidal tendencies. This paper presents an approach to understand suicidal ideation through online user-generated content with the goal of early detection via supervised learning. Analysing users’ language preferences and topic descriptions reveals rich knowledge that can be used as an early warning system for detecting suicidal tendencies. Suicidal individuals express strong negative feelings, anxiety, and hopelessness. Suicidal thoughts may involve family and friends, and the topics they discuss cover both personal and social issues. To detect suicidal ideation, we extract several informative sets of features, including statistical, syntactic, linguistic, word embedding, and topic features, and we compare six classifiers, including four traditional supervised classifiers and two neural network models. An experimental study demonstrates the feasibility and practicability of the approach and provides benchmarks for suicidal ideation detection on the active online platforms: Reddit SuicideWatch and Twitter.
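Statistical and linguistic feature extraction of the kind described above can be sketched as below; the lexicon and feature set are toy stand-ins for illustration, not the paper's resources:

```python
def text_features(text, negative_lexicon=frozenset({"hopeless", "anxious", "alone"})):
    """A few statistical features of the kind fed to text classifiers:
    token count, average token length, and negative-lexicon hit ratio."""
    tokens = text.lower().split()
    n = len(tokens)
    if n == 0:
        return {"n_tokens": 0, "avg_len": 0.0, "neg_ratio": 0.0}
    return {
        "n_tokens": n,
        "avg_len": sum(len(t) for t in tokens) / n,
        "neg_ratio": sum(t in negative_lexicon for t in tokens) / n,
    }
```

In practice such hand-crafted features are concatenated with word-embedding and topic features before being passed to the compared classifiers.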
Publisher: Springer Science and Business Media LLC
Date: 06-08-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: ACM
Date: 27-02-2023
Publisher: IEEE
Date: 07-2018
Publisher: ACM
Date: 30-04-2023
Publisher: Elsevier BV
Date: 02-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 04-2013
Publisher: IEEE
Date: 07-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2023
Publisher: Elsevier BV
Date: 12-2023
Publisher: IEEE
Date: 06-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2019
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 08-2019
Abstract: Graph clustering is a fundamental task which discovers communities or groups in networks. Recent studies have mostly focused on developing deep learning approaches to learn a compact graph embedding, upon which classic clustering methods like k-means or spectral clustering algorithms are applied. These two-step frameworks are difficult to manipulate and usually lead to suboptimal performance, mainly because the graph embedding is not goal-directed, i.e., designed for the specific clustering task. In this paper, we propose a goal-directed deep learning approach, Deep Attentional Embedded Graph Clustering (DAEGC for short). Our method focuses on attributed graphs to sufficiently explore the two sides of information in graphs. By employing an attention network to capture the importance of the neighboring nodes to a target node, our DAEGC algorithm encodes the topological structure and node content in a graph to a compact representation, on which an inner product decoder is trained to reconstruct the graph structure. Furthermore, soft labels from the graph embedding itself are generated to supervise a self-training graph clustering process, which iteratively refines the clustering results. The self-training process is jointly learned and optimized with the graph embedding in a unified framework, to mutually benefit both components. Experimental results compared with state-of-the-art algorithms demonstrate the superiority of our method.
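The self-training step that turns soft cluster assignments into supervision targets is commonly implemented by squaring and renormalizing the assignment matrix; the sketch below shows that generic form, which may differ in detail from DAEGC's exact formulation:

```python
def target_distribution(q):
    """Sharpen soft assignments q (rows: nodes, cols: clusters) into the
    self-training target p_ij proportional to q_ij**2 / f_j,
    where f_j is the soft size of cluster j."""
    f = [sum(row[j] for row in q) for j in range(len(q[0]))]
    p = []
    for row in q:
        unnorm = [row[j] ** 2 / f[j] for j in range(len(row))]
        s = sum(unnorm)
        p.append([u / s for u in unnorm])
    return p
```

Squaring emphasizes confident assignments while dividing by cluster size discourages large clusters from absorbing everything; training then minimizes the divergence between q and p.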
Publisher: IEEE
Date: 07-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2021
Publisher: Elsevier BV
Date: 05-2019
Publisher: Springer International Publishing
Date: 2019
Publisher: ACM
Date: 03-11-2019
Publisher: IEEE
Date: 10-2010
Publisher: Elsevier BV
Date: 11-2015
Publisher: IEEE
Date: 07-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2017
Publisher: IEEE
Date: 05-2017
Publisher: ACM
Date: 03-11-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: ACM
Date: 06-11-2017
Publisher: Elsevier BV
Date: 10-2022
Publisher: Elsevier BV
Date: 09-2022
Publisher: ACM
Date: 20-08-2020
Publisher: IEEE
Date: 12-2014
DOI: 10.1109/ICDM.2014.97
Publisher: IEEE
Date: 11-2019
Publisher: Springer International Publishing
Date: 2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2015
Publisher: Cold Spring Harbor Laboratory
Date: 02-07-2023
DOI: 10.1101/2023.06.29.547138
Abstract: Reconstructing a neuron-level brain circuit network is a universally recognized formidable task. A significant impediment involves discerning the intricate interconnections among multitudinous neurons in a complex brain network. However, the majority of current methodologies rely only on learning local visual synapse features while neglecting comprehensive global topological connectivity information. In this paper, we consider the perspective of network connectivity and introduce graph neural networks to learn the topological features of brain networks. As a result, we propose the Neuronal Circuit Prediction Network (NCPNet), a simple and effective model that jointly learns node structural representations and neighborhood representations, constructing neuronal connection pair features for inferring neuron-level connections in a brain circuit network. We use a small number of connections randomly selected from a single brain circuit network as training data, expecting NCPNet to extrapolate known connections to unseen instances. We evaluated our model on the Drosophila connectome and the C. elegans worm connectome. The numerical results demonstrate that our model achieves a prediction accuracy of 91.88% for neuronal connections in the Drosophila connectome when utilizing only 5% of known connections. Similarly, under the condition of 5% known connections in C. elegans, our model achieves an accuracy of 93.79%. Additional qualitative analysis conducted on the learned representation vectors of Kenyon cells indicates that NCPNet successfully acquires meaningful features that enable the discrimination of neuronal sub-types. Our project is available at xz12119/NCPNet.
Publisher: ACM
Date: 20-08-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2019
Publisher: Springer Science and Business Media LLC
Date: 23-12-2012
Publisher: IEEE
Date: 07-2019
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 07-2018
Abstract: Network embedding aims to seek low-dimensional vector representations for network nodes by preserving the network structure. The network embedding is typically represented as continuous vectors, which imposes formidable challenges in storage and computation costs, particularly in large-scale applications. To address the issue, this paper proposes a novel discrete network embedding (DNE) for more compact representations. In particular, DNE learns short binary codes to represent each node. The Hamming similarity between two binary embeddings is then employed to approximate the ground-truth similarity. A novel discrete multi-class classifier is also developed to expedite classification. Moreover, we propose to jointly learn the discrete embedding and classifier within a unified framework to improve the compactness and discrimination of network embedding. Extensive experiments on node classification consistently demonstrate that DNE exhibits lower storage and computational complexity than state-of-the-art network embedding methods, while obtaining competitive classification results.
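The Hamming similarity between two binary codes, which DNE uses in place of expensive continuous inner products, reduces to the fraction of positions where the bits agree:

```python
def hamming_similarity(a, b):
    """Similarity between two equal-length binary codes:
    the fraction of positions where the bits agree."""
    assert len(a) == len(b), "codes must have equal length"
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Because it needs only bitwise comparisons, this similarity can be computed with XOR and popcount on packed integers, which is the source of the storage and computation savings.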
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Association for Computing Machinery (ACM)
Date: 23-03-2023
DOI: 10.1145/3570501
Abstract: Recommender systems, which capture dynamic user interest based on time-ordered user-item interactions, play a critical role in the real world. Although existing deep learning-based recommendation systems show good performance, these methods have two main drawbacks. Firstly, user interest is the consequence of the coaction of many factors. However, existing methods do not fully explore potential influence factors and ignore the user-item interaction formation process. The coarse-grained modeling patterns cannot accurately reflect complex user interest and lead to suboptimal recommendation results. Furthermore, these methods are implicit and largely operate in a black-box fashion. It is difficult to interpret their modeling processes and recommendation results. Secondly, recommendation datasets usually exhibit scale-free distributions, and some existing recommender systems take advantage of hyperbolic space to match the data distribution. But they ignore that the operations in hyperbolic space are more complex than those in Euclidean space, which further increases the difficulty of model interpretation. To tackle the above shortcomings, we propose an Explainable Hyperbolic Temporal Point Process for User-Item Interaction Sequence Generation (EHTPP). Specifically, EHTPP regards each user-item interaction as an event in hyperbolic space and employs a temporal point process framework to model the probability of event occurrence. Considering the complexity of user interest and the interpretability of the model, EHTPP explores four potential influence factors related to user interest and uses them to explicitly guide the probability calculation in the temporal point process. In order to validate the effectiveness of EHTPP, we carry out a comprehensive evaluation of EHTPP on three datasets compared with a few competitive baselines. Experimental results demonstrate the state-of-the-art performance of EHTPP.
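A temporal point process models event occurrence through a conditional intensity function; the self-exciting, exponentially decaying form below is a generic Euclidean sketch of that idea, not EHTPP's hyperbolic, factor-guided formulation:

```python
import math

def intensity(t, history, mu=0.2, alpha=0.5, beta=1.0):
    """Conditional intensity of a self-exciting temporal point process:
    a base rate mu plus exponentially decaying excitation from past events."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)
```

A higher intensity at time t means a new interaction event is more likely to occur around t; each past event raises the intensity, with its influence fading at rate beta.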
Publisher: Elsevier BV
Date: 08-2022
Publisher: ACM
Date: 18-07-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Springer Science and Business Media LLC
Date: 16-10-2018
Publisher: Elsevier BV
Date: 03-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 11-2019
Publisher: Hindawi Limited
Date: 14-09-2021
DOI: 10.1155/2021/6659695
Abstract: circRNA is a novel class of noncoding RNA with a closed-loop structure. Increasing biological experiments have shown that circRNAs play an important role in many diseases by acting as miRNA sponges to indirectly regulate the expression of miRNA target genes. Therefore, predicting associations between circRNAs and miRNAs can promote the understanding of the pathogenesis of disease. In this paper, we propose a new computational method, NECMA, based on network embedding to predict potential associations between circRNAs and miRNAs. In our method, the Gaussian interaction profile (GIP) kernel similarities of circRNAs and miRNAs are calculated based on the known circRNA-miRNA associations. Then, the circRNA-miRNA association network, circRNA GIP kernel similarity network, and miRNA GIP kernel similarity network are utilized to construct a heterogeneous network. Furthermore, a network embedding algorithm is used to extract potential features of circRNAs and miRNAs from the heterogeneous network. Finally, the associations between circRNAs and miRNAs are predicted using neighborhood-regularized logistic matrix factorization and inner products. The performance of NECMA is evaluated using ten-fold cross-validation. The results show that this method has better prediction accuracy than other state-of-the-art methods.
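The Gaussian interaction profile (GIP) kernel similarity can be sketched as follows, with the bandwidth normalized by the mean squared profile norm (a common convention in the GIP literature; the paper's exact normalization may differ):

```python
import math

def gip_kernel(profiles, gamma_prime=1.0):
    """GIP kernel: K[i][j] = exp(-gamma * ||p_i - p_j||^2), where each
    profile p_i is a binary interaction row and gamma is normalized by
    the mean squared profile norm."""
    n = len(profiles)
    mean_sq = sum(sum(x * x for x in p) for p in profiles) / n
    gamma = gamma_prime / mean_sq
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(pi, pj)))
             for pj in profiles] for pi in profiles]
```

Each row of `profiles` records which partners an entity interacts with, so two circRNAs (or miRNAs) with similar interaction patterns receive a kernel similarity close to 1.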
Publisher: ACM
Date: 26-10-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: ACM
Date: 27-02-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2022
Publisher: ACM
Date: 29-10-2012
Publisher: Springer Science and Business Media LLC
Date: 21-09-2016
Publisher: IEEE
Date: 07-2018
Publisher: Society for Industrial and Applied Mathematics
Date: 28-04-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 02-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: ACM
Date: 20-04-2020
Publisher: IEEE
Date: 07-2014
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2018
Publisher: ACM
Date: 08-2020
Publisher: Hindawi Limited
Date: 19-07-2018
DOI: 10.1155/2018/7861860
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 07-2019
Publisher: ACM
Date: 30-04-2023
Publisher: ACM
Date: 04-08-2023
Publisher: ACM
Date: 03-11-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 08-2019
Abstract: Attributed network embedding plays an important role in transferring network data into compact vectors for effective network analysis. Existing attributed network embedding models are designed either in continuous Euclidean spaces which introduce data redundancy or in binary coding spaces which incur significant loss of representation accuracy. To this end, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low bitwidth values while preserving high representation accuracy. Specifically, we formulate a new representation learning function based on matrix factorization that can jointly learn the low-bit node representations and the layer aggregation weights under the low-bit quantization constraint. Because the new learning function falls into the category of mixed integer optimization, we propose an efficient mixed-integer based alternating direction method of multipliers (ADMM) algorithm as the solution. Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model.
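The low-bit quantization constraint amounts to projecting each real-valued entry onto a small set of representable levels; the sketch below uses uniformly spaced levels in [-1, 1] as an illustrative level set, which may differ from LQANR's actual quantizer:

```python
def quantize(x, bits=2, lo=-1.0, hi=1.0):
    """Project x onto the nearest of 2**bits uniformly spaced levels
    in [lo, hi] -- the projection step under a low-bit constraint."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    k = round((min(max(x, lo), hi) - lo) / step)  # clamp, then snap to nearest level
    return lo + k * step
```

With 2-bit codes the levels are {-1, -1/3, 1/3, 1}, so each embedding entry needs only 2 bits of storage instead of a 32- or 64-bit float; the ADMM solver alternates between a continuous update and this projection.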
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Springer Science and Business Media LLC
Date: 02-06-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2018
Publisher: Springer Science and Business Media LLC
Date: 15-10-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2021
Publisher: IEEE
Date: 07-2016
Publisher: IEEE
Date: 07-2018
Publisher: IEEE
Date: 05-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: International Joint Conferences on Artificial Intelligence Organization
Date: 07-2018
Abstract: Graph embedding is an effective method to represent graph data in a low dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction errors of graph data, but they have mostly ignored the data distribution of the latent codes from the graphs, which often results in inferior embedding in real-world graph data. In this paper, we propose a novel adversarial graph embedding framework for graph data. The framework encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure. Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme. To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed. Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks.
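The inner product decoder that reconstructs the graph structure from node embeddings can be sketched as a minimal example (generic form, shared by this line of graph autoencoder work):

```python
import math

def decode(z):
    """Inner-product decoder: reconstructed edge probability
    A_hat[i][j] = sigmoid(z_i . z_j) for embedding rows z_i, z_j."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [[sigmoid(sum(a * b for a, b in zip(zi, zj))) for zj in z] for zi in z]
```

Nodes whose embeddings point in similar directions get a high predicted edge probability, so training the encoder against reconstruction loss pulls connected nodes together in the latent space.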
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2021
Publisher: ACM
Date: 30-04-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2020
Publisher: ACM
Date: 27-02-2023
Publisher: Elsevier BV
Date: 2021
Start Date: 01-2022
End Date: 01-2026
Amount: $800,000.00
Funder: Australian Research Council