ORCID Profile
0000-0002-1821-8644
Current Organisation
Deakin University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Pattern Recognition and Data Mining | Artificial Intelligence and Image Processing | Database Management | Information Processing Services (incl. Data Entry and Capture) | Electronic Information Storage and Retrieval Services
Publisher: Wiley
Date: 13-04-2020
DOI: 10.1002/CPE.5765
Abstract: With the development of internet technologies, social media, and mobile devices, short texts have become an increasingly popular medium for users to communicate with friends, search for information, and review products. Measuring the similarity between short texts is a fundamental task due to its importance in many applications, such as text retrieval, topic discovery, and event detection. However, short texts generally comprise sparse, noisy, and ambiguous information, so effectively measuring the distance between them is challenging. In this paper, we incorporate corpus-wide word co-occurrence information into document-level feature enrichment to mitigate the sparseness of short texts for distance measurement. We propose a novel context-aware weighted Biterm method for short text Distance Measurement (BDM). In BDM, we extract biterms (i.e., word pairs) from a short text corpus and exploit a biterm topic model to determine the global weights of biterms in the corpus. We then determine the local importance of a biterm in different contexts (i.e., short texts) based on the corpus-level biterm weight. The distance between two short texts is computed using the context-aware weighted biterms. Experimental results on three real-world datasets demonstrate the superior accuracy and effectiveness of the proposed BDM.
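As a toy illustration of the biterm notion described in the abstract (not the authors' BDM implementation; the whitespace tokenizer and the stop-word list are assumptions), extracting unordered co-occurring word pairs from a short text might look like:

```python
from itertools import combinations

def extract_biterms(text, stop_words=frozenset({"the", "a", "are", "to"})):
    """Extract biterms (unordered word pairs) from one short text.

    Illustrative sketch only: real biterm extraction would normally
    apply proper tokenization and a full stop-word list.
    """
    tokens = [w for w in text.lower().split() if w not in stop_words]
    # A biterm is an unordered pair of words co-occurring in the document;
    # sorting each pair makes (a, b) and (b, a) the same biterm.
    return {tuple(sorted(pair)) for pair in combinations(tokens, 2)}

biterms = extract_biterms("short texts are sparse texts")
```

In BDM these corpus-wide biterms would then be weighted globally by a biterm topic model and locally per context; the sketch covers only the extraction step.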
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Wiley
Date: 28-04-2020
DOI: 10.1002/CPE.5797
Abstract: Under sharp changes in road illumination, bad weather such as rain, snow, or fog, worn or missing lane markings, reflective water stains on the road surface, shadows cast by trees, or lane markings mixed with other signs, traditional lane marking detection algorithms suffer from missed or false detections. In this paper, a lane marking detection algorithm based on a high-precision map and multisensor fusion is proposed. The basic principle of the algorithm is to combine centimetre-level high-precision positioning with high-precision map data to detect lane markings. While high-precision maps are being generated, or in areas they do not cover, LIDAR (LIght Detection And Ranging) is used to estimate the curvature of the road to assist lane marking detection. The experimental results show that the algorithm has a lower false detection rate under bad road conditions and is robust.
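The abstract does not detail how road curvature is estimated from LIDAR data. One common approach, sketched here purely as an assumption, is to fit a quadratic to centreline points and evaluate the plane-curve formula kappa = |y''| / (1 + y'^2)^(3/2) at the vehicle position:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    S = lambda p: sum(x ** p for x in xs)
    Sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    # Augmented matrix of the normal-equation system A^T A [a,b,c]^T = A^T y.
    M = [[S(4), S(3), S(2), Sy(2)],
         [S(3), S(2), S(1), Sy(1)],
         [S(2), S(1), n,    Sy(0)]]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (M[i][3] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
    return coef  # a, b, c

def curvature_at_origin(xs, ys):
    """Curvature of the fitted quadratic at x = 0."""
    a, b, _ = fit_quadratic(xs, ys)
    return abs(2 * a) / (1 + b * b) ** 1.5
```

For points on y = 0.5*x^2 the estimated curvature at the origin is 1.0, the exact analytic value; real LIDAR input would first require ground segmentation and centreline extraction, which are outside this sketch.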
Publisher: Wiley
Date: 07-04-2020
DOI: 10.1002/CPE.5764
Abstract: Human emotions can be recognized from facial expressions captured in videos. This is a growing research area in which many have attempted to improve video emotion detection in both lab-controlled and unconstrained environments. While existing methods show decent recognition accuracy on lab-controlled datasets, they deliver much lower accuracy in real-world uncontrolled environments, where a variety of challenges must be addressed, such as variations in illumination, head pose, and individual appearance. Moreover, automatically identifying the key frames containing the expression in real-world videos is another challenge. In this article, to overcome these challenges, we propose a video emotion recognition method based on multiple feature fusion. First, uniform local binary pattern (LBP) and scale-invariant feature transform features are extracted from each frame in the video sequences. By applying a random forest classifier, all of the static frames are then labelled with the related emotion class. In this way, the key frames can be automatically identified, including neutral and other expressions. Furthermore, from the key frames, a new geometric feature vector and the LBP from three orthogonal planes are extracted. To further improve robustness, audio features are extracted from the video sequences as an additional dimension to augment visual facial expression analysis. The audio and visual features are fused through a kernel multimodal sparse representation. Finally, emotion labels are assigned to the video sequences, with a multimodal quality measure specifying the quality of each modality and its role in the decision. The results on both the Acted Facial Expressions in the Wild and MMI datasets demonstrate that the proposed method outperforms several counterpart video emotion recognition methods.
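The uniform LBP features mentioned in the abstract can be illustrated with a minimal sketch (the standard textbook formulation, not the paper's code): each pixel gets an 8-bit code by thresholding its neighbours against the centre, and a pattern counts as "uniform" when its circular bit string has at most two 0/1 transitions:

```python
def lbp_code(patch):
    """8-neighbour LBP code for the centre pixel of a 3x3 patch."""
    c = patch[1][1]
    # Neighbours taken clockwise starting from the top-left pixel.
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    bits = [1 if v >= c else 0 for v in nbrs]
    return sum(b << i for i, b in enumerate(bits))

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions in the
    circular 8-bit string; histograms usually keep these bins and
    merge the rest."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

A flat patch yields code 255 (all neighbours >= centre), which is uniform; an alternating pattern such as 85 (01010101) has eight transitions and is not.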
Publisher: Wiley
Date: 14-04-2020
DOI: 10.1002/CPE.5751
Publisher: Wiley
Date: 16-08-2019
DOI: 10.1002/CPE.5484
Publisher: Springer International Publishing
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Wiley
Date: 27-09-2021
DOI: 10.1002/CPE.6599
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2019
Publisher: Public Library of Science (PLoS)
Date: 16-08-2023
DOI: 10.1371/JOURNAL.PONE.0290092
Abstract: Automatic detection of subsequence anomalies (i.e., an abnormal waveform denoted by a sequence of data points) in time series is critical in a wide variety of domains. However, most existing methods for subsequence anomaly detection require knowing the length and the total number of anomalies in the time series. Some methods fail to capture recurrent subsequence anomalies because they use only local or neighborhood information for anomaly detection. To address these limitations, in this paper, we propose a novel graph-represented time series (GraphTS) method for discovering subsequence anomalies. In GraphTS, we provide a new concept of a time series graph representation model, which represents both recurrent and rare patterns in a time series. In particular, we develop a new 2D time series visualization (2Dviz) method, which compacts all 1D time series patterns into a 2D spatial-temporal space. The 2Dviz method transfers time series patterns into a higher-resolution plot for easier subsequence anomaly recognition. A graph is then constructed based on the 2D spatial-temporal space of the time series to capture recurrent and rare subsequence patterns effectively. The resulting graph can also be used to discover single and recurrent subsequence anomalies of arbitrary lengths. Experimental results demonstrate that the proposed method outperforms the state-of-the-art methods in terms of accuracy and efficiency.
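A much-simplified stand-in for the GraphTS idea (a hypothetical discretisation, not the paper's 2Dviz construction): discretise the series into value bins, treat bin-to-bin transitions as weighted graph edges, and flag rarely traversed edges as candidate anomalies:

```python
from collections import Counter

def build_pattern_graph(series, n_bins=4):
    """Discretise a 1D series into value bins and count transitions
    between consecutive bins. Recurrent behaviour produces heavy edges,
    rare behaviour light ones -- a toy analogue of representing
    recurrent vs rare patterns in a graph."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat series
    states = [min(int((v - lo) / width), n_bins - 1) for v in series]
    edges = Counter(zip(states, states[1:]))   # edge -> traversal count
    return states, edges

def rare_transitions(edges, threshold=1):
    # Edges traversed at most `threshold` times mark rare subsequence
    # behaviour, i.e. candidate anomalies in this toy model.
    return {e for e, c in edges.items() if c <= threshold}
```

In a series that oscillates in a low band with a single spike, the spike's in- and out-edges are traversed once each and are flagged, while the recurrent low-band edge is not; GraphTS itself works on 2D-embedded patterns rather than single-point bins.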
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer International Publishing
Date: 2022
Publisher: Public Library of Science (PLoS)
Date: 07-12-2022
DOI: 10.1371/JOURNAL.PONE.0278583
Abstract: Gene expression sample data, which usually contains massive expression profiles of genes, is commonly used for disease-related gene analysis. The selection of relevant genes from a huge number of genes is a fundamental process in applications of gene expression data. As more and more genes are detected, gene expression data grows larger and larger, which challenges the computing efficiency of extracting the relevant and important genes. In this paper, we provide a novel Bi-dimensional Principal Feature Selection (BPFS) method for efficiently extracting critical genes from big gene expression data. It applies the principal component analysis (PCA) method on the sample and gene domains successively, aiming to extract the relevant gene features and reduce redundancy while losing less information. The experimental results on four real-world cancer gene expression datasets show that the proposed BPFS method greatly reduces the data size and achieves nearly double the processing speed of the counterpart methods, while maintaining better accuracy and effectiveness.
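The successive application of PCA over the gene and sample domains can be sketched as follows. This is a minimal power-iteration illustration that extracts only the leading component of each domain, not the BPFS algorithm itself; the tiny matrix and the single-component choice are assumptions:

```python
def centre(X):
    """Subtract the column means (centre each feature)."""
    n = len(X)
    means = [sum(col) / n for col in zip(*X)]
    return [[x - m for x, m in zip(row, means)] for row in X]

def top_pc(X, iters=300):
    """Leading principal direction of X (rows = observations) via
    power iteration on the features-by-features covariance matrix."""
    Xc = centre(X)
    d = len(Xc[0])
    cov = [[sum(r[i] * r[j] for r in Xc) for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

# Bi-dimensional reduction: PCA over the gene axis, then over the sample
# axis of the transposed matrix, mirroring the successive application
# described in the abstract (BPFS itself keeps multiple components).
X = [[2.0, 0.1, 1.9], [4.0, 0.2, 4.1], [6.0, 0.1, 5.9]]   # samples x genes
gene_pc = top_pc(X)                                        # gene-domain direction
sample_pc = top_pc([list(r) for r in zip(*X)])             # sample-domain direction
```

In the toy matrix, genes 0 and 2 carry almost all the variance, so the gene-domain component loads heavily on them and nearly ignores the flat gene 1, which is the kind of redundancy reduction the abstract describes.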
Publisher: Hindawi Limited
Date: 02-03-2020
DOI: 10.1155/2020/4365191
Abstract: When multiple Wireless Body Area Networks (WBANs) are aggregated, the overlap of their communication regions results in internetwork interference, which can severely impact the reliability of WBAN performance. Mitigating internetwork interference is therefore a key problem to be solved urgently in practical WBAN applications. However, most current research on internetwork interference focuses on traditional cellular networks and large-scale wireless sensor networks. In this paper, an Optimal Backoff Time Interference Mitigation Algorithm (OBTIM) is proposed. This method performs rescheduling or channel switching when the performance of the WBANs falls below tolerance, utilizing the cell neighbour list established by the beacon method. Simulation results show that, compared with the contention-based beacon schedule scheme, the proposed method improves channel utilization and network throughput while reducing collision probability and energy consumption.
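The reschedule-or-switch decision described in the abstract can be caricatured as follows. The threshold comparison, channel model, and function shape are all assumptions, and OBTIM's optimal backoff-time computation is not reproduced here:

```python
def mitigation_action(throughput, tolerance, neighbour_channels, my_channel, channels):
    """Decide a mitigation step when WBAN performance drops below
    tolerance: prefer switching to a channel that no beacon-discovered
    neighbour occupies; otherwise fall back to rescheduling (backoff).

    Illustrative decision logic only -- a stand-in for the paper's
    OBTIM, whose backoff interval is computed, not fixed.
    """
    if throughput >= tolerance:
        return ("stay", my_channel)
    # The neighbour list comes from beacons overheard in the overlap region.
    free = [c for c in channels if c not in neighbour_channels]
    if free:
        return ("switch", free[0])
    return ("reschedule", my_channel)
```

For example, a WBAN meeting its tolerance stays put; one below tolerance switches if any channel is free of neighbours, and reschedules its transmissions otherwise.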
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Start Date: 2014
End Date: 2016
Funder: Australian Research Council
Start Date: 2019
End Date: 2021
Funder: Australian Research Council
Start Date: 2014
End Date: 2017
Funder: Australian Research Council
Start Date: 2014
End Date: 08-2018
Amount: $329,027.00
Funder: Australian Research Council
Start Date: 2019
End Date: 12-2023
Amount: $351,725.00
Funder: Australian Research Council
Start Date: 05-2014
End Date: 09-2019
Amount: $349,179.00
Funder: Australian Research Council