ORCID Profile
0000-0003-1557-4907
Current Organisation
Murdoch University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Computer Vision | Artificial Intelligence and Image Processing | Pattern Recognition and Data Mining | Image Processing
Application Tools and System Utilities | Application Software Packages (excl. Computer Games) | Electronic Information Storage and Retrieval Services
Publisher: Springer International Publishing
Date: 2016
Publisher: IEEE
Date: 10-2006
Publisher: Institution of Engineering and Technology
Date: 30-09-2017
DOI: 10.1049/PBSE003E_CH4
Publisher: IEEE
Date: 09-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2018
Publisher: Elsevier BV
Date: 11-2016
Publisher: IEEE
Date: 12-2019
Publisher: Elsevier BV
Date: 09-2022
Publisher: Elsevier BV
Date: 04-2021
Publisher: Springer International Publishing
Date: 2016
Publisher: Elsevier BV
Date: 07-2022
Publisher: IEEE
Date: 09-2015
Publisher: Springer Science and Business Media LLC
Date: 16-04-2015
Publisher: IEEE
Date: 06-2013
Publisher: IEEE
Date: 03-2016
Publisher: Elsevier BV
Date: 09-2021
Publisher: IEEE
Date: 2015
Publisher: IEEE
Date: 09-2014
Publisher: Elsevier BV
Date: 10-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2014
Publisher: IEEE
Date: 06-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2016
Publisher: IEEE
Date: 11-2018
Publisher: IEEE
Date: 07-2017
Publisher: IEEE
Date: 03-2017
Publisher: University of Technology, Sydney
Date: 2018
DOI: 10.5130/ACIS2018.CC
Publisher: Elsevier BV
Date: 10-2020
Publisher: Elsevier BV
Date: 2015
Publisher: Springer International Publishing
Date: 2019
Publisher: IEEE
Date: 2013
Publisher: MDPI AG
Date: 13-01-2020
DOI: 10.3390/S20020447
Abstract: Across the globe, remote image data is rapidly being collected for the assessment of benthic communities from shallow to extremely deep waters on continental slopes to the abyssal seas. Exploiting this data is presently limited by the time it takes for experts to identify organisms found in these images. With this limitation in mind, a large effort has been made globally to introduce automation and machine learning algorithms to accelerate both classification and assessment of marine benthic biota. One major issue lies with organisms that move with swell and currents, such as kelps. This paper presents an automatic hierarchical classification method using local binary classification, as opposed to the conventional flat classification, to classify kelps in images collected by autonomous underwater vehicles. The proposed kelp classification approach exploits learned feature representations extracted from deep residual networks. We show that these generic features outperform the traditional off-the-shelf CNN features and the conventional hand-crafted features. Experiments also demonstrate that the hierarchical classification method outperforms the traditional parallel multi-class classification by a significant margin (90.0% vs. 57.6% and 77.2% vs. 59.0%) on the Benthoz15 and Rottnest datasets, respectively. Furthermore, we compare different hierarchical classification approaches and experimentally show that the sibling hierarchical training approach outperforms the inclusive hierarchical approach by a significant margin. We also report an application of our proposed method to study the change in kelp cover over time for annually repeated AUV surveys.
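The hierarchical (local binary) strategy that the abstract contrasts with flat multi-class classification can be sketched as a tree of binary decisions. This is a minimal illustrative sketch: the classifiers, labels, and feature dictionary below are invented stand-ins, not the paper's trained residual-network models.

```python
# Hedged sketch: hierarchical classification via local binary decisions.
# Each internal node holds a binary classifier that routes a sample left
# or right; a leaf holds the final class label. In the paper, the node
# classifiers operate on deep residual-network features.

def hierarchical_classify(x, node):
    """Route sample x down a binary tree of classifiers to a leaf label."""
    while isinstance(node, tuple):          # internal node: (classifier, left, right)
        clf, left, right = node
        node = left if clf(x) else right    # one local binary decision per level
    return node                             # leaf = final class label

# Toy tree: first decide biota vs substrate, then kelp vs other biota.
tree = (lambda x: x["is_biota"],
        (lambda x: x["looks_like_kelp"], "kelp", "other_biota"),
        "substrate")

print(hierarchical_classify({"is_biota": True, "looks_like_kelp": True}, tree))  # kelp
```

The flat alternative would instead run a single multi-class classifier over all leaf labels at once; the tree form lets each binary classifier specialise on one local distinction, which is the effect the abstract's numbers attribute to the hierarchical approach.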
Publisher: Oxford University Press (OUP)
Date: 22-11-2019
Abstract: Underwater imaging is being extensively used for monitoring the abundance of lobster species and their biodiversity in their local habitats. However, manual assessment of these images requires a huge amount of human effort. In this article, we propose to automate the process of lobster detection using a deep learning technique. A major obstacle in deploying such an automatic framework for the localization of lobsters in diverse environments is the lack of large annotated training datasets. Generating synthetic datasets to train these object detection models has become a popular approach. However, the current synthetic data generation frameworks rely on automatic segmentation of objects of interest, which becomes difficult when the objects have a complex shape, such as lobster. To overcome this limitation, we propose an approach to synthetically generate parts of the lobster. To handle the variability of real-world images, these parts were inserted into a set of diverse background marine images to generate a large synthetic dataset. A state-of-the-art object detector was trained using this synthetic parts dataset and tested on the challenging task of Western rock lobster detection in West Australian seas. To the best of our knowledge, this is the first automatic lobster detection technique for partially visible and occluded lobsters.
Publisher: Elsevier BV
Date: 02-2022
Publisher: Elsevier
Date: 2017
Publisher: MDPI AG
Date: 24-09-2020
DOI: 10.3390/RS12193137
Abstract: In this paper, we propose a high performance Two-Stream spectral-spatial Residual Network (TSRN) for hyperspectral image classification. The first spectral residual network (sRN) stream is used to extract spectral characteristics, and the second spatial residual network (saRN) stream is concurrently used to extract spatial features. The sRN uses 1D convolutional layers to fit the spectral data structure, while the saRN uses 2D convolutional layers to match the hyperspectral spatial data structure. Furthermore, each convolutional layer is preceded by a Batch Normalization (BN) layer that works as a regularizer to speed up the training process and to improve the accuracy. We conducted experiments on three well-known hyperspectral datasets, and compared our results with five contemporary methods across various sizes of training samples. The experimental results show that the proposed architecture can be trained with small size datasets and outperforms the state-of-the-art methods in terms of the Overall Accuracy, Average Accuracy, Kappa Value, and training time.
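The two-stream idea can be illustrated with plain NumPy: a 1D convolution runs along the band axis of one pixel's spectrum, a 2D convolution runs over its spatial patch, and the two feature vectors are concatenated. All shapes and kernel values below are toy assumptions, not the paper's architecture.

```python
import numpy as np

# Hedged sketch of a spectral stream (1D conv over bands) and a spatial
# stream (2D conv over a patch), fused by concatenation. Real layers would
# be learned, stacked residually, and preceded by batch normalization.

def spectral_stream(pixel_spectrum, k):          # 1D 'valid' convolution
    n = len(pixel_spectrum) - len(k) + 1
    return np.array([pixel_spectrum[i:i + len(k)] @ k for i in range(n)])

def spatial_stream(patch, k):                    # 2D 'valid' convolution
    ph, pw = patch.shape; kh, kw = k.shape
    return np.array([[np.sum(patch[i:i + kh, j:j + kw] * k)
                      for j in range(pw - kw + 1)]
                     for i in range(ph - kh + 1)])

spectrum = np.arange(8.0)                        # 8 bands for one pixel
patch = np.ones((5, 5))                          # 5x5 spatial neighbourhood
f_spec = spectral_stream(spectrum, np.array([1.0, -1.0]))   # 7 features
f_spat = spatial_stream(patch, np.ones((3, 3))).ravel()     # 9 features
fused = np.concatenate([f_spec, f_spat])         # joint spectral-spatial vector
print(fused.shape)                               # (16,)
```

The design point the abstract makes is that the 1D kernels match the spectral data layout while the 2D kernels match the spatial layout, so neither stream has to flatten away the structure the other exploits.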
Publisher: Springer International Publishing
Date: 2019
Publisher: World Scientific Pub Co Pte Lt
Date: 18-07-2021
DOI: 10.1142/S021812662150016X
Abstract: Geometric analysis of three-dimensional (3D) surfaces with local deformations is a challenging task, required by mobile devices. In this paper, we propose a new local feature-based method derived from diffusion geometry, including a keypoint detector named persistence-based Heat Kernel Signature (pHKS), and a feature descriptor named Heat Propagation Strips (HeaPS). The pHKS detector first constructs a scalar field using the heat kernel signature function. The scalar field is generated at a small scale to capture fine geometric information of the local surface. Persistent homology is then computed to extract all the local maxima from the scalar field, and to provide a measure of persistence. Points with a high persistence are selected as pHKS keypoints. In order to describe a keypoint, an intrinsic support region is generated by the diffusion area. This support region is more robust than its geodesic distance counterpart, and provides a local surface with adaptive scale for subsequent feature description. The HeaPS descriptor is then developed by encoding the information contained in both the spatial and temporal domains of the heat kernel. We conducted several experiments to evaluate the effectiveness of the proposed method. On the TOSCA Dataset, the HeaPS descriptor achieved a high performance in terms of descriptiveness. The feature detector and descriptor were then tested on the SHREC 2010 Feature Detection and Description Dataset, and produced results that were better than the state-of-the-art methods. Finally, their application to shape retrieval was evaluated. The proposed pHKS detector and HeaPS descriptor achieved a notable improvement on the SHREC 2014 Human Dataset.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 2022
DOI: 10.1016/J.OPTOM.2022.11.001
Abstract: Retinal and optic disc images are used to assess changes in the retinal vasculature. These can be changes associated with diseases such as diabetic retinopathy and glaucoma or induced using ophthalmodynamometry to measure arterial and venous pressure. Key steps toward automating the assessment of these changes are the segmentation and classification of the veins and arteries. However, such segmentation and classification are still required to be manually labelled by experts. Such automated labelling is challenging because of the complex morphology, anatomical variations, alterations due to disease and scarcity of labelled data for algorithm development. We present a deep machine learning solution called the multiscale guided attention network for retinal artery and vein segmentation and classification (MSGANet-RAV). MSGANet-RAV was developed and tested on 383 colour clinical optic disc images from LEI-CENTRAL, constructed in-house and 40 colour fundus images from the AV-DRIVE public dataset. The datasets have a mean optic disc occupancy per image of 60.6% and 2.18%, respectively. MSGANet-RAV is a U-shaped encoder-decoder network, where the encoder extracts multiscale features, and the decoder includes a sequence of self-attention modules. The self-attention modules explore, guide and incorporate vessel-specific structural and contextual feature information to segment and classify central optic disc and retinal vessel pixels. MSGANet-RAV achieved a pixel classification accuracy of 93.15%, sensitivity of 92.19%, and specificity of 94.13% on LEI-CENTRAL, outperforming several reference models. It similarly performed highly on AV-DRIVE with an accuracy, sensitivity and specificity of 95.48%, 93.59% and 97.27%, respectively. The results show the efficacy of MSGANet-RAV for identifying central optic disc and retinal arteries and veins. The method can be used in automated systems designed to assess vascular changes in retinal and optic disc images quantitatively.
Publisher: IEEE
Date: 07-2019
Publisher: IEEE
Date: 06-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 06-2017
Publisher: IEEE
Date: 09-2016
Publisher: Elsevier BV
Date: 11-2021
Publisher: Elsevier BV
Date: 02-2022
Publisher: IEEE
Date: 12-2015
Publisher: IEEE
Date: 06-2011
Publisher: Elsevier BV
Date: 06-2017
Publisher: IEEE
Date: 2006
Publisher: IEEE
Date: 04-2018
Publisher: IEEE
Date: 06-2015
Publisher: Elsevier BV
Date: 05-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: ACM
Date: 23-02-2019
Publisher: IEEE
Date: 04-2017
Publisher: Springer International Publishing
Date: 2014
Publisher: Elsevier BV
Date: 04-2023
Publisher: Springer International Publishing
Date: 2014
Publisher: Springer Science and Business Media LLC
Date: 04-09-2016
Publisher: Elsevier BV
Date: 10-2019
Publisher: Wiley
Date: 22-04-2020
DOI: 10.1002/CAE.22243
Publisher: Elsevier BV
Date: 2020
Publisher: Elsevier BV
Date: 09-2018
DOI: 10.1016/J.NEUNET.2018.06.005
Abstract: By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well known majorization-minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns from the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
Publisher: Springer International Publishing
Date: 2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2015
Publisher: Elsevier BV
Date: 09-2018
Publisher: Public Library of Science (PLoS)
Date: 08-06-2023
DOI: 10.1371/JOURNAL.PONE.0286460
Abstract: Hajj, the Muslim pilgrimage, is a large mass gathering event that involves performing rituals at several sites on specific days and times in a fixed order, thereby requiring transport of pilgrims between sites. For the past two decades, Hajj transport has relied on conventional and shuttle buses, train services, and pilgrims walking along pedestrian routes that link these sites. To ensure smooth and efficient transport during Hajj, specific groups of pilgrims are allocated with the cooperation of Hajj authorities to specific time windows, modes, and routes. However, the large number of pilgrims, delays and changes in bus schedules/timetables, and occasional lack of coordination between transport modes have often caused congestion or delays in pilgrim transfer between sites, with a cascading effect on transport management. This study focuses on modelling and simulating the transport of pilgrims between the sites using a discrete event simulation tool called “ExtendSim”. Three transport modules were validated, and different scenarios were developed. These scenarios consider changes in the percentages of pilgrims allocated to each transport mode and the scheduling of various modes. The results can aid authorities to make informed decisions regarding transport strategies for managing the transport infrastructure and fleets. The proposed solutions could be implemented with judicious allocation of resources, through pre-event planning and real-time monitoring during the event.
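The study uses the commercial tool ExtendSim; as a rough, hedged illustration of the discrete event simulation style it implements, here is a minimal event-queue model of a bus shuttling pilgrim groups between two sites. The function name, capacities, and times are all invented for illustration.

```python
import heapq

# Toy discrete event simulation (not the paper's ExtendSim modules):
# pilgrim groups arrive as timed events; a single bus with fixed capacity
# makes repeated round trips, loading whoever is waiting.

def simulate(groups, bus_capacity, cycle_time):
    """Return the time at which the last pilgrim group has been delivered."""
    events = list(groups)                        # (arrival_time, group_size)
    heapq.heapify(events)
    bus_free_at = 0.0
    waiting = 0
    while events or waiting:
        # admit every group that has arrived by the time the bus is free
        while events and (waiting == 0 or events[0][0] <= bus_free_at):
            t, n = heapq.heappop(events)
            bus_free_at = max(bus_free_at, t)    # bus may idle until the group arrives
            waiting += n
        load = min(waiting, bus_capacity)        # fill up to capacity
        waiting -= load
        bus_free_at += cycle_time                # one round trip between sites
    return bus_free_at

# 150 pilgrims, 50-seat bus, 5-minute cycle: three trips finish at t = 15.
print(simulate([(0, 100), (10, 50)], 50, 5))
```

Scenario analysis of the kind described in the abstract then amounts to re-running such a model while varying the mode shares, capacities, and schedules.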
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2018
Publisher: Springer International Publishing
Date: 2020
Publisher: IEEE
Date: 05-2019
Publisher: Elsevier BV
Date: 06-2022
Publisher: Elsevier BV
Date: 11
Publisher: Elsevier BV
Date: 02-2008
Publisher: IEEE
Date: 2013
Publisher: IEEE
Date: 02-2013
Publisher: Springer International Publishing
Date: 2019
Publisher: IEEE
Date: 09-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer International Publishing
Date: 15-03-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2007
Publisher: CSIRO Publishing
Date: 11-04-2022
DOI: 10.1071/CP21626
Abstract: Context Most weed species can adversely impact agricultural productivity by competing for nutrients required by high-value crops. Manual weeding is not practical for large cropping areas. Many studies have been undertaken to develop automatic weed management systems for agricultural crops. In this process, one of the major tasks is to recognise the weeds from images. However, weed recognition is a challenging task. It is because weed and crop plants can be similar in colour, texture and shape which can be exacerbated further by the imaging conditions, geographic or weather conditions when the images are recorded. Advanced machine learning techniques can be used to recognise weeds from imagery. Aims In this paper, we have investigated five state-of-the-art deep neural networks, namely VGG16, ResNet-50, Inception-V3, Inception-ResNet-v2 and MobileNetV2, and evaluated their performance for weed recognition. Methods We have used several experimental settings and multiple dataset combinations. In particular, we constructed a large weed-crop dataset by combining several smaller datasets, mitigating class imbalance by data augmentation, and using this dataset in benchmarking the deep neural networks. We investigated the use of transfer learning techniques by preserving the pre-trained weights for extracting the features and fine-tuning them using the images of crop and weed datasets. Key results We found that VGG16 performed better than others on small-scale datasets, while ResNet-50 performed better than other deep networks on the large combined dataset. Conclusions This research shows that data augmentation and fine tuning techniques improve the performance of deep learning models for classifying crop and weed images. Implications This research evaluates the performance of several deep learning models and offers directions for using the most appropriate models as well as highlights the need for a large scale benchmark weed dataset.
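The transfer-learning recipe in the abstract (keep pre-trained weights for feature extraction, train only on the new crop/weed labels) can be sketched without a deep learning framework. Below, a fixed random projection stands in for a frozen pre-trained backbone, and only a small logistic-regression head is trained; every name and number is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of "freeze the backbone, train the head". The frozen
# "backbone" here is a random ReLU projection standing in for VGG16/ResNet
# features; the trainable part is a logistic-regression head.

rng = np.random.default_rng(0)
W_backbone = rng.normal(size=(4, 8))            # frozen pre-trained weights (stand-in)
extract = lambda x: np.maximum(x @ W_backbone, 0.0)   # fixed feature extractor

# Toy "crop vs weed" data: label depends on the first input dimension.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)
F = extract(X)                                  # features computed once; backbone never updates

w = np.zeros(8)                                 # the only trainable parameters
for _ in range(200):                            # plain logistic-regression gradient steps
    p = 1.0 / (1.0 + np.exp(-(F @ w)))
    w -= 0.1 * F.T @ (p - y) / len(y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w)))) > 0.5) == y)
```

Fine-tuning, the other technique the paper evaluates, would additionally let `W_backbone` receive gradient updates at a small learning rate instead of staying frozen.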
Publisher: Springer International Publishing
Date: 2019
Publisher: Springer Berlin Heidelberg
Date: 2005
DOI: 10.1007/11590316_61
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institution of Engineering and Technology (IET)
Date: 10-2017
Publisher: Springer International Publishing
Date: 2019
Publisher: IEEE
Date: 02-2013
Publisher: IEEE
Date: 05-2016
Publisher: IEEE
Date: 07-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 10-2008
Publisher: IEEE
Date: 07-2019
Publisher: Public Library of Science (PLoS)
Date: 26-06-2019
Publisher: Journal of Artificial Societies and Social Simulation
Date: 2019
DOI: 10.18564/JASSS.3997
Publisher: MDPI AG
Date: 23-12-2022
DOI: 10.3390/EN16010169
Abstract: The heterogeneous network (HetNet) is a specified cellular platform to tackle the rapidly growing anticipated data traffic. From a communications perspective, data loads can be mapped to energy loads that are generally placed on the operator networks. Meanwhile, renewable energy-aided networks help curtail fossil fuel consumption and thus reduce environmental pollution. This paper proposes a renewable energy based power supply architecture for the off-grid HetNet using a novel energy sharing model. Solar photovoltaics (PV) along with sufficient energy storage devices are used for each macro, micro, pico, or femto base station (BS). Additionally, a biomass generator (BG) is used for macro and micro BSs. The collocated macro and micro BSs are connected through end-to-end resistive lines. A novel weighted proportional-fair resource-scheduling algorithm with sleep mechanisms is proposed for non-real time (NRT) applications by trading off the power consumption and communication delays. Furthermore, the proposed algorithm with an extended discontinuous reception (eDRX) and power saving mode (PSM) for narrowband internet of things (IoT) applications extends the battery lifetime for IoT devices. HOMER optimization software is used to perform optimal system architecture, economic, and carbon footprint analyses, while the Monte-Carlo simulation tool is used for evaluating the throughput and energy efficiency performances. The proposed algorithms are validated through practical data from rural areas of Bangladesh, from which it is evident that the proposed power supply architecture is energy-efficient, cost-effective, reliable, and eco-friendly.
Publisher: IEEE
Date: 07-2017
Publisher: Association for Computing Machinery (ACM)
Date: 10-2011
Abstract: Geometric distortion measurement and the associated metrics involved are integral to the Rate Distortion (RD) shape coding framework, with importantly the efficacy of the metrics being strongly influenced by the underlying measurement strategy. This has been the catalyst for many different techniques, with this article presenting a comprehensive review of geometric distortion measurement, the diverse metrics applied, and their impact on shape coding. The respective performance of these measuring strategies is analyzed from both a RD and complexity perspective, with a recent distortion measurement technique based on arc-length-parameterization being comparatively evaluated. Some contemporary research challenges are also investigated, including schemes to effectively quantify shape deformation.
Publisher: Springer Science and Business Media LLC
Date: 21-03-2019
Publisher: Security Research Institute, Edith Cowan University
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 11-02-2022
DOI: 10.1007/S00521-022-06958-3
Abstract: Missing data is a major problem in real-world datasets, which hinders the performance of data analytics. Conventional data imputation schemes such as univariate single imputation replace missing values in each column with the same approximated value. These univariate single imputation techniques underestimate the variance of the imputed values. On the other hand, multivariate imputation explores the relationships between different columns of data, to impute the missing values. Reinforcement Learning (RL) is a machine learning paradigm where the agent learns by taking actions and receiving rewards in response, to achieve its goal. In this work, we propose an RL-based approach to impute missing data by learning a policy to impute data through an action-reward-based experience. Our approach imputes missing values in a column by working only on the same column (similar to univariate single imputation) but imputes the missing values in the column with different values thus keeping the variance in the imputed values. We report superior performance of our approach, compared with other imputation techniques, on a number of datasets.
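The action-reward loop the abstract describes can be illustrated as a one-state bandit: each candidate fill value for a column is an action, and the reward measures how close the action lands to observed values in that column. This is a toy stand-in for the paper's RL formulation, and all values below are invented; note the paper's learned policy imputes varied (non-constant) values, whereas this sketch only learns which candidate fits the column best.

```python
import random

# Hedged bandit sketch of reward-driven imputation (not the paper's algorithm).

random.seed(0)
observed = [2.0, 2.5, 3.0, 3.5]          # known values in the column
actions = [1.0, 2.0, 3.0, 4.0]           # candidate imputation values
q = {a: 0.0 for a in actions}            # action-value estimates

for _ in range(500):
    # epsilon-greedy: explore 20% of the time, otherwise exploit the best estimate
    a = random.choice(actions) if random.random() < 0.2 else max(q, key=q.get)
    target = random.choice(observed)     # reward against a sampled observation
    reward = -abs(a - target)            # closer to the data -> higher reward
    q[a] += 0.1 * (reward - q[a])        # incremental update toward the reward

best = max(q, key=q.get)                 # typically 3.0, which minimises expected distance
```

Scaling this idea up, as the abstract outlines, means conditioning the policy on the state of the column so that different missing cells receive different imputed values, preserving variance.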
Publisher: ACM
Date: 04-02-2020
Publisher: Springer International Publishing
Date: 2019
Publisher: CSIRO Publishing
Date: 07-06-2022
DOI: 10.1071/CP21710
Abstract: Context Insects are a major threat to crop production. They can infect, damage, and reduce agricultural yields. Accurate and fast detection of insects will help insect control. From a computer algorithm point of view, insect detection from imagery is a tiny object detection problem. Handling detection of tiny objects in large datasets is challenging due to small resolution of the insects in an image, and other nuisances such as occlusion, noise, and lack of features. Aims Our aim was to achieve a high-performance agricultural insect detector using an enhanced artificial intelligence machine learning technique. Methods We used a YOLOv3 network-based framework, which is a high performing and computationally fast object detector. We further improved the original feature pyramidal network of YOLOv3 by integrating an adaptive feature fusion module. For training the network, we first applied data augmentation techniques to regularise the dataset. Then, we trained the network using the adaptive features and optimised the hyper-parameters. Finally, we tested the proposed network on a subset dataset of the multi-class insect pest dataset Pest24, which contains 25 878 images. Key results We achieved an accuracy of 72.10%, which is superior to existing techniques, while achieving a fast detection rate of 63.8 images per second. Conclusions We compared the results with several object detection models regarding detection accuracy and processing speed. The proposed method achieved superior performance both in terms of accuracy and computational speed. Implications The proposed method demonstrates that machine learning networks can provide a foundation for developing real-time systems that can help better pest control to reduce crop damage.
Publisher: IEEE
Date: 07-2017
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 06-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: IGI Global
Date: 2007
DOI: 10.4018/978-1-59140-766-9.CH007
Abstract: With the significant influence and increasing requirements of visual mobile communications in our everyday lives, low bit-rate video coding to handle the stringent bandwidth limitations of mobile networks has become a major research topic. With both processing power and battery resources being inherently constrained, and signals having to be transmitted over error-prone mobile channels, this has mandated the design requirement for coders to be both low complexity and robust error resilient. To support multilevel users, any encoded bit-stream should also be both scalable and embedded. This chapter presents a review of appropriate image and video coding techniques for mobile communication applications and aims to provide an appreciation of the rich and far-reaching advancements taking place in this exciting field, while concomitantly outlining both the physical significance of popular quality image and video coding metrics and some of the research challenges that remain to be resolved.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 03-2018
Publisher: Elsevier BV
Date: 11-2022
DOI: 10.1016/J.COMPBIOMED.2022.106126
Abstract: Appropriate anticoagulant therapy for patients with atrial fibrillation (AF) requires assessment of stroke and bleeding risks. However, risk stratification schemas such as CHA₂DS₂-VASc have limited predictive ability. This was a retrospective cohort study of 9670 patients, mean age 76.9 years, 46% women, who were hospitalized with non-valvular AF, and had 1-year follow-up. The outcomes were ischemic stroke (167) and major bleeding (430) admissions, all-cause death (1912) and event-free survival (7387). Discrimination and calibration of ML models were compared with clinical risk scores by area under the curve (AUC). Risk stratification was assessed using the net reclassification index (NRI). Multilabel gradient boosting classifier chain provided the best AUCs for stroke (0.685, 95% CI 0.676, 0.694), major bleeding (0.709, 95% CI 0.703, 0.716) and death (0.765, 95% CI 0.763, 0.768) compared to multi-layer neural networks and classifier chain using support vector machine. It provided modest performance improvement for stroke compared to the AUC of CHA₂DS₂-VASc. Multilabel ML models can outperform clinical risk stratification scores for predicting the risk of major bleeding and death in non-valvular AF patients.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2016
Publisher: Association for Computing Machinery (ACM)
Date: 04-02-2019
DOI: 10.1145/3295748
Abstract: Generating a description of an image is called image captioning. Image captioning requires recognizing the important objects, their attributes, and their relationships in an image. It also needs to generate syntactically and semantically correct sentences. Deep-learning-based techniques are capable of handling the complexities and challenges of image captioning. In this survey article, we aim to present a comprehensive review of existing deep-learning-based image captioning techniques. We discuss the foundation of the techniques to analyze their performances, strengths, and limitations. We also discuss the datasets and the evaluation metrics popularly used in deep-learning-based automatic image captioning.
Publisher: Springer Science and Business Media LLC
Date: 24-04-2013
Publisher: Elsevier BV
Date: 2006
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2023
Publisher: IEEE
Date: 2007
DOI: 10.1109/ICIS.2007.69
Publisher: Elsevier BV
Date: 11-2021
Publisher: Institution of Engineering and Technology (IET)
Date: 2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2012
Publisher: Elsevier BV
Date: 09-2023
Publisher: Elsevier BV
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 07-2017
Publisher: IEEE
Date: 07-2018
Publisher: Elsevier BV
Date: 02-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2014
Publisher: Springer International Publishing
Date: 2019
Publisher: Wiley
Date: 27-02-2019
DOI: 10.1002/EHF2.12419
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Elsevier BV
Date: 12-2021
Publisher: Elsevier
Date: 2018
Publisher: MDPI AG
Date: 30-08-2022
DOI: 10.3390/RS14174288
Abstract: Biotic and abiotic plant stress (e.g., frost, fungi, diseases) can significantly impact crop production. It is thus essential to detect such stress at an early stage before visual symptoms and damage become apparent. To this end, this paper proposes a novel deep learning method, called Spectral Convolution and Channel Attention Network (SC-CAN), which exploits the difference in spectral responses of healthy and stressed crops. The proposed SC-CAN method comprises two main modules: (i) a spectral convolution module, which consists of dilated causal convolutional layers stacked in a residual manner to capture the spectral features; and (ii) a channel attention module, which consists of a global pooling layer and fully connected layers that compute the inter-relationships between feature map channels before scaling them based on their importance level (attention score). Unlike standard convolution, which focuses on learning local features, the dilated convolution layers can learn both local and global features. These layers also have long receptive fields, making them suitable for capturing long dependency patterns in hyperspectral data. However, because not all feature maps produced by the dilated convolutional layers are important, we propose a channel attention module that weights the feature maps according to their importance level. We used SC-CAN to classify salt stress (i.e., abiotic stress) on four datasets (Chinese Spring (CS), Aegilops columnaris (co(CS)), Ae. speltoides auchery (sp(CS)), and Kharchia datasets) and Fusarium head blight disease (i.e., biotic stress) on the Fusarium dataset. Reported experimental results show that the proposed method outperforms existing state-of-the-art techniques with an overall accuracy of 83.08%, 88.90%, 82.44%, 82.10%, and 82.78% on the CS, co(CS), sp(CS), Kharchia, and Fusarium datasets, respectively.
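The dilated causal convolution that the abstract names as SC-CAN's building block is easy to show concretely: each output looks only backwards along the band axis, and stacking layers with doubled dilation doubles the receptive field. The kernel and input below are toy values for illustration only.

```python
import numpy as np

# Hedged sketch of a dilated causal 1D convolution over spectral bands.
# y[t] = sum_i k[i] * x[t - i*dilation], with implicit zero padding on the left,
# so no output position sees "future" bands.

def dilated_causal_conv(x, k, dilation):
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i, w in enumerate(k):
            j = t - i * dilation
            if j >= 0:
                y[t] += w * x[j]
    return y

x = np.arange(8.0)                      # 8 spectral bands
k = np.array([1.0, 1.0])                # kernel size 2
y1 = dilated_causal_conv(x, k, 1)       # receptive field of 2 bands
y2 = dilated_causal_conv(y1, k, 2)      # stacked with dilation 2 -> field of 4 bands
print(y2[-1])                           # 22.0 = x[7] + x[6] + x[5] + x[4]
```

This exponential growth of the receptive field with depth is what lets the spectral convolution module capture the long dependency patterns across bands that the abstract refers to; the channel attention module would then rescale the resulting feature maps by learned importance scores.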
Publisher: IEEE
Date: 04-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2007
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Publisher: IEEE
Date: 09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2018
Publisher: Springer International Publishing
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 12-2019
Publisher: IEEE
Date: 09-2016
Publisher: Springer International Publishing
Date: 2019
Publisher: IEEE
Date: 10-2021
Publisher: IEEE
Date: 11-2010
Publisher: IEEE
Date: 23-05-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2021
Publisher: Springer Science and Business Media LLC
Date: 15-09-2021
DOI: 10.1038/S41598-021-97643-3
Abstract: Our aim was to investigate the usefulness of machine learning approaches on linked administrative health data at the population level in predicting older patients’ one-year risk of acute coronary syndrome and death following the use of non-steroidal anti-inflammatory drugs (NSAIDs). Patients from a Western Australian cardiovascular population who were supplied with NSAIDs between 1 Jan 2003 and 31 Dec 2004 were identified from Pharmaceutical Benefits Scheme data. Comorbidities from linked hospital admissions data and medication history were inputs. Admissions for acute coronary syndrome or death within one year from the first supply date were outputs. Machine learning classification methods were used to build models to predict ACS and death. Model performance was measured by the area under the receiver operating characteristic curve (AUC-ROC), sensitivity and specificity. There were 68,889 patients in the NSAIDs cohort with mean age 76 years and 54% were female. 1882 patients were admitted for acute coronary syndrome and 5405 patients died within one year after their first supply of NSAIDs. The multi-layer neural network, gradient boosting machine and support vector machine were applied to build various classification models. The gradient boosting machine achieved the best performance with an average AUC-ROC of 0.72 predicting ACS and 0.84 predicting death. Machine learning models applied to linked administrative data can potentially improve adverse outcome risk prediction. Further investigation of additional data and approaches are required to improve the performance for adverse outcome risk prediction.
Publisher: IEEE
Date: 06-2013
Publisher: IEEE
Date: 12-2008
Publisher: IEEE
Date: 12-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2021
Publisher: Elsevier BV
Date: 10-2008
Publisher: Springer International Publishing
Date: 2016
Publisher: Elsevier BV
Date: 11-2017
Publisher: IEEE
Date: 09-2015
Publisher: Springer International Publishing
Date: 2019
Publisher: ISCA
Date: 20-08-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2019
Publisher: Springer Science and Business Media LLC
Date: 03-07-2015
Publisher: IEEE
Date: 12-2018
Publisher: IEEE
Date: 11-2005
Publisher: Springer International Publishing
Date: 2017
Publisher: Elsevier BV
Date: 05-2023
Publisher: IEEE
Date: 18-07-2021
Publisher: Elsevier BV
Date: 10-2021
Publisher: Institution of Engineering and Technology (IET)
Date: 15-06-2023
DOI: 10.1049/WSS2.12063
Abstract: Low-power localisation systems are crucial for machine-to-machine communication technologies. This article investigates LoRa technology for localisation using multiple features of the received signal, such as Received Signal Strength Indicator (RSSI), Spreading Factor (SF), and Signal to Noise Ratio (SNR). A novel range-based technique is proposed to estimate the distance of a target node from a LoRa gateway using machine-learning models trained on SF, SNR, and RSSI. A modified trilateration approach is then used to localise the target node from three gateways. Our experiment used three LoRaWAN gateways and two sensor nodes on a sports oval with an approximate coverage area of 30,000 square metres. The authors also used a public LoRaWAN dataset to build a model, test the proposed method, and compare both range-based distance mapping with trilateration and fingerprint-based direct location estimation techniques. Our method achieved an average distance error of 43.97 m on our experimental dataset. The results show that the combination of RSSI-, SNR-, and SF-based distance mapping provides an ~10% improvement in ranging accuracy and 26.58% higher accuracy for trilateration-based localisation compared with using RSSI alone. Our method also achieved 50% better localisation accuracy compared with fingerprint-based direct location estimation approaches.
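The trilateration step described above can be sketched as a small least-squares problem. The gateway positions and target below are made-up illustrative values, not the paper's experimental setup, and the exact distances stand in for the ML-estimated ranges derived from RSSI, SNR, and SF.

```python
# Sketch: locating a node from three gateway distance estimates.
# Subtracting the first circle equation |p - g_0|^2 = d_0^2 from the
# others cancels the quadratic terms, leaving a linear system A p = b.
import numpy as np

def trilaterate(gateways, distances):
    """Estimate (x, y) from gateway positions and ranges via least squares."""
    g = np.asarray(gateways, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (g[1:] - g[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(g[1:] ** 2 - g[0] ** 2, axis=1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Three hypothetical gateways; target placed at (60, 40).
gws = [(0, 0), (150, 0), (0, 200)]
target = np.array([60.0, 40.0])
dists = [np.linalg.norm(target - np.array(g)) for g in gws]
print(trilaterate(gws, dists))  # ≈ [60. 40.]
```

With noisy, ML-estimated ranges the system no longer intersects exactly, which is why a least-squares (or otherwise modified) trilateration is used rather than solving circle intersections directly.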
Publisher: IEEE
Date: 06-2013
Publisher: IEEE
Date: 2005
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 05-2015
Publisher: MDPI AG
Date: 04-08-2022
DOI: 10.3390/AGRICULTURE12081160
Abstract: Eggplant is a popular vegetable crop, and eggplant yields can be affected by various diseases. Automatic detection and recognition of diseases is an important step toward improving crop yields. In this paper, we used a two-stream deep fusion architecture, employing CNN-SVM and CNN-Softmax pipelines, along with an inference model to infer the disease classes. A dataset of 2284 images covering nine eggplant diseases was sourced from a primary source (a consumer RGB camera) and secondary sources (the internet). Experimental results show that the proposed method achieved better accuracy and fewer false positives compared with other deep learning methods (such as VGG16, Inception V3, VGG19, MobileNet, NasNetMobile, and ResNet50).
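The two-stream fusion idea above can be illustrated at the score level. This is a loose analogy only: synthetic feature vectors replace real CNN features, an SVM stands in for the CNN-SVM stream, a logistic-regression head stands in for the CNN-Softmax stream, and probability averaging stands in for the paper's inference model.

```python
# Sketch: score-level fusion of two classification streams.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for CNN feature vectors over 3 disease classes.
X, y = make_classification(n_samples=900, n_features=64, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)   # "SVM" stream
softmax = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # "softmax" stream

# Simple inference rule: average the two streams' class probabilities,
# then take the most likely class.
fused = (svm.predict_proba(X_te) + softmax.predict_proba(X_te)) / 2
pred = fused.argmax(axis=1)
print("fused accuracy:", (pred == y_te).mean())
```

Averaging probabilities lets each stream compensate for the other's uncertain predictions, which is the usual motivation for decision-level fusion of complementary classifiers.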
Location: Bangladesh
Start Date: 12-2022
End Date: 12-2025
Amount: $495,000.00
Funder: Australian Research Council
Start Date: 04-2012
End Date: 12-2018
Amount: $375,000.00
Funder: Australian Research Council