ORCID Profile
0000-0003-2142-0154
Current Organisation
University of Adelaide
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2023
Publisher: MDPI AG
Date: 22-10-2021
DOI: 10.3390/ANI11113033
Abstract: The growing world population has increased the demand for animal-sourced protein. However, animal farming productivity is faced with challenges from traditional farming practices, socioeconomic status, and climate change. In recent years, smart sensors, big data, and deep learning have been applied to animal welfare measurement and livestock farming applications, including behaviour recognition and health monitoring. In order to facilitate research in this area, this review summarises and analyses some main techniques used in smart livestock farming, focusing on those related to cattle lameness detection and behaviour recognition. In this study, more than 100 relevant papers on cattle lameness detection and behaviour recognition have been evaluated and discussed. Based on a review and a comparison of recent technologies and methods, we anticipate that intelligent perception for cattle behaviour and welfare monitoring will develop towards standardisation, a larger scale, and intelligence, combined with Internet of things (IoT) and deep learning technologies. In addition, the key challenges and opportunities of future research are also highlighted and discussed.
Publisher: Elsevier BV
Date: 2023
Publisher: Elsevier BV
Date: 02-2022
Publisher: MDPI AG
Date: 30-08-2022
DOI: 10.3390/S22176541
Abstract: Pork accounts for an important proportion of livestock products. For pig farming, a lot of manpower, material resources and time are required to monitor pig health and welfare. As the number of pigs in farming increases, the continued use of traditional monitoring methods may cause stress and harm to pigs and farmers and affect pig health and welfare as well as farming economic output. In addition, the application of artificial intelligence has become a core part of smart pig farming. The precision pig farming system uses sensors such as cameras and radio frequency identification to monitor biometric information such as pig sound and pig behavior in real-time and convert them into key indicators of pig health and welfare. By analyzing the key indicators, problems in pig health and welfare can be detected early, and timely intervention and treatment can be provided, which helps to improve the production and economic efficiency of pig farming. This paper studies more than 150 papers on precision pig farming and summarizes and evaluates the application of artificial intelligence technologies to pig detection, tracking, behavior recognition and sound recognition. Finally, we summarize and discuss the opportunities and challenges of precision pig farming.
Publisher: IEEE
Date: 08-2020
Publisher: Hindawi Limited
Date: 2017
DOI: 10.1155/2017/2157243
Abstract: Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information using stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). In order to represent the scene in depth, multifeature fusion of D-CSLBP and HOG features provides valuable information and permits decreasing the effect of some typical problems in place recognition such as perceptual aliasing. It improves visual recognition performance by taking advantage of depth, texture, and shape information. In addition, for real-time visual localization, locality-sensitive hashing (LSH) was used to compress the high-dimensional multifeature into binary vectors. It can thus speed up the process of image matching. To show its effectiveness, the proposed method is tested and evaluated using real datasets acquired in outdoor environments. Given the obtained results, our approach allows more effective visual localization compared with the state-of-the-art method FAB-MAP.
Publisher: Wiley
Date: 28-03-2022
DOI: 10.1111/ADJ.12909
Abstract: The purpose of this study was to compare the marginal gaps of sequentially milled lithium disilicate (LDS) crowns using two different milling units. One lower left first molar typodont tooth prepared for an LDS crown by an undergraduate student in a simulation clinic was selected. The crown preparation was scanned by a TRIOS 3 scanner and twelve LDS crowns milled by an E4D (E4DM) and a Sirona inLab MC X5 (MCX5) milling unit using identical settings. The crowns were seated onto the original crown preparation and three vertical marginal gap measurements were taken at four locations (mid‐buccal, mid‐lingual, mid‐mesial and mid‐distal) using a stereomicroscope. The mean marginal gap (MMG) was calculated for each individual tooth surface and each crown. The MMG for the E4DM (100.40 μm) was not significantly different from the MCX5 (101.08 μm) milling unit (P = 0.8809). In both units, there was a statistically significant trend of increasing MMG with sequentially milled crowns using the same burs (E4DM P = 0.0133; MCX5 P = 0.0240). The E4DM and MCX5 milling units produced LDS crowns with similar MMGs and within a clinically acceptable range, but with a trend of increasing MMG when analysed sequentially. © 2022 Australian Dental Association
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: MDPI AG
Date: 23-02-2022
DOI: 10.3390/ANI12050558
Abstract: Computer vision-based technologies play a key role in precision livestock farming, and video-based analysis approaches have been advocated as useful tools for automatic animal monitoring, behavior analysis, and efficient welfare measurement management. Accurately and efficiently segmenting animals’ contours from their backgrounds is a prerequisite for vision-based technologies. Deep learning-based segmentation methods have shown good performance through training models on a large amount of pixel-labeled images. However, it is challenging and time-consuming to label animal images due to their irregular contours and changing postures. In order to reduce the reliance on the number of labeled images, one-shot learning with a pseudo-labeling approach is proposed using only one labeled image frame to segment animals in videos. The proposed approach is mainly comprised of an Xception-based Fully Convolutional Neural Network (Xception-FCN) module and a pseudo-labeling (PL) module. Xception-FCN utilizes depth-wise separable convolutions to learn different-level visual features and localize dense prediction based on the one single labeled frame. Then, PL leverages the segmentation results of the Xception-FCN model to fine-tune the model, leading to performance boosts in cattle video segmentation. Systematic experiments were conducted on a challenging feedlot cattle video dataset acquired by the authors, and the proposed approach achieved a mean intersection-over-union score of 88.7% and a contour accuracy of 80.8%, outperforming state-of-the-art methods (OSVOS and OSMN). Our proposed one-shot learning approach could serve as an enabling component for livestock farming-related segmentation and detection applications.
Publisher: Springer International Publishing
Date: 2016
Publisher: MDPI AG
Date: 31-07-2023
DOI: 10.3390/ANI13152472
Abstract: This paper proposes a method for automatic pig detection and segmentation using RGB-D data for precision livestock farming. The proposed method combines the enhanced YOLOv5s model with the Res2Net bottleneck structure, resulting in improved fine-grained feature extraction and ultimately enhancing the precision of pig detection and segmentation in 2D images. Additionally, the method facilitates the acquisition of 3D point cloud data of pigs in a simpler and more efficient way by using the pig mask obtained in 2D detection and segmentation and combining it with depth information. To evaluate the effectiveness of the proposed method, two datasets were constructed. The first dataset consists of 5400 images captured in various pig pens under diverse lighting conditions, while the second dataset was obtained from the UK. The experimental results demonstrated that the improved YOLOv5s_Res2Net achieved a mAP@0.5:0.95 of 89.6% and 84.8% for the pig detection and segmentation tasks on our dataset, while achieving a mAP@0.5:0.95 of 93.4% and 89.4% on the Edinburgh pig behaviour dataset. This approach provides valuable insights for improving pig management, conducting welfare assessments, and estimating weight accurately.
Publisher: Elsevier BV
Date: 11-2022
Publisher: MDPI AG
Date: 28-05-2019
DOI: 10.3390/S19112439
Abstract: Convolutional Network (ConvNet), with its strong image representation ability, has achieved significant progress in the computer vision and robotic fields. In this paper, we propose a visual localization approach based on place recognition that combines the powerful ConvNet features and localized image sequence matching. The image distance matrix is constructed based on the cosine distance of extracted ConvNet features, and then a sequence search technique is applied on this distance matrix for the final visual recognition. To speed up the computational efficiency, the locality sensitive hashing (LSH) method is applied to achieve real-time performances with minimal accuracy degradation. We present extensive experiments on four real world data sets to evaluate each of the specific challenges in visual recognition. A comprehensive performance comparison of different ConvNet layers (each defining a level of features) considering both appearance and illumination changes is conducted. Compared with the traditional approaches based on hand-crafted features and single image matching, the proposed method shows good performances even in the presence of appearance and illumination changes.
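The matching pipeline this abstract describes (cosine distances between ConvNet descriptors, sped up with locality-sensitive hashing) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the descriptor dimensionality, bit count, and feature values are invented, and random-hyperplane LSH is one standard LSH family for cosine similarity.

```python
import math
import random

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def lsh_signature(vec, hyperplanes):
    """Random-hyperplane LSH: one bit per hyperplane (sign of the projection)."""
    return tuple(1 if sum(h_i * v_i for h_i, v_i in zip(h, vec)) >= 0 else 0
                 for h in hyperplanes)

def hamming(sig_a, sig_b):
    """Hamming distance between two binary signatures (cheap proxy
    for cosine distance between the original descriptors)."""
    return sum(x != y for x, y in zip(sig_a, sig_b))

random.seed(0)
dim, n_bits = 8, 16  # toy sizes; real ConvNet descriptors are far larger
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

query = [random.random() for _ in range(dim)]
near = [v + 0.01 * random.random() for v in query]   # near-duplicate descriptor
far = [random.random() for _ in range(dim)]          # unrelated descriptor

# Nearby descriptors agree on most LSH bits, so their Hamming
# distance is no larger than that of an unrelated descriptor.
assert hamming(lsh_signature(query, planes), lsh_signature(near, planes)) <= \
       hamming(lsh_signature(query, planes), lsh_signature(far, planes))
```

In a full system, the binary signatures replace the dense distance-matrix computation for candidate retrieval, and the sequence-search step then runs only over the shortlisted frames.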
Publisher: Elsevier BV
Date: 10-2019
Publisher: Springer International Publishing
Date: 2017
Publisher: Frontiers Media SA
Date: 14-03-2023
DOI: 10.3389/FPLS.2023.1065209
Abstract: The frame of a corn harvester is prone to vibration bending and torsional deformation due to the vibration caused by field road bumps and fluctuations. This poses a serious challenge to the reliability of the machinery. Therefore, it is critical to explore the vibration mechanism and to identify the vibration states under different working conditions. To address the above problem, a vibration state identification method is proposed in this paper. An improved empirical mode decomposition (EMD) algorithm was used to reduce noise for the high-noise, non-stationary vibration signals encountered in the field. A support vector machine (SVM) model was used to identify frame vibration states under different working conditions. The results showed that: (1) the improved EMD algorithm could effectively reduce noise interference and restore the effective information of the original signal; (2) the improved EMD–SVM method identified the vibration states of the frame with an accuracy of 99.21%; (3) the corn ears in the grain tank were not sensitive to low-order vibration, but had an absorption effect on high-order vibration. The proposed method has the potential to be applied for accurately identifying vibration states and improving frame safety.
Publisher: Frontiers Media SA
Date: 23-11-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2022
Publisher: Springer International Publishing
Date: 2014
Publisher: Springer Science and Business Media LLC
Date: 24-03-2022
Publisher: Frontiers Media SA
Date: 05-06-2023
Publisher: MDPI AG
Date: 04-08-2023
DOI: 10.3390/ANI13152521
Abstract: Obtaining animal regions and the relative position relationship of animals in the scene is conducive to further studying animal habits, which is of great significance for smart animal farming. However, the complex breeding environment still makes detection difficult. To address the problems of poor target segmentation effects and the weak generalization ability of existing semantic segmentation models in complex scenes, a semantic segmentation model based on an improved DeepLabV3+ network (Imp-DeepLabV3+) was proposed. Firstly, the backbone network of the DeepLabV3+ model was replaced by MobileNetV2 to enhance the feature extraction capability of the model. Then, the layer-by-layer feature fusion method was adopted in the Decoder stage to integrate high-level semantic feature information with low-level high-resolution feature information at multi-scale to achieve a more precise up-sampling operation. Finally, the SENet module was further introduced into the network to enhance information interaction after feature fusion and improve the segmentation precision of the model under complex datasets. The experimental results demonstrate that the Imp-DeepLabV3+ model achieved a high pixel accuracy (PA) of 99.4%, a mean pixel accuracy (MPA) of 98.1%, and a mean intersection over union (MIoU) of 96.8%. Compared to the original DeepLabV3+ model, the segmentation performance of the improved model significantly improved. Moreover, the overall segmentation performance of the Imp-DeepLabV3+ model surpassed that of other commonly used semantic segmentation models, such as Fully Convolutional Networks (FCNs), Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP), and U-Net. Therefore, this study can be applied to the field of scene segmentation and is conducive to further analyzing individual information and promoting the development of intelligent animal farming.
Publisher: Elsevier BV
Date: 10-2020
Publisher: Elsevier BV
Date: 07-2020
Publisher: Springer International Publishing
Date: 2015
Publisher: American Society of Agricultural and Biological Engineers (ASABE)
Date: 2021
DOI: 10.13031/TRANS.14658
Abstract: Highlights: BiGRU-attention based cow behavior classification was proposed. Key spatial-temporal features were captured for behavior representation. BiGRU-attention achieved over 82% classification accuracy on calf and adult cow datasets. The proposed method could be used for similar animal behavior classification. Abstract. Animal behavior consists of time series activities, which can reflect animals' health and welfare status. Monitoring and classifying animal behavior facilitates management decisions to optimize animal performance, welfare, and environmental outcomes. In recent years, deep learning methods have been applied to monitor animal behavior worldwide. To achieve high behavior classification accuracy, a BiGRU-attention based method is proposed in this article to classify some common behaviors, such as exploring, feeding, grooming, standing, and walking. In our work, (1) Inception-V3 was first applied to extract convolutional neural network (CNN) features for each image frame in videos, (2) a bidirectional gated recurrent unit (BiGRU) was used to further extract spatial-temporal features, (3) an attention mechanism was deployed to allocate weights to each of the extracted spatial-temporal features according to feature similarity, and (4) the weighted spatial-temporal features were fed to a Softmax layer for behavior classification. Experiments were conducted on two datasets (i.e., calf and adult cow), and the proposed method achieved 82.35% and 82.26% classification accuracy on the calf and adult cow datasets, respectively. In addition, in comparison with other methods, the proposed BiGRU-attention method outperformed long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and BiGRU. Overall, the proposed BiGRU-attention method can capture key spatial-temporal features to significantly improve animal behavior classification, which is favorable for automatic behavior classification in precision livestock farming.
Keywords: BiGRU, Cow behavior, Deep learning, LSTM, Precision livestock farming.
Publisher: Frontiers Media SA
Date: 21-12-2022
DOI: 10.3389/FPLS.2022.1056842
Abstract: Maize is susceptible to pest and disease infection, and early disease detection is key to preventing the reduction of maize yields. The raw data used for plant disease detection are commonly RGB images and hyperspectral images (HSIs). RGB images can be acquired rapidly and at low cost, but the detection accuracy is not satisfactory. On the contrary, using HSIs tends to yield higher detection accuracy, but HSIs are difficult and costly to obtain in the field. To overcome this contradiction, we have proposed a maize spectral recovery disease detection framework which includes two parts: a maize spectral recovery network based on the advanced hyperspectral recovery convolutional neural network (HSCNN+) and a maize disease detection network based on a convolutional neural network (CNN). Taking raw RGB data as input to the framework, the reconstructed HSIs output by the first part are used as input to the disease detection network to achieve the detection task. As a result, the detection accuracy obtained using the low-cost raw RGB data is almost the same as that obtained using HSIs directly. The HSCNN+ is found to fit our spectral recovery model well, and the reconstruction fidelity was satisfactory. Experimental results demonstrate that the reconstructed HSIs efficiently improve detection accuracy compared with raw RGB images in the tested scenarios, especially in the complex environment scenario, for which the detection accuracy increases by 6.14%. The proposed framework has the advantages of speed, low cost and high detection precision. Moreover, the framework offers the possibility of real-time and precise field disease detection and can be applied in agricultural robots.
Publisher: Elsevier BV
Date: 06-2021
Publisher: American Society of Agricultural and Biological Engineers
Date: 2022
Publisher: MDPI AG
Date: 25-10-2017
DOI: 10.3390/S17112442
Publisher: IEEE
Date: 08-2020
Publisher: Elsevier BV
Date: 11-2023
Publisher: Frontiers Media SA
Date: 30-09-2022
DOI: 10.3389/FPLS.2022.1003243
Abstract: The precision spray of liquid fertilizer and pesticide to plants is an important task for agricultural robots in precision agriculture. By reducing the amount of chemicals being sprayed, it brings a more economical and eco-friendly solution compared to conventional non-discriminated spray. The prerequisite of precision spray is to detect and track each plant. Conventional detection or segmentation methods detect all plants in the image captured under the robotic platform, without knowing the ID of each plant. To spray pesticides to each plant exactly once, tracking of every plant is needed in addition to detection. In this paper, we present LettuceTrack, a novel Multiple Object Tracking (MOT) method to simultaneously detect and track lettuces. When the ID of each plant is obtained from the tracking method, the robot knows whether a plant has been sprayed before, and therefore it will only spray plants that have not been sprayed. The proposed method adopts YOLO-V5 for detection of the lettuces, and a novel plant feature extraction and data association algorithm is introduced to effectively track all plants. The proposed method can recover the ID of a plant even if the plant has previously moved out of the field of view of the camera, a case for which existing MOT methods usually fail and assign a new plant ID. Experiments are conducted to show the effectiveness of the proposed method, and a comparison with four state-of-the-art MOT methods is presented to prove the superior performance of the proposed method in the lettuce tracking application, along with its limitations. Though the proposed method is tested with lettuce, it can potentially be applied to other vegetables such as broccoli or sugar beet.
Publisher: American Society of Agricultural and Biological Engineers
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2015
Publisher: MDPI AG
Date: 18-10-2023
DOI: 10.3390/ANI13203250
Publisher: Elsevier BV
Date: 2022
Publisher: Elsevier BV
Date: 11-2021
Publisher: Elsevier BV
Date: 02-2022
Publisher: Elsevier BV
Date: 04-2021
Publisher: Elsevier BV
Date: 03-2023
Publisher: Springer International Publishing
Date: 2023
Publisher: Frontiers Media SA
Date: 09-06-2022
Abstract: Grape downy mildew (GDM) disease is a common plant leaf disease, and it causes serious damage to grape production, reducing yield and fruit quality. Traditional manual disease detection relies on farm experts and is often time-consuming. Computer vision technologies and artificial intelligence could provide automatic disease detection for controlling the spread of disease on the grapevine in real time in precision viticulture. To achieve the best trade-off between GDM detection accuracy and speed under natural environments, a deep learning based approach named YOLOv5-CA is proposed in this study. Here, a coordinate attention (CA) mechanism is integrated into YOLOv5, which highlights the downy mildew disease-related visual features to enhance detection performance. A challenging GDM dataset was acquired in a vineyard under natural scenes (consisting of different illuminations, shadows, and backgrounds) to test the proposed approach. Experimental results show that the proposed YOLOv5-CA achieved a detection precision of 85.59%, a recall of 83.70%, and a mAP@0.5 of 89.55%, which is superior to popular methods including Faster R-CNN, YOLOv3, and YOLOv5. Furthermore, our proposed approach, with inference running at 58.82 frames per second, could be deployed for real-time disease control requirements. In addition, the proposed YOLOv5-CA based approach could effectively capture leaf disease related visual features, resulting in higher GDM detection accuracy. Overall, this study provides a favorable deep learning based approach for the rapid and accurate diagnosis of grape leaf diseases in the field of automatic disease detection.
Publisher: MDPI AG
Date: 30-09-2022
Abstract: Achieving rapid and accurate detection of apple leaf diseases in the natural environment is essential for the growth of apple plants and the development of the apple industry. In recent years, deep learning has been widely studied and applied to apple leaf disease detection. However, existing networks have too many parameters to be easily deployed or lack research on leaf diseases in complex backgrounds to effectively use in real agricultural environments. This study proposes a novel deep learning network, YOLOX-ASSANano, which is an improved lightweight real-time model for apple leaf disease detection based on YOLOX-Nano. We improved the YOLOX-Nano backbone using a designed asymmetric ShuffleBlock, a CSP-SA module, and blueprint-separable convolution (BSConv), which significantly enhance feature-extraction capability and boost detection performance. In addition, we construct a multi-scene apple leaf disease dataset (MSALDD) for experiments. The experimental results show that the YOLOX-ASSANano model with only 0.83 MB parameters achieves 91.08% mAP on MSALDD and 58.85% mAP on the public dataset PlantDoc with a speed of 122 FPS. This study indicates that the YOLOX-ASSANano provides a feasible solution for the real-time diagnosis of apple leaf diseases in natural scenes, and could be helpful for the detection of other plant diseases.
Publisher: Elsevier BV
Date: 2023
DOI: 10.2139/SSRN.4385926
Publisher: IEEE
Date: 10-2016
Publisher: American Society of Agricultural and Biological Engineers
Date: 2013
Publisher: Frontiers Media SA
Date: 18-11-2022
Publisher: Elsevier BV
Date: 2019
No related grants have been discovered for Yongliang Qiao.