ORCID Profile
0000-0002-3860-9458
Current Organisation
UNSW Sydney
Publisher: Hindawi Limited
Date: 18-10-2023
DOI: 10.1155/2023/8634742
Publisher: Elsevier BV
Date: 2019
Publisher: Elsevier BV
Date: 11-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 05-2011
Publisher: IEEE
Date: 10-2009
Publisher: Elsevier BV
Date: 2016
Publisher: IEEE
Date: 10-2009
Publisher: Elsevier BV
Date: 06-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2017
DOI: 10.1109/mprv.2017.13
Publisher: SAGE Publications
Date: 03-2018
Publisher: IEEE
Date: 05-2015
Publisher: IEEE
Date: 09-2015
Publisher: Elsevier BV
Date: 05-2017
Publisher: Elsevier BV
Date: 08-2018
Publisher: Elsevier BV
Date: 12-2015
Publisher: Hindawi Limited
Date: 26-06-2020
DOI: 10.1111/ajgw.12444
Publisher: Elsevier BV
Date: 05-2021
Publisher: Elsevier BV
Date: 2019
Publisher: Elsevier BV
Date: 03-2019
Publisher: IEEE
Date: 09-2013
Publisher: CSIRO Publishing
Date: 2017
DOI: 10.1071/an16460
Abstract: The objective for the present trial was to understand whether dairy heifers could be trained to respond to an audio cue paired with a feed reward. The use of acoustic conditioning to induce cattle movement has not previously been tested with animal-mounted devices to call cattle both individually and as a group. Five heifers underwent testing for 6 days as part of an 18-day field trial (12 days of conditioning). The 6-day testing and data-collection period involved the heifers being called via a smartphone device mounted on the cheek strap of a halter. Heifers were called either as individuals or as a group. When the audio cue was sent, heifers were expected to traffic from a group-holding area to a feeding area (~80-m distance) to receive an allocation of a grain-based concentrate. Heifers were significantly (P = 0.001) more likely to approach the feeding area when called as a group (91% response rate) than when they were called as individuals (67% response rate). When heifers did respond to being called, their time to traffic to the feed area was shorter (P < 0.001) when they were called as a group (77.9 ± 55.4 s) than when they were called as individuals (139.3 ± 89.2 s). The present trial has shown that animals can be trained to respond to an audio cue paired with a feed reward, highlighting the potential for acoustic conditioning to improve voluntary cow movement with an animal-mounted device. It also highlights the limitations of cattle responding to being called individually compared with being called as a group.
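The abstract reports a significance test on trafficking times between the two calling conditions. As a minimal sketch of that kind of comparison, the Python snippet below runs Welch's t-test from the quoted means and standard deviations; the per-condition trial counts are not given in the abstract, so the n values here are hypothetical placeholders, not the trial's actual sample sizes.

    # Hedged sketch: Welch's t-test from the summary statistics quoted above.
    # n_group and n_indiv are HYPOTHETICAL -- the abstract does not report
    # how many call events fell in each condition.
    from scipy import stats

    mean_group, sd_group = 77.9, 55.4    # trafficking time, called as a group (s)
    mean_indiv, sd_indiv = 139.3, 89.2   # trafficking time, called as individuals (s)
    n_group, n_indiv = 25, 25            # placeholder trial counts

    t_stat, p_value = stats.ttest_ind_from_stats(
        mean_group, sd_group, n_group,
        mean_indiv, sd_indiv, n_indiv,
        equal_var=False,                 # Welch's correction for unequal variances
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")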
Publisher: Frontiers Media SA
Date: 25-09-2020
Publisher: Modestum Publishing Ltd
Date: 03-01-2019
Publisher: Elsevier BV
Date: 04-2019
Publisher: Elsevier BV
Date: 2019
Publisher: Springer Science and Business Media LLC
Date: 09-03-2021
DOI: 10.1186/s13007-021-00727-4
Abstract: Stomata analysis using microscope imagery provides important insight into plant physiology, health and the surrounding environmental conditions. Plant scientists are now able to conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, thereby limiting general applicability. In addition, existing methods only generate bounding-boxes around detected stomata, which require users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm which can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique and magnification level. The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour space biases occurring from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries. The feature pyramid network embedded in the Mask R-CNN is utilised to identify stomata at different scales. Finally, a statistical filter is implemented at the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show the generalisability of the solution. Results indicated that the proposed approach can detect stomata with a precision, recall, and F-score of 95.10%, 83.34%, and 88.61%, respectively. A separate test conducted by comparing estimated stomata boundary values with manually measured data showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach. The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis whilst eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be directly deployed in Google Colab or any other TensorFlow environment.
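The three-stage structure described above (colour normalisation, Mask R-CNN with a feature pyramid network, statistical filtering of the outputs) maps naturally onto off-the-shelf instance-segmentation tooling. The sketch below is illustrative only: the authors' released code is TensorFlow-based, whereas this uses torchvision's pretrained Mask R-CNN as a stand-in, and the greyscale normalisation and 0.5 score threshold are hypothetical simplifications of the paper's pre-processing and statistical filter.

    # Illustrative three-stage pipeline mirroring the structure described above:
    # (1) colour normalisation, (2) Mask R-CNN instance segmentation with an
    # FPN backbone, (3) filtering of low-confidence detections. NOT the
    # authors' implementation; a COCO-pretrained model would need fine-tuning
    # on labelled stomata before detecting anything useful.
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    def normalise_colour(img: Image.Image) -> Image.Image:
        # Stage 1 (simplified): collapse to greyscale and back to RGB to
        # strip colour-space bias between imaging techniques.
        return img.convert("L").convert("RGB")

    def detect_stomata(path: str, score_thresh: float = 0.5):
        model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # FPN is built in
        model.eval()
        img = to_tensor(normalise_colour(Image.open(path)))
        with torch.no_grad():
            out = model([img])[0]  # Stage 2: per-instance boxes, masks, scores
        # Stage 3 (hypothetical threshold standing in for the paper's
        # statistical filter): keep confident detections only.
        keep = out["scores"] > score_thresh
        return out["boxes"][keep], out["masks"][keep]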
Publisher: Elsevier BV
Date: 06-2020
Publisher: Elsevier BV
Date: 2016
Publisher: Springer Science and Business Media LLC
Date: 08-11-2017
Publisher: OSA
Date: 2016
Publisher: Elsevier BV
Date: 07-2015
Publisher: Hindawi Limited
Date: 20-11-2018
DOI: 10.1111/ajgw.12374
Publisher: Wiley
Date: 25-07-2012
DOI: 10.1002/rob.21432
Publisher: IEEE
Date: 11-2012
Publisher: International Society for Horticultural Science (ISHS)
Date: 04-2018
Publisher: Elsevier BV
Date: 2019
Publisher: Emerald
Date: 28-03-2008
DOI: 10.1108/02602280810856688
Abstract: The purpose of this paper is to present a localisation system for an indoor rotary-wing micro aerial vehicle (MAV) that uses three onboard LEDs and a base station-mounted active vision unit. A pair of blade-mounted cyan LEDs and a tail-mounted red LED are used as onboard landmarks. A base station tracks the landmarks and estimates the pose of the MAV in real time by analysing images taken using an active vision unit. In each image, the ellipse formed by the cyan LEDs is used for 5-degree-of-freedom (DoF) pose estimation, with yaw estimation from the red LED providing the 6th DoF. Localisation errors of about 1-3.5 per cent at various ranges, roll angles and angular speeds below 45°/s, relative to a base station at a known location, indicate that the MAV can be accurately localised at 9-12 Hz in an indoor environment. Line-of-sight between the base station and the MAV is required, and yaw-estimation accuracy is limited at long distances. Additional yaw sensors and dynamic zoom are among future work. With an unmanned ground vehicle (UGV) carrying its own localisation sensor as the base station, the developed system encourages the use of autonomous indoor rotary-wing MAVs in various robotics applications, such as urban search and rescue. The most significant contribution of this paper is the innovative LED configuration allowing full 6 DoF pose estimation using three LEDs, one camera and no fixed infrastructure. The active vision unit enables a wide range of observable flight as the ellipse generated by the cyan LEDs is recognisable from almost any direction.
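As a rough illustration of the landmark-extraction step described above, the sketch below segments the cyan and red LEDs in a frame, fits an ellipse to the cyan blade disc, and takes yaw from the tail LED's bearing about the ellipse centre. The HSV thresholds are hypothetical, and the full 5-DoF recovery of the disc's pose from the fitted ellipse (the paper's core geometry) is not reproduced here.

    # Hedged sketch of the LED landmark extraction: ellipse from the cyan
    # blade LEDs, yaw from the red tail LED. HSV ranges are HYPOTHETICAL
    # and assume both LED colours are visible in the frame.
    import math
    import cv2
    import numpy as np

    def led_landmarks(bgr: np.ndarray):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        cyan = cv2.inRange(hsv, (80, 100, 100), (100, 255, 255))
        red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))

        pts = cv2.findNonZero(cyan)          # needs >= 5 cyan pixels
        ellipse = cv2.fitEllipse(pts)        # (centre, axes, angle) of blade disc
        (cx, cy), _, _ = ellipse

        m = cv2.moments(red, binaryImage=True)
        rx, ry = m["m10"] / m["m00"], m["m01"] / m["m00"]

        # Image-plane bearing of the tail LED about the disc centre supplies
        # the 6th DoF (yaw); sign convention depends on camera mounting.
        yaw = math.degrees(math.atan2(ry - cy, rx - cx))
        return ellipse, yaw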
Start Date: 2017
End Date: Not available
Funder: Horticulture Innovation Australia