ORCID Profile
0000-0002-3029-6717
Current Organisations
Full Professor, University of Tasmania
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2014
Publisher: Elsevier BV
Date: 2021
Publisher: MDPI AG
Date: 17-10-2021
Abstract: Uncooled thermal infrared sensors are increasingly deployed on unmanned aerial systems (UAS) for agriculture, forestry, wildlife surveys, and surveillance. Acquiring thermal data requires accurate and uniform testing of equipment to ensure precise temperature measurements. We modified an uncooled thermal infrared sensor, specifically designed for UAS remote sensing, with a proprietary external heated shutter as a calibration source. The performance of the modified thermal sensor and a standard thermal sensor (i.e., without a heated shutter) was compared under both field and temperature-modulated laboratory conditions. During laboratory trials with a blackbody source at 35 °C over a 150 min testing period, the modified and unmodified thermal sensors produced temperature ranges of 34.3–35.6 °C and 33.5–36.4 °C, respectively. A laboratory experiment also simulated flight conditions by introducing airflow over the thermal sensor at 4 m/s. With the blackbody source held at a constant 25 °C, the introduction of 2 min of airflow resulted in a ‘shock cooling’ event in both the modified and unmodified sensors, which oscillated between 19–30 °C and −15–65 °C, respectively. Following the initial ‘shock cooling’ event, the modified and unmodified thermal sensors oscillated between 22–27 °C and 5–45 °C, respectively. During field trials over a pine plantation, the modified thermal sensor also outperformed the unmodified sensor in a side-by-side comparison. We found that a mounted heated shutter improved thermal measurements, producing more consistent, accurate temperature data for thermal mapping projects.
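The heated-shutter approach above amounts to periodically re-zeroing the sensor against a target of known temperature. As an illustration only (the paper's actual calibration is proprietary; the function name and all values below are hypothetical), a flat-field offset correction might look like:

```python
import numpy as np

def shutter_correct(frame, shutter_measured, shutter_reference):
    """Flat-field offset correction: shift every pixel by the bias the
    sensor shows when imaging the heated shutter (hypothetical API)."""
    return frame + (shutter_reference - shutter_measured)

# Hypothetical values: a small raw frame, the sensor's reading of the
# shutter, and the shutter's known thermistor temperature (degrees C).
frame = np.array([[24.1, 24.9], [25.3, 24.6]])
corrected = shutter_correct(frame, shutter_measured=34.2,
                            shutter_reference=35.0)
```

Re-applying an offset of this kind each time the shutter closes is one way drift of the sort reported above can be suppressed.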
Publisher: MDPI AG
Date: 05-02-2015
DOI: 10.3390/RS70201736
Publisher: MDPI AG
Date: 10-07-2015
Publisher: Elsevier BV
Date: 06-2018
Publisher: MDPI AG
Date: 18-03-2021
DOI: 10.3390/FIRE4010014
Abstract: With an increase in the frequency and severity of wildfires across the globe, and resultant changes to long-established fire regimes, the mapping of fire severity is a vital part of monitoring ecosystem resilience and recovery. The emergence of unoccupied aircraft systems (UAS) and compact sensors (RGB and LiDAR) provides new opportunities to map fire severity. This paper compares metrics derived from UAS Light Detection and Ranging (LiDAR) point clouds and UAS image-based products to classify fire severity. A workflow is developed that derives novel metrics describing vegetation structure and fire severity from UAS remote sensing data, fully utilising the vegetation information available in both data sources. UAS imagery and LiDAR data were captured pre- and post-fire over a 300 m by 300 m study area in Tasmania, Australia. The study area featured a vegetation gradient from sedgeland vegetation (e.g., button grass, 0.2 m) to forest (e.g., Eucalyptus obliqua and Eucalyptus globulus, 50 m). To classify the vegetation and fire severity, a comprehensive set of variables describing structural, textural and spectral characteristics was gathered from the UAS image and UAS LiDAR datasets. A recursive feature elimination process was used to highlight the subsets of variables to be included in random forest classifiers. The classifiers were then used to map vegetation and severity across the study area. The results indicate that UAS LiDAR provided overall accuracy similar to the UAS image and combined (UAS LiDAR and UAS image predictor values) data streams for classifying vegetation (UAS image: 80.6%, UAS LiDAR: 78.9%, Combined: 83.1%) and severity in areas of forest (UAS image: 76.6%, UAS LiDAR: 74.5%, Combined: 78.5%) and areas of sedgeland (UAS image: 72.4%, UAS LiDAR: 75.2%, Combined: 76.6%). These results indicate that UAS SfM and LiDAR point clouds can be used to assess fire severity at very high spatial resolution.
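The classification pipeline the abstract describes, recursive feature elimination feeding a random forest, can be sketched with scikit-learn on synthetic stand-in predictors. The feature counts and parameters here are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for per-segment predictors (structural, textural,
# spectral metrics) and severity-class labels; counts are assumptions.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

# Recursive feature elimination ranks predictors with a random forest
# and drops the weakest until the requested subset remains.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=6).fit(X, y)
X_subset = X[:, selector.support_]

# Final classifier trained on the retained predictor subset.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_subset, y)
```

Running elimination with the same model family that does the final classification keeps the selected subset aligned with what the forest actually exploits.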
Publisher: MDPI AG
Date: 14-05-2012
DOI: 10.3390/RS4051392
Publisher: MDPI AG
Date: 10-06-2020
DOI: 10.3390/S20113316
Abstract: Thermal infrared cameras provide unique information on surface temperature that can benefit a range of environmental, industrial and agricultural applications. However, the use of uncooled thermal cameras for field and unmanned aerial vehicle (UAV) based data collection is often hindered by vignette effects, sensor drift, ambient temperature influences and measurement bias. Here, we develop and apply an ambient temperature-dependent radiometric calibration function that is evaluated against three thermal infrared sensors (Apogee SI-11 (Apogee Electronics, Santa Monica, CA, USA), FLIR A655sc (FLIR Systems, Wilsonville, OR, USA), TeAx 640 (TeAx Technology, Wilnsdorf, Germany)). Upon calibration, all systems demonstrated significant improvement in measured surface temperatures when compared against a temperature-modulated black body target. The laboratory calibration process used a series of calibrated resistance temperature detectors to measure the temperature of a black body at different ambient temperatures, from which calibration equations were derived for the thermal data acquired by the three sensors. As a point-collecting device, the Apogee sensor was corrected for sensor bias and ambient temperature influences. For the 2D thermal cameras, each pixel was calibrated independently, with results showing that measurement bias and vignette effects were greatly reduced for the FLIR A655sc (root mean squared error (RMSE) reduced from 6.219 to 0.815 °C) and TeAx 640 (RMSE reduced from 3.438 to 1.013 °C) cameras. This relatively straightforward approach to the radiometric calibration of thermal infrared sensors can enable more accurate surface temperature retrievals to support field and UAV-based data collection efforts.
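An ambient temperature-dependent calibration of the kind described can be sketched as a least-squares fit of black body reference temperatures against raw sensor and ambient readings. The linear model form and every value below are assumptions for illustration, not the paper's published coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated laboratory readings: black body truth, raw sensor output,
# and ambient temperature (all hypothetical values, in degrees C).
t_amb = rng.uniform(10, 40, 200)
t_bb = rng.uniform(15, 45, 200)
t_raw = 0.9 * t_bb + 0.2 * t_amb - 1.5 + rng.normal(0, 0.05, 200)

# Fit T_bb ~ a*T_raw + b*T_amb + c by ordinary least squares.
A = np.column_stack([t_raw, t_amb, np.ones_like(t_raw)])
coef, *_ = np.linalg.lstsq(A, t_bb, rcond=None)
a, b, c = coef

def calibrate(raw, amb):
    """Map a raw sensor reading plus ambient temperature to a
    calibrated surface temperature."""
    return a * raw + b * amb + c

rmse = np.sqrt(np.mean((calibrate(t_raw, t_amb) - t_bb) ** 2))
```

For the 2D cameras the abstract describes fitting such a model per pixel, which is the same regression repeated over the detector array.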
Publisher: MDPI AG
Date: 25-05-2012
DOI: 10.3390/RS4061519
Publisher: Elsevier BV
Date: 12-2014
Publisher: Springer Netherlands
Date: 2011
Publisher: MDPI AG
Date: 02-05-2014
DOI: 10.3390/RS6054003
Publisher: MDPI AG
Date: 07-03-2016
DOI: 10.3390/F7030062
Publisher: Elsevier BV
Date: 10-2020
Publisher: MDPI AG
Date: 14-12-2018
Abstract: The sub-alpine and alpine Sphagnum peatlands in Australia are geographically constrained to poorly drained areas c. 1000 m a.s.l. Sphagnum is an important contributor to the resilience of peatlands; however, it is also very sensitive to fire and often shows slow recovery after being damaged. Recovery is largely dependent on a sufficient water supply and impeded drainage. Monitoring the fragmented areas of Australia’s peatlands can be achieved by capturing ultra-high spatial resolution imagery from unmanned aerial systems (UAS). High-resolution digital surface models (DSMs) can be created from UAS imagery, from which hydrological models can be derived to monitor hydrological changes and assist with the rehabilitation of damaged peatlands. One of the constraints on the use of UAS is the intensive fieldwork required, and the need to distribute ground control points (GCPs) adds to fieldwork complexity. GCPs are often used for georeferencing the UAS imagery, as well as for removing the artificial tilting and doming of the photogrammetric model caused by camera distortions. In this study, Tasmania’s northern peatlands were mapped to test the viability of creating hydrological models. The case study was further used to test different GCP scenarios to assess their effect on DSM quality. Of the five sites, three required the use of all (16–20) GCPs to create accurate DSMs, whereas the other two sites provided accurate DSMs when using only four GCPs. Hydrological maps produced with the TauDEM software package showed high visual accuracy and good potential for rehabilitation guidance when using ground-controlled DSMs.
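TauDEM-style hydrological modelling starts from flow directions computed on the DSM. A minimal D8 primitive (steepest-descent neighbour per cell) is sketched below as an illustrative stand-in, not TauDEM's implementation:

```python
import numpy as np

def d8_direction(dem):
    """For each interior cell, the index (0-7) of the steepest-descent
    neighbour, or -1 for pits/flats and border cells: the D8 primitive
    underlying TauDEM-style flow modelling (illustrative only)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    out = np.full(dem.shape, -1)
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            # Drop per unit distance; diagonals divided by sqrt(2).
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                     for dr, dc in offsets]
            if max(drops) > 0:
                out[r, c] = int(np.argmax(drops))
    return out

dem = np.array([[5., 5., 5.],
                [5., 4., 2.],
                [5., 5., 5.]])
directions = d8_direction(dem)   # centre cell drains east (offset index 4)
```

Flow accumulation, catchments and the hydrological maps the abstract mentions are all built on top of a direction grid like this one.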
Publisher: Copernicus GmbH
Date: 27-07-2012
DOI: 10.5194/ISPRSARCHIVES-XXXIX-B1-429-2012
Abstract: This study is the first to use an Unmanned Aerial Vehicle (UAV) for mapping moss beds in Antarctica. Mosses can be used as indicators of the regional effects of climate change, so mapping and monitoring their extent and health is important. UAV aerial photography provides ultra-high resolution spatial data for this purpose. We developed a technique to extract an extremely dense 3D point cloud from overlapping UAV aerial photography based on structure from motion (SfM) algorithms. The combination of SfM and patch-based multi-view stereo vision algorithms resulted in a 2 cm resolution digital terrain model (DTM). This detailed topographic information, combined with vegetation indices derived from a 6-band multispectral sensor, enabled the assessment of moss bed health. This novel UAV system has allowed us to map different environmental characteristics of the moss beds at ultra-high resolution, providing a better understanding of these fragile Antarctic ecosystems. The paper provides details on the different UAV instruments and the image processing framework resulting in DEMs, vegetation indices, and terrain derivatives.
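Vegetation indices of the kind used here for moss-health assessment are simple per-pixel band ratios. NDVI is shown below as a representative example; the paper's specific index and band choice are not stated in this abstract, and all values are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero

# Hypothetical reflectance tiles for two bands of a multispectral sensor.
nir = np.array([[0.6, 0.5], [0.4, 0.3]])
red = np.array([[0.1, 0.2], [0.2, 0.3]])
vi = ndvi(nir, red)   # higher values indicate healthier vegetation
```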
Publisher: Wiley
Date: 12-2021
DOI: 10.1111/EMR.12505
Abstract: The benefits of using remote sensing technologies to inform and monitor ecological restoration of forests, from the community to the individual, are presented. At the community level, we link remotely sensed measures of structural complexity with animal behaviour. At the plot level, we monitor the return of vegetation structure and ecosystem services (e.g. carbon sequestration) using data-rich three-dimensional point clouds. At the individual level, we use high-resolution images to accurately classify plants to species and provenance, and show genetic-based variation in canopy structural traits. To facilitate the wider use of remote sensing in restoration, we discuss the challenges that remain to be resolved.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2019
Publisher: Wiley
Date: 13-10-2023
DOI: 10.1002/RSE2.371
Publisher: SAGE Publications
Date: 24-12-2014
Abstract: In this study, we present a flexible, cost-effective, and accurate method to monitor landslides using a small unmanned aerial vehicle (UAV) to collect aerial photography. In the first part, we apply a Structure from Motion (SfM) workflow to derive a 3D model of a landslide in southeast Tasmania from multi-view UAV photography. The geometric accuracy of the 3D model and resulting DEMs and orthophoto mosaics was tested with ground control points coordinated with geodetic GPS receivers. A horizontal accuracy of 7 cm and a vertical accuracy of 6 cm were achieved. In the second part, two DEMs and orthophoto mosaics acquired on 16 July 2011 and 10 November 2011 were compared to study landslide dynamics. The COSI-Corr image correlation technique was evaluated to quantify and map terrain displacements. The magnitude and direction of the displacement vectors derived from correlating two hillshaded DEM layers corresponded to a visual interpretation of landslide change. Results show that the algorithm can accurately map displacements of the toes, chunks of soil, and vegetation patches on top of the landslide, but is not capable of mapping the retreat of the main scarp. The conclusion is that UAV-based imagery, in combination with 3D scene reconstruction and image correlation algorithms, provides flexible and effective tools to map and monitor landslide dynamics.
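COSI-Corr recovers displacements by correlating image chips between acquisition dates. The same principle, reduced to a single integer-pixel phase-correlation estimate in NumPy, is sketched below as a simplified stand-in (not COSI-Corr's sub-pixel algorithm):

```python
import numpy as np

def pixel_shift(ref, moved):
    """Integer (row, col) displacement of `moved` relative to `ref`,
    estimated by phase correlation: the peak of the inverse FFT of the
    normalised cross-power spectrum marks the shift."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:      # wrap into signed range
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # e.g. a hillshade chip
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))    # 3 rows down, 5 cols left
```

Repeating this over a grid of chips yields the displacement vector field used to map landslide motion between the two survey dates.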
Publisher: Elsevier BV
Date: 04-2014
Publisher: Springer Science and Business Media LLC
Date: 07-01-2020
Publisher: MDPI AG
Date: 20-12-2019
DOI: 10.3390/RS12010034
Abstract: Hyperspectral systems integrated on unmanned aerial vehicles (UAV) provide unique opportunities to conduct high-resolution multitemporal spectral analysis for diverse applications. However, additional time-consuming rectification efforts in postprocessing are routinely required, since geometric distortions can be introduced by UAV movements during flight, even if navigation/motion sensors are used to track the position of each scan. Part of the challenge in obtaining high-quality imagery relates to the lack of a fast processing workflow that can retrieve geometrically accurate mosaics while optimising the ground data collection efforts. To address this problem, we explored a computationally robust automated georectification and mosaicking methodology. It operates effectively in a parallel computing environment and evaluates results against a number of high-spatial-resolution datasets (mm to cm resolution) collected using a push-broom sensor and an associated RGB frame-based camera. The methodology estimates the luminance of the hyperspectral swaths and coregisters these against a luminance RGB-based orthophoto. The procedure includes an improved coregistration strategy that integrates the Speeded-Up Robust Features (SURF) algorithm with the Maximum Likelihood Estimator Sample Consensus (MLESAC) approach. SURF identifies common features between each swath and the RGB orthomosaic, while MLESAC fits the best geometric transformation model to the retrieved matches. Individual scanlines are then geometrically transformed and merged into a single spatially continuous mosaic, reaching high positional accuracies with only a small number of ground control points (GCPs). The capacity of the workflow to achieve high spatial accuracy was demonstrated by examining statistical metrics such as RMSE, MAE, and the relative positional accuracy at the 95% confidence level. Comparison against a user-generated georectification demonstrates that the automated approach speeds up the coregistration process by 85%.
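The feature-matching-plus-robust-estimation step can be sketched with synthetic matches: a minimal-sample estimator fits a geometric transform despite gross mismatches. Plain RANSAC is used below as a simplified stand-in for MLESAC (which scores hypotheses by likelihood rather than inlier count); all data, parameters and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SURF-style matches: swath points related to orthomosaic points
# by an affine transform, with 30 of 100 matches replaced by gross outliers.
n = 100
src = rng.uniform(0, 500, (n, 2))
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])
t_true = np.array([12.0, -7.5])
dst = src @ A_true.T + t_true + rng.normal(0, 0.3, (n, 2))
dst[rng.choice(n, 30, replace=False)] += rng.uniform(50, 150, (30, 2))

def fit_affine(s, d):
    """Least-squares 2D affine fit; rows of M hold [A^T; t]."""
    X = np.hstack([s, np.ones((len(s), 1))])
    M, *_ = np.linalg.lstsq(X, d, rcond=None)
    return M

best_inliers = None
for _ in range(200):                       # RANSAC hypothesis loop
    idx = rng.choice(n, 3, replace=False)  # minimal sample for an affine
    M = fit_affine(src[idx], dst[idx])
    resid = np.linalg.norm(
        np.hstack([src, np.ones((n, 1))]) @ M - dst, axis=1)
    inliers = resid < 2.0                  # assumed inlier threshold (px)
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

M_final = fit_affine(src[best_inliers], dst[best_inliers])  # refit on inliers
```

The consensus step is what lets the workflow tolerate mismatched features, so each scanline's transform stays accurate without dense ground control.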
Start Date: 2011
End Date: 2011
Funder: Winifred Violet Scott Charitable Trust