ORCID Profile
0000-0002-3017-5464
Current Organisation
University of Tokyo
Publisher: CSIRO Publishing
Date: 2017
DOI: 10.1071/FP16123
Abstract: Ground cover is an important physiological trait affecting crop radiation capture, water-use efficiency and grain yield. It is challenging to efficiently measure ground cover with reasonable precision for large numbers of plots, especially in tall crop species. Here we combined two image-based methods to estimate plot-level ground cover for three species, from either an ortho-mosaic or from undistorted (i.e. corrected for lens and camera effects) images captured by cameras on a low-altitude unmanned aerial vehicle (UAV). Reconstructed point clouds and ortho-mosaics for the whole field were created and a customised image processing workflow was developed to (1) segment the 'whole-field' datasets into individual plots, and (2) 'reverse-calculate' each plot from each undistorted image. Ground cover for individual plots was calculated by an efficient vegetation segmentation algorithm. For 79% of plots, estimated ground cover was greater from the ortho-mosaic than from images, particularly when plants were small, or when plants were older/taller in large plots. While there was good agreement between the ground cover estimates from ortho-mosaic and images when the target plot was positioned at a near-nadir view near the centre of the image (cotton: R2 = 0.97, sorghum: R2 = 0.98, sugarcane: R2 = 0.84), ortho-mosaic estimates were 5% greater than estimates from these near-nadir images. Because each plot appeared in multiple images, there were multiple estimates of ground cover, some of which should be excluded, e.g. when the plot is near the edge of an image. Considering only the images with a near-nadir view, the reverse calculation provides a more precise estimate of ground cover compared with the ortho-mosaic. The methodology is suitable for high-throughput phenotyping in agronomy, physiology and breeding for different crop species and can be extended to provide pixel-level data from other types of cameras, including thermal and multi-spectral models.
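The plot-level ground cover in this workflow reduces to the fraction of vegetation pixels in a segmented plot image. Below is a minimal Python sketch of one common vegetation-segmentation baseline (the Excess Green index with Otsu thresholding); the paper's own "efficient vegetation segmentation algorithm" is not specified here, so this approach is an illustrative assumption, not the authors' method.
```python
# Minimal sketch: plot-level ground cover from an RGB plot image using the
# Excess Green (ExG) index with Otsu thresholding. A common vegetation
# segmentation baseline, assumed here for illustration.
import numpy as np
from skimage.filters import threshold_otsu

def ground_cover(rgb: np.ndarray) -> float:
    """Return the fraction of vegetation pixels in an RGB plot image."""
    img = rgb.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0                # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b                  # Excess Green index
    mask = exg > threshold_otsu(exg)       # vegetation = high-ExG pixels
    return float(mask.mean())              # ground cover = vegetation fraction
```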
Publisher: Frontiers Media SA
Date: 26-02-2019
Publisher: Springer Science and Business Media LLC
Date: 19-05-2023
DOI: 10.1038/s41597-023-02098-y
Abstract: Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation, or pixel-wise classification, of RGB images acquired at the ground level into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned using new labelled datasets. This motivated the creation of the VegAnn - Vegetation Annotation - dataset, a collection of 3775 multi-crop RGB images acquired for different phenological stages using different systems and platforms in diverse illumination conditions. We anticipate that VegAnn will help improve segmentation algorithm performance, facilitate benchmarking and promote large-scale crop vegetation segmentation research.
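As a companion to the benchmarking use case described above, the sketch below compares a predicted vegetation mask against a VegAnn-style binary annotation pixel by pixel. Intersection-over-Union and pixel accuracy are standard segmentation metrics; the function name and signature are illustrative and not part of the VegAnn release.
```python
# Minimal sketch: pixel-wise evaluation of a binary vegetation mask against
# a VegAnn-style ground-truth annotation. Assumed helper, not from VegAnn.
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray) -> dict:
    """pred, truth: boolean arrays of equal shape; True = vegetation pixel."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0  # two empty masks agree perfectly
    acc = (pred == truth).mean()           # pixel-wise accuracy
    return {"iou": float(iou), "accuracy": float(acc)}
```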
Publisher: American Association for the Advancement of Science (AAAS)
Date: 2020
Abstract: The detection of wheat heads in plant images is an important task for estimating pertinent wheat traits, including head population density and head characteristics such as health, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery based on machine learning algorithms. However, these methods have generally been calibrated and validated on limited datasets. High variability in observational conditions, genotypic differences, development stages, and head orientation makes wheat head detection a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset of wheat images, called the Global Wheat Head Detection (GWHD) dataset. It contains 4700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world at different growth stages with a wide range of genotypes. Guidelines for image acquisition, associating minimum metadata to respect FAIR principles, and consistent head labelling methods are proposed for developing new head detection datasets. The GWHD dataset is publicly available and is aimed at developing and benchmarking methods for wheat head detection.
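Head population density, named above as a target trait, can be derived directly from detector output. The sketch below assumes GWHD-style pixel bounding boxes and an externally known ground sampling distance (metres per pixel); the parameter names and the gsd input are illustrative assumptions, not part of the dataset's annotation schema.
```python
# Minimal sketch: wheat head traits from detector output on a GWHD-style
# image. Boxes are (x_min, y_min, x_max, y_max) in pixels; gsd is the
# ground sampling distance in metres per pixel (assumed external input).
def head_stats(boxes: list[tuple[float, float, float, float]],
               image_shape: tuple[int, int],
               gsd: float) -> dict:
    """Return head density (heads/m^2) and mean head area (cm^2)."""
    h, w = image_shape
    ground_area_m2 = (h * gsd) * (w * gsd)      # imaged ground footprint
    density = len(boxes) / ground_area_m2
    areas_cm2 = [(x2 - x1) * (y2 - y1) * (gsd * 100) ** 2
                 for x1, y1, x2, y2 in boxes]   # px^2 -> cm^2
    mean_area = sum(areas_cm2) / len(areas_cm2) if areas_cm2 else 0.0
    return {"heads_per_m2": density, "mean_head_area_cm2": mean_area}
```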
Publisher: Elsevier BV
Date: 08-2019
Publisher: Frontiers Media SA
Date: 23-10-2018
Publisher: American Association for the Advancement of Science (AAAS)
Date: 2021
Abstract: The Global Wheat Head Detection (GWHD) dataset was created in 2020 and has assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 has successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement have been identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabelled, and complemented by adding 1722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.
Publisher: Elsevier BV
Date: 03-2018
No related grants have been discovered for Wei Guo.