ORCID Profile
0000-0002-5206-3842
Current Organisation
University of Western Australia
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Artificial Intelligence and Image Processing | Computer Vision | Pattern Recognition and Data Mining | Photogrammetry and Remote Sensing | Adaptive Agents and Intelligent Robotics | Astronomical and Space Sciences | Civil Geotechnical Engineering | Fisheries Sciences | Astronomical and Space Sciences not elsewhere classified | Image Processing | Artificial Intelligence and Image Processing not elsewhere classified | Stochastic Analysis and Modelling | Aquatic Ecosystem Studies and Stock Assessment | Microelectromechanical Systems (MEMS) | Maritime Engineering | Aquaculture | Ocean Engineering
Road Public Transport | Computer Software and Services not elsewhere classified | Expanding Knowledge in Technology | Information processing services | Aquaculture Tuna | Wild Caught Tuna | Plant Production and Plant Primary Products not elsewhere classified | Wild Caught Fin Fish (excl. Tuna) | Aquaculture Fin Fish (excl. Tuna) | Commercial Construction Planning | National Security | Crime Prevention | Industrial Energy Conservation and Efficiency | Information Processing Services (incl. Data Entry and Capture) | Fisheries - Aquaculture not elsewhere classified | Application Tools and System Utilities | Defence not elsewhere classified | Evaluation of Health Outcomes | Disability and Functional Capacity | Health Related to Ageing | Expanding Knowledge in the Information and Computing Sciences | Ecosystem Assessment and Management of Marine Environments
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2018
Publisher: Springer Science and Business Media LLC
Date: 26-08-2021
Publisher: IEEE
Date: 06-2015
Publisher: Springer Berlin Heidelberg
Date: 2004
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2016
Publisher: IEEE
Date: 06-2013
Publisher: Springer Science and Business Media LLC
Date: 09-04-2021
Publisher: IEEE
Date: 09-2008
Publisher: IEEE
Date: 2021
Publisher: Elsevier BV
Date: 03-2013
Publisher: IEEE
Date: 06-2015
Publisher: Elsevier BV
Date: 04-0309
Publisher: Marine Technology Society
Date: 2016
DOI: 10.4031/MTSJ.50.1.1
Abstract: Underwater video systems are widely used for counting and measuring fish in aquaculture, fisheries, and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout-to-tail fork length measurements are performed in video sequences, most commonly using a point-and-click process by a human operator. Current research aims to automate the identification, measurement, and counting of fish in order to improve the efficiency of population counts or biomass estimates. A fully automated process requires the detection and isolation of candidates for measurement, followed by the snout-to-tail fork length measurement, species classification, as well as the counting and tracking of fish. This paper reviews the algorithms used for the detection, identification, measurement, counting, and tracking of fish in underwater video sequences. The paper analyzes the most commonly used approaches, leading to an evaluation of the techniques most likely to be a comprehensive solution to the complete process of candidate detection, species identification, length measurement, and population counts for biomass estimation.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Elsevier BV
Date: 2022
DOI: 10.1016/J.COMPBIOMED.2021.105087
Abstract: Accessibility of labelled datasets is often a key limitation for the application of Machine Learning in clinical research. A novel semi-automated weak-labelling approach based on unsupervised clustering was developed to classify a large dataset of microneurography signals and subsequently used to train a Neural Network to reproduce the labelling process. Clusters of microneurography signals were created with k-means and then labelled in terms of the validity of the signals contained in each cluster. Only purely positive or negative clusters were labelled, whereas clusters with mixed content were passed on to the next iteration of the algorithm to undergo another cycle of unsupervised clustering and labelling of the clusters. After several iterations of this process, only pure labelled clusters remained which were used to train a Deep Neural Network. Overall, 334,548 individual signal peaks from the integrated data were extracted and more than 99.99% of the data was labelled in six iterations of this novel application of weak labelling with the help of a domain expert. A Deep Neural Network trained based on this dataset achieved consistent accuracies above 95%. Data extraction and the novel iterative approach of labelling unsupervised clusters enabled creation of a large, labelled dataset combining unsupervised learning and expert ratings of signal peaks on a cluster basis in a time-effective manner. Further research is needed to validate the methodology and employ it on other types of physiologic data for which it may enable efficient generation of large labelled datasets.
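The iterative cluster-then-label idea in the abstract above can be sketched roughly as follows. Everything here is invented for illustration: 2-D synthetic features stand in for signal excerpts, and a ground-truth oracle stands in for the expert rater who judges whole clusters; this is not the paper's implementation.

```python
# Hedged sketch of iterative weak labelling: k-means clusters are "shown to
# an expert" (here an oracle); pure clusters are labelled, mixed clusters are
# re-clustered in the next round.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign

def iterative_weak_label(X, is_valid, k=4, max_rounds=10):
    labels = np.full(len(X), -1)              # -1 = still unlabelled
    remaining = np.arange(len(X))
    for r in range(max_rounds):
        if len(remaining) <= k:
            break
        assign = kmeans(X[remaining], k, seed=r)
        next_remaining = []
        for j in range(k):
            idx = remaining[assign == j]
            if len(idx) == 0:
                continue
            frac = is_valid[idx].mean()       # oracle judges the whole cluster
            if frac == 1.0:
                labels[idx] = 1               # purely valid cluster
            elif frac == 0.0:
                labels[idx] = 0               # purely invalid cluster
            else:
                next_remaining.append(idx)    # mixed: re-cluster next round
        if not next_remaining:
            break
        remaining = np.concatenate(next_remaining)
    return labels

# Synthetic demo: two well-separated groups stand in for signal features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
truth = np.array([1] * 100 + [0] * 100)
labels = iterative_weak_label(X, truth.astype(bool), k=4)
```

By construction a point only receives a label when its entire cluster is pure, which mirrors the purifying property the abstract describes.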
Publisher: IEEE
Date: 2004
Publisher: IEEE
Date: 12-2012
Publisher: IEEE
Date: 11-2021
Publisher: Springer Science and Business Media LLC
Date: 03-2020
Publisher: IEEE
Date: 12-2012
Publisher: IEEE
Date: 2022
Publisher: IEEE
Date: 2015
DOI: 10.1109/WACV.2015.34
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2013
Publisher: Institution of Engineering and Technology (IET)
Date: 12-2017
Publisher: IEEE
Date: 2013
Publisher: Oxford University Press (OUP)
Date: 22-03-2015
DOI: 10.1093/EE/NVV024
Abstract: Frost is known to directly affect flowering wheat plants (Triticum aestivum L.) and lead to reduced grain yield. Additionally, it may increase wheat susceptibility to economically important pests, such as aphids (Hemiptera: Aphididae). Wheat plants at flowering stage were exposed to one of the three temperature treatments: ambient (11-12°C), 0°C, and -3°C for 60 min. Preference (3-choice) and performance (no-choice) bioassays with aphids (Rhopalosiphum padi L.) were conducted 1, 3, 6, and 12 d after temperature treatments to assess effects of temperature-induced stress over time. As an initial feasibility study of using remote sensing technologies to detect frost-induced stress in flowering wheat plants, hyperspectral imaging data were acquired from wheat plants used in preference bioassays. Element analysis of wheat plants was included to determine the effect of temperature-induced stress on the nutritional composition of flowering wheat plants. The results from this study support the following cause-effect scenario: a 60-min exposure to low temperatures caused a significant decrease in potassium and copper content of wheat plants 6 d after temperature exposure, and it coincided with a marked increase in preference by aphids of wheat plants. The preference exhibited by aphids correlated positively with performance of aphids, so the preference-performance hypothesis was confirmed and possibly driven by potassium and copper content of wheat plants. In addition, we demonstrated that hyperspectral imaging data can be used to detect frost-induced susceptibility to aphid infestation in flowering wheat plants. These findings justify further research into airborne remote sensing of frost-induced stress and the possible secondary effects on crop susceptibility to arthropod pests.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2014
Publisher: Elsevier BV
Date: 03-2021
Publisher: Wiley
Date: 05-11-2020
DOI: 10.1111/WRE.12450
Publisher: SCITEPRESS - Science and Technology Publications
Date: 2021
Publisher: IEEE
Date: 11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 30-11-2022
Publisher: IEEE
Date: 10-2021
Publisher: The Royal Society
Date: 23-03-2022
Abstract: The broad autism phenotype commonly refers to sub-clinical levels of autistic-like behaviour and cognition presented in biological relatives of autistic people. In a recent study, we reported findings suggesting that the broad autism phenotype may also be expressed in facial morphology, specifically increased facial masculinity. Increased facial masculinity has been reported among autistic children, as well as their non-autistic siblings. The present study builds on our previous findings by investigating the presence of increased facial masculinity among non-autistic parents of autistic children. Using a previously established method, a ‘facial masculinity score’ and several facial distances were calculated for each three-dimensional facial image of 192 parents of autistic children (58 males, 134 females) and 163 age-matched parents of non-autistic children (50 males, 113 females). While controlling for facial area and age, significantly higher masculinity scores and larger (more masculine) facial distances were observed in parents of autistic children relative to the comparison group, with effect sizes ranging from small to medium (0.16 ≤ d ≤ 0.41), regardless of sex. These findings add to an accumulating evidence base that the broad autism phenotype is expressed in physical characteristics and suggest that both maternal and paternal pathways are implicated in masculinized facial morphology.
Publisher: Springer International Publishing
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Wiley
Date: 16-09-2021
DOI: 10.1002/AUR.2612
Abstract: Greater facial asymmetry has been consistently found in children with autism spectrum disorder (ASD) relative to children without ASD. There is substantial evidence that both facial structure and the recurrence of ASD diagnosis are highly heritable within a nuclear family. Furthermore, sub‐clinical levels of autistic‐like behavioural characteristics have also been reported in first‐degree relatives of individuals with ASD, commonly known as the ‘broad autism phenotype’. Therefore, the aim of the current study was to examine whether a broad autism phenotype expresses as facial asymmetry among 192 biological parents of autistic individuals (134 mothers) compared to those of 163 age‐matched adults without a family history of ASD (113 females). Using dense surface‐modelling techniques on three-dimensional facial images, we found evidence for greater facial asymmetry in parents of autistic individuals compared to age‐matched adults in the comparison group (p = 0.046, d = 0.21 [0.002, 0.42]). Considering previous findings and the current results, we conclude that facial asymmetry expressed in the facial morphology of autistic children may be related to heritability factors. In a previous study, we showed that autistic children presented with greater facial asymmetry than non‐autistic children. In the current study, we examined the amount of facial asymmetry shown on three‐dimensional facial images of 192 parents of autistic children compared to a control group consisting of 163 similarly aged adults with no known history of autism. Although parents did show greater levels of facial asymmetry than those in the control group, this effect is statistically small. We concluded that the facial asymmetry previously found in autistic children may be related to genetic factors.
Publisher: IEEE
Date: 05-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-0007
Publisher: IEEE
Date: 06-2011
Publisher: IEEE
Date: 03-2014
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Springer Berlin Heidelberg
Date: 2007
Publisher: IEEE
Date: 2013
Publisher: Elsevier BV
Date: 07-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2012
Publisher: Springer London
Date: 2012
Publisher: IEEE
Date: 06-2019
Publisher: Springer International Publishing
Date: 2022
Publisher: IEEE
Date: 27-09-2021
Publisher: Springer Science and Business Media LLC
Date: 22-09-2010
Publisher: Springer Science and Business Media LLC
Date: 11-01-2022
Publisher: IEEE
Date: 06-2019
Publisher: Emerald
Date: 06-2005
DOI: 10.1108/02602280510585745
Abstract: In model‐based recognition the 3D models of objects are stored in a model library during an offline phase. During the online recognition phase, a view of the scene is matched with the model library to identify the location and pose of certain library objects in the scene. This paper discusses the processes of 3D modeling and model‐based recognition, along with their potential applications in industry, with a particular emphasis on robot grasp analysis. It also outlines the main challenges in these areas and gives a brief literature review. In order to develop an automatic 3D model‐based object recognition system it is necessary to automate the processes of 3D modeling and recognition. The challenge in automating the 3D modeling process is to develop an automatic correspondence technique. The core of recognition is the representation scheme. Recognition is an online process; therefore, representation and matching must be very fast in order to facilitate real-time recognition. There are numerous applications of 3D modeling in a variety of areas ranging from the entertainment industry to industrial automation, including computer graphics, virtual reality, medical imaging, reverse engineering, and 3D terrain construction. 3D modeling thus constitutes an important part of computer vision (or robot vision).
Publisher: IEEE
Date: 11-2011
Publisher: Elsevier BV
Date: 11-2016
Publisher: IEEE
Date: 06-2023
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/11919476_86
Publisher: Elsevier BV
Date: 04-2017
Publisher: SPIE
Date: 23-05-2013
DOI: 10.1117/12.2020941
Publisher: Springer Science and Business Media LLC
Date: 17-03-2018
DOI: 10.1007/S11517-018-1802-7
Abstract: An understanding of athlete ground reaction forces and moments (GRF/Ms) facilitates the biomechanist's downstream calculation of net joint forces and moments, and associated injury risk. Historically, force platforms used to collect kinetic data are housed within laboratory settings and are not suitable for field-based installation. Given that Newton's Second Law clearly describes the relationship between a body's mass, acceleration, and resultant force, is it possible that marker-based motion capture can represent these parameters sufficiently enough to estimate GRF/Ms, and thereby minimize our reliance on surface embedded force platforms? Specifically, can we successfully use partial least squares (PLS) regression to learn the relationship between motion capture and GRF/Ms data? In total, we analyzed 11 PLS methods and achieved average correlation coefficients of 0.9804 for GRFs and 0.9143 for GRMs. Our results demonstrate the feasibility of predicting accurate GRF/Ms from raw motion capture trajectories in real-time, overcoming what has been a significant barrier to non-invasive collection of such data. In applied biomechanics research, this outcome has the potential to revolutionize athlete performance enhancement and injury prevention. Graphical Abstract Using data science to model high-fidelity motion and force plate data frees biomechanists from the laboratory.
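The abstract above describes learning a partial least squares (PLS) mapping from motion-capture inputs to ground reaction forces. As a hedged, numpy-only illustration of the technique (not the paper's 11 PLS variants), here is single-response PLS1 via the standard NIPALS recursion, fit on invented synthetic data standing in for motion features and one force channel:

```python
# Minimal PLS1 (NIPALS) regression sketch; data and dimensions are synthetic.
import numpy as np

def pls1_fit(X, y, n_components):
    Xc, yc = X - X.mean(0), y - y.mean()
    Xr, yr = Xc.copy(), yc.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                       # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                          # scores
        tt = t @ t
        p = Xr.T @ t / tt                   # X loadings
        q = (yr @ t) / tt                   # y loading
        Xr -= np.outer(t, p)                # deflate X
        yr -= q * t                         # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)     # regression coefficients
    return B, X.mean(0), y.mean()

def pls1_predict(X, model):
    B, xm, ym = model
    return (X - xm) @ B + ym

# Synthetic "motion capture" features predicting one "force" channel.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.05 * rng.normal(size=200)
model = pls1_fit(X, y, n_components=5)
yhat = pls1_predict(X, model)
r = np.corrcoef(y, yhat)[0, 1]
```

The correlation coefficient `r` on this synthetic task plays the same role as the paper's reported GRF correlation metric.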
Publisher: IEEE
Date: 07-2017
Publisher: Springer Science and Business Media LLC
Date: 16-01-2020
DOI: 10.1038/S41398-020-0695-Z
Abstract: Autism spectrum disorder is a heritable neurodevelopmental condition diagnosed based on social and communication differences. There is strong evidence that cognitive and behavioural changes associated with clinical autism aggregate with biological relatives but in milder form, commonly referred to as the ‘broad autism phenotype’. The present study builds on our previous findings of increased facial masculinity in autistic children (Sci. Rep., 7:9348, 2017) by examining whether facial masculinity presents as a broad autism phenotype in 55 non-autistic siblings (25 girls) of autistic children. Using 3D facial photogrammetry and age-matched control groups of children without a family history of ASD, we found that facial features of male siblings were more masculine than those of male controls (n = 69, p < 0.001, d = 0.81 [0.36, 1.26]). Facial features of female siblings were also more masculine than the features of female controls (n = 60, p = 0.005, d = 0.63 [0.16, 1.10]). Overall, we demonstrated for males and females that facial masculinity in non-autistic siblings is increased compared to same-sex comparison groups. These data provide the first evidence for a broad autism phenotype expressed in a physical characteristic, which has wider implications for our understanding of the interplay between physical and cognitive development in humans.
Publisher: IEEE
Date: 18-07-2022
Publisher: IEEE
Date: 04-2020
Publisher: Springer Science and Business Media LLC
Date: 25-09-2008
Publisher: Elsevier BV
Date: 02-2017
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 06-2018
Publisher: Elsevier BV
Date: 05-2011
Publisher: IEEE
Date: 21-08-2022
Publisher: IEEE
Date: 10-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2019
Publisher: Springer Science and Business Media LLC
Date: 18-01-2011
Publisher: IEEE
Date: 06-2016
Publisher: Public Library of Science (PLoS)
Date: 12-06-2014
Publisher: Oxford University Press (OUP)
Date: 27-02-2020
Abstract: It is interesting to develop effective fish sampling techniques using underwater videos and image processing to automatically estimate and consequently monitor the fish biomass and assemblage in water bodies. Such approaches should be robust against substantial variations in scenes due to poor luminosity, orientation of fish, seabed structures, movement of aquatic plants in the background and image diversity in the shape and texture among fish of different species. Keeping this challenge in mind, we propose a unified approach to detect freely moving fish in unconstrained underwater environments using a Region-Based Convolutional Neural Network, a state-of-the-art machine learning technique used to solve generic object detection and localization problems. To train the neural network, we employ a novel approach to utilize motion information of fish in videos via background subtraction and optical flow, and subsequently combine the outcomes with the raw image to generate fish-dependent candidate regions. We use two benchmark datasets extracted from a large Fish4Knowledge underwater video repository, the Complex Scenes dataset and the LifeCLEF 2015 fish dataset, to validate the effectiveness of our hybrid approach. We achieve a detection accuracy (F-Score) of 87.44% and 80.02% respectively on these datasets, which advocates the utilization of our approach for the fish detection task.
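The motion cues named in the abstract above (background subtraction combined with frame-to-frame change) can be illustrated with a toy numpy sketch. The synthetic "video", threshold, and median background model are invented for illustration; the paper combines such masks with optical flow and feeds candidate regions to an R-CNN.

```python
# Toy motion-cue mask: median-background subtraction OR'ed with frame
# differencing, on a synthetic clip containing one moving bright patch.
import numpy as np

def candidate_mask(frames, thresh=20):
    frames = frames.astype(np.int32)
    background = np.median(frames, axis=0)       # static background model
    bg_sub = np.abs(frames[-1] - background) > thresh
    frame_diff = np.abs(frames[-1] - frames[-2]) > thresh
    return bg_sub | frame_diff                   # union of the two motion cues

# Synthetic video: flat backdrop with a bright "fish" patch moving rightwards.
frames = np.full((5, 32, 32), 50, dtype=np.uint8)
for i in range(5):
    frames[i, 10:14, 2 + 5 * i:6 + 5 * i] = 200
mask = candidate_mask(frames)
```

The resulting mask highlights the object's current and previous positions, which is exactly the kind of region proposal cue the detector would consume.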
Publisher: World Scientific Pub Co Pte Lt
Date: 12-2005
Publisher: MDPI AG
Date: 20-11-2020
DOI: 10.3390/S20226647
Abstract: Convolutional neural networks have recently been used for multi-focus image fusion. However, some existing methods have resorted to adding Gaussian blur to focused images, to simulate defocus, thereby generating data (with ground-truth) for supervised learning. Moreover, they classify pixels as ‘focused’ or ‘defocused’, and use the classified results to construct the fusion weight maps. This then necessitates a series of post-processing steps. In this paper, we present an end-to-end learning approach for directly predicting the fully focused output image from multi-focus input image pairs. The suggested approach uses a CNN architecture trained to perform fusion, without the need for ground truth fused images. The CNN exploits the image structural similarity (SSIM) to calculate the loss, a metric that is widely accepted for fused image quality evaluation. What is more, we also use the standard deviation of a local window of the image to automatically estimate the importance of the source images in the final fused image when designing the loss function. Our network can accept images of variable sizes and hence, we are able to utilize real benchmark datasets, instead of simulated ones, to train our network. The model is a feed-forward, fully convolutional neural network that can process images of variable sizes during test time. Extensive evaluation on benchmark datasets shows that our method outperforms, or is comparable with, existing state-of-the-art techniques on both objective and subjective benchmarks.
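The SSIM metric at the heart of the loss described above has a simple closed form. As a hedged illustration, here is a single-window ("global") SSIM in numpy; real implementations, including presumably the paper's, compute it over local windows, so this is only an approximation of the ingredient, not the paper's loss:

```python
# Simplified global SSIM between two images (default constants for 8-bit data).
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
same = ssim_global(img, img)                           # identical images
noisy = ssim_global(img, img + rng.normal(0, 25, (64, 64)))
```

A fusion network trained with `1 - SSIM` as the loss is pushed toward outputs structurally similar to the sources, which is what lets the method dispense with ground-truth fused images.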
Publisher: IEEE
Date: 09-2013
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: MDPI AG
Date: 20-07-2019
DOI: 10.3390/S19143199
Abstract: Forged documents and counterfeit currency can be better detected with multispectral imaging in multiple color channels instead of the usual red, green and blue. However, multispectral cameras/scanners are expensive. We propose the construction of a low cost scanner designed to capture multispectral images of documents. A standard sheet-feed scanner was modified by disconnecting its internal light source and connecting an external multispectral light source comprising narrow-band light-emitting diodes (LEDs). A document was scanned by illuminating the scanner light guide successively with different LEDs and capturing a scan of the document. The system costs less than a hundred dollars and is portable. It can potentially be used for applications in verification of questioned documents, checks, receipts and bank notes.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 09-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 2004
Publisher: Elsevier BV
Date: 04-2011
Publisher: IEEE
Date: 05-2016
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 03-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: IEEE
Date: 11-2010
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: IEEE
Date: 23-10-2022
Publisher: Springer Science and Business Media LLC
Date: 08-02-2015
Publisher: IEEE
Date: 10-2016
Publisher: Elsevier BV
Date: 04-2022
DOI: 10.1016/J.COMPBIOMED.2022.105294
Abstract: Machine Learning is transforming data processing in medical research and clinical practice. Missing data labels are a common limitation to training Machine Learning models. To overcome missing labels in a large dataset of microneurography recordings, a novel autoencoder-based, semi-supervised, iterative group-labelling methodology was developed. Autoencoders were systematically optimised to extract features from a dataset of 478,621 signal excerpts from human microneurography recordings. Selected features were clustered with k-means, and randomly selected representations of the corresponding original signals were labelled as valid or non-valid muscle sympathetic nerve activity (MSNA) bursts in an iterative, purifying procedure by an expert rater. A deep neural network was trained based on the fully labelled dataset. Three autoencoders, two based on fully connected neural networks and one based on a convolutional neural network, were chosen for feature learning. Iterative clustering followed by labelling of complete clusters resulted in all 478,621 signal peak excerpts being labelled as valid or non-valid within 13 iterations. Neural networks trained with the labelled dataset achieved, in a cross-validation step with a testing dataset not included in training, on average 93.13% accuracy and 91% area under the receiver operating characteristic curve (AUC ROC). The described labelling procedure enabled efficient labelling of a large dataset of physiological signals based on expert ratings. The procedure based on autoencoders may be broadly applicable to a wide range of unlabelled datasets that require expert input and may be utilised for Machine Learning applications where weak labels are available.
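The feature-learning step described above uses trained autoencoders. As a hedged stand-in, note that a *linear* autoencoder has a closed-form optimum spanning the principal subspace, so truncated SVD can sketch the encode/decode round trip on synthetic data; the paper's autoencoders are nonlinear, and everything below is invented for illustration:

```python
# Linear-autoencoder sketch via truncated SVD: encode to k dims, decode with
# the transposed weights, and check the reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "signal excerpts": 300 samples near a 3-D subspace of R^20.
latent = rng.normal(size=(300, 3))
mix = rng.normal(size=(3, 20))
X = latent @ mix + 0.01 * rng.normal(size=(300, 20))

Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
encode = Vt[:k].T                      # encoder weights (20 -> 3)
codes = Xc @ encode                    # low-dimensional features
recon = codes @ encode.T + X.mean(0)   # decoder is the transpose
err = np.mean((X - recon) ** 2)
```

In the paper's pipeline, features like `codes` would then be clustered with k-means and the clusters rated by the expert.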
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Springer Science and Business Media LLC
Date: 20-09-2009
Publisher: Elsevier BV
Date: 06-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Springer International Publishing
Date: 2015
Publisher: The Royal Society
Date: 07-10-2015
Abstract: Prenatal testosterone may have a powerful masculinizing effect on postnatal physical characteristics. However, no study has directly tested this hypothesis. Here, we report a 20-year follow-up study that measured testosterone concentrations from the umbilical cord blood of 97 male and 86 female newborns, and procured three-dimensional facial images on these participants in adulthood (range: 21–24 years). Twenty-three Euclidean and geodesic distances were measured from the facial images and an algorithm identified a set of six distances that most effectively distinguished adult males from females. From these distances, a ‘gender score’ was calculated for each face, indicating the degree of masculinity or femininity. Higher cord testosterone levels were associated with masculinized facial features when males and females were analysed together (n = 183, r = −0.59), as well as when males (n = 86, r = −0.55) and females (n = 97, r = −0.48) were examined separately (p-values < 0.001). The relationships remained significant and substantial after adjusting for potentially confounding variables. Adult circulating testosterone concentrations were available for males but showed no statistically significant relationship with gendered facial morphology (n = 85, r = 0.01, p = 0.93). This study provides the first direct evidence of a link between prenatal testosterone exposure and human facial structure.
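One standard way to turn a handful of facial distances into a scalar 'gender score' like the one described above is a linear discriminant. The sketch below uses Fisher's linear discriminant on synthetic distances; the group means, spreads, and distance count are invented, and the paper's actual scoring algorithm may differ.

```python
# Hypothetical 'gender score' via Fisher's linear discriminant on synthetic
# facial distances (mm); higher projections read as more masculine here.
import numpy as np

def fisher_direction(A, B):
    Sw = np.cov(A.T) + np.cov(B.T)                 # within-class scatter
    w = np.linalg.solve(Sw, A.mean(0) - B.mean(0))
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
males = rng.normal([120, 95, 60], 4, (90, 3))      # 3 synthetic distances
females = rng.normal([112, 90, 55], 4, (90, 3))
w = fisher_direction(males, females)
score_m = males @ w                                # scalar score per face
score_f = females @ w
```

Projecting each face onto one discriminant axis is what makes a single masculinity/femininity number comparable across individuals.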
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: American Academy of Sleep Medicine (AASM)
Date: 15-04-2020
DOI: 10.5664/JCSM.8246
Publisher: Wiley
Date: 31-05-2016
DOI: 10.1002/LOM3.10113
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2006
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2015
Publisher: IEEE
Date: 2008
Publisher: IEEE
Date: 11-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2017
Publisher: IEEE
Date: 05-2022
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: MDPI AG
Date: 10-09-2021
DOI: 10.3390/RS13183621
Abstract: Accurate semantic segmentation of 3D point clouds is a long-standing problem in remote sensing and computer vision. Due to the unstructured nature of point clouds, designing deep neural architectures for point cloud semantic segmentation is often not straightforward. In this work, we circumvent this problem by devising a technique to exploit structured neural architectures for unstructured data. In particular, we employ the popular convolutional neural network (CNN) architectures to perform semantic segmentation of LiDAR data. We propose a projection-based scheme that performs an angle-wise slicing of large 3D point clouds and transforms those slices into 2D grids. Accounting for intensity and reflectivity of the LiDAR input, the 2D grid allows us to construct a pseudo image for the point cloud slice. We enhance this image with low-level image processing techniques of normalization, histogram equalization, and decorrelation stretch to suit our ultimate object of semantic segmentation. A large number of images thus generated are used to induce an encoder-decoder CNN model that learns to compute a segmented 2D projection of the scene, which we finally back project to the 3D point cloud. In addition to a novel method, this article also makes a second major contribution of introducing the enhanced version of our large-scale public PC-Urban outdoor dataset which is captured in a civic setup with an Ouster LiDAR sensor. The updated dataset (PC-Urban_V2) provides nearly 8 billion points including over 100 million points labeled for 25 classes of interest. We provide a thorough evaluation of our technique on PC-Urban_V2 and three other public datasets.
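The angle-wise slicing and 2-D gridding step described above can be illustrated in a few lines of numpy. The slice count, grid resolution, and point-count "intensity" below are invented for illustration; the paper additionally encodes LiDAR intensity/reflectivity and applies image enhancement before the CNN.

```python
# Toy angle-wise slicing: bucket points by azimuth into wedges, then rasterise
# one wedge's (range, height) coordinates onto a 2-D grid (a pseudo image).
import numpy as np

def slice_to_grid(points, n_slices=8, slice_id=0, grid=(32, 32)):
    x, y, z = points.T
    azimuth = np.arctan2(y, x)
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_slices).astype(int) % n_slices
    sel = points[bins == slice_id]
    if len(sel) == 0:
        return np.zeros(grid)
    r = np.linalg.norm(sel[:, :2], axis=1)         # horizontal range
    zz = sel[:, 2]                                 # height
    ri = np.clip((r / r.max() * (grid[0] - 1)).astype(int), 0, grid[0] - 1)
    zi = np.clip(((zz - zz.min()) / (np.ptp(zz) + 1e-9)
                  * (grid[1] - 1)).astype(int), 0, grid[1] - 1)
    img = np.zeros(grid)
    np.add.at(img, (ri, zi), 1.0)                  # point count as "intensity"
    return img

rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3)) * [10, 10, 2]     # synthetic point cloud
img = slice_to_grid(pts)
```

Each slice yields one pseudo image, so a full sweep over slices produces the stack of 2-D inputs a standard encoder-decoder CNN can segment.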
Publisher: IEEE
Date: 12-2019
Publisher: SPIE-Intl Soc Optical Eng
Date: 05-08-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2017
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.CMPB.2021.106588
Abstract: Ambulatory blood pressure monitoring (ABPM) is usually reported in descriptive values such as circadian averages and standard deviations. Making use of the original, individual blood pressure measurements may be advantageous, particularly for research purposes, as this increases the flexibility of the analytical process, enables alternative statistical analyses and provides novel insights. Here we describe the development of a new multistep, hierarchical data extraction algorithm to collect raw data from .pdf reports and text files as part of a large multi-center clinical study. Original reports were saved in a nested file system, from which they were automatically extracted, read and saved into databases with custom-made programs written in Python 3. Data were further processed, cleaned and relevant descriptive statistics such as averages and standard deviations calculated according to a variety of definitions of day- and night-time. Additionally, data control mechanisms for manual review of the data and programmatic auto-detection of extraction errors were implemented as part of the project. The developed algorithm extracted 97% of the data automatically; the missing data consisted mostly of reports that were saved incorrectly or not formatted in the specified way. Manual checks comparing samples of the extracted data to original reports indicated a high level of accuracy of the extracted data; no errors introduced due to flaws in the extraction software were detected in the extracted dataset. The developed multistep, hierarchical data extraction algorithm facilitated collection from different file formats and, paired with database cleaning and data processing steps, led to an effective and accurate assembly of raw ABPM data for further and adjustable analyses. Manual work was minimized while data quality was ensured with standardized, reproducible procedures.
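The raw-extraction step described above can be sketched with a minimal, hypothetical example: pull timestamped systolic/diastolic readings out of report text with a regular expression, then compute one of the circadian averages. The line format, field order, and day-time window here are all invented, not the study's actual report layout.

```python
# Hypothetical extraction of BP readings from report text (format invented).
import re
from statistics import mean

REPORT = """\
08:15  132/84  72
13:40  128/79  68
23:05  117/71  60
03:30  110/66  55
"""

PATTERN = re.compile(r"(\d{2}):(\d{2})\s+(\d{2,3})/(\d{2,3})")

readings = []
for m in PATTERN.finditer(REPORT):
    hour, _, sys_bp, dia_bp = (int(g) for g in m.groups())
    readings.append({"hour": hour, "sys": sys_bp, "dia": dia_bp})

# One of many possible day-time definitions (07:00-22:59), as the abstract
# notes that several definitions were computed.
day_sys = mean(r["sys"] for r in readings if 7 <= r["hour"] < 23)
```

Keeping the individual readings (rather than only the report's printed summaries) is what makes alternative day/night definitions possible downstream.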
Publisher: British Machine Vision Association
Date: 2005
DOI: 10.5244/C.19.33
Publisher: Elsevier BV
Date: 02-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2022
Publisher: MDPI AG
Date: 26-03-2021
DOI: 10.3390/S21072328
Abstract: Conventional methods of uniformly spraying fields to combat weeds require large herbicide inputs at significant cost and with impacts on the environment. More focused weed control methods such as site-specific weed management (SSWM) have become popular but require methods to identify weed locations. Advances in technology allow for automated methods such as drones, as well as ground-based sensors, for detecting and mapping weeds. In this study, the capability of Light Detection and Ranging (LiDAR) sensors was assessed to detect and locate weeds. For this purpose, two trials were performed using artificial targets (representing weeds) at different heights and diameters to understand the detection limits of a LiDAR. The results showed that the detectability of the target at different scanning distances from the LiDAR was directly influenced by the size of the target and its orientation toward the LiDAR. A third trial was performed in a wheat plot where the LiDAR was used to scan different weed species at various heights above the crop canopy, to verify the capacity of the stationary LiDAR to detect weeds in a field situation. The results showed that 100% of weeds in the wheat plot were detected by the LiDAR, based on their height differences with the crop canopy.
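The height-difference detection rule the abstract describes reduces to a simple threshold. The sketch below is a toy numpy illustration with invented heights, canopy estimator, and margin, not the study's calibration:

```python
# Toy height-threshold rule: LiDAR returns sufficiently above the estimated
# crop-canopy top are flagged as weed candidates.
import numpy as np

def weed_candidates(z, margin=0.10):
    canopy = np.percentile(z, 90)      # rough canopy-top estimate
    return z > canopy + margin

rng = np.random.default_rng(0)
crop = rng.normal(0.80, 0.02, 500)     # wheat canopy heights (~0.80 m)
weeds = np.array([1.05, 1.10, 0.98])   # weeds protruding above the canopy
z = np.concatenate([crop, weeds])
mask = weed_candidates(z)
```

The rule only works when weeds actually protrude above the canopy, which matches the study's finding that detection was driven by height differences.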
Publisher: IEEE
Date: 12-2019
Publisher: IEEE
Date: 11-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 11-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2017
Publisher: IEEE
Date: 21-08-2022
Publisher: Oxford University Press (OUP)
Date: 18-07-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: British Machine Vision Association
Date: 2012
DOI: 10.5244/C.26.51
Publisher: IEEE
Date: 11-2013
Publisher: IEEE
Date: 11-2013
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2016
Publisher: MDPI AG
Date: 07-2021
DOI: 10.3390/S21134535
Abstract: The application of artificial intelligence techniques to wearable sensor data may facilitate accurate analysis outside of controlled laboratory settings—the holy grail for gait clinicians and sports scientists looking to bridge the lab-to-field divide. Using these techniques, parameters that are difficult to measure directly in the wild may be predicted using surrogate lower-resolution inputs. One example is the prediction of joint kinematics and kinetics based on inputs from inertial measurement unit (IMU) sensors. Despite increased research, there is a paucity of information examining the most suitable artificial neural network (ANN) for predicting gait kinematics and kinetics from IMUs. This paper compares the performance of three commonly employed ANNs used to predict gait kinematics and kinetics: multilayer perceptron (MLP), long short-term memory (LSTM) and convolutional neural networks (CNN). Overall, high correlations between ground truth and predicted kinematic and kinetic data were found across all investigated ANNs. However, the optimal ANN should be based on the prediction task and the intended use-case application. For the prediction of joint angles, CNNs appear favourable; however, these ANNs do not show an advantage over an MLP network for the prediction of joint moments. If real-time joint angle and joint moment prediction is desirable, an LSTM network should be utilised.
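The "high correlations between ground truth and predicted data" reported above are typically Pearson correlations between the measured and predicted waveforms. A generic standard-library implementation (a textbook formula, not the authors' evaluation code):

```python
from math import sqrt

def pearson_r(truth, pred):
    """Pearson correlation coefficient between a ground-truth and a
    predicted signal, sampled at the same time points."""
    n = len(truth)
    mt, mp = sum(truth) / n, sum(pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(truth, pred))
    vt = sum((t - mt) ** 2 for t in truth)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov / sqrt(vt * vp)
```

A value near 1 means the predicted joint-angle or joint-moment curve tracks the laboratory measurement closely, regardless of which network (MLP, LSTM or CNN) produced it.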
Publisher: IEEE
Date: 07-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2007
Publisher: Elsevier BV
Date: 12-2019
Publisher: IEEE
Date: 02-2007
Publisher: IEEE
Date: 09-10-2022
Publisher: Elsevier BV
Date: 03-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2023
Publisher: Springer International Publishing
Date: 2018
Publisher: IEEE
Date: 03-2014
Publisher: Wiley
Date: 03-2015
DOI: 10.1111/PHOR.12091
Publisher: IEEE
Date: 03-2014
Publisher: Springer Science and Business Media LLC
Date: 06-08-2019
Publisher: Elsevier BV
Date: 12-2016
Publisher: IEEE
Date: 06-2006
Publisher: MDPI AG
Date: 14-05-2023
DOI: 10.3390/S23104746
Abstract: Smart metering systems (SMSs) have been widely used by industrial users and residential customers for purposes such as real-time tracking, outage notification, quality monitoring, load forecasting, etc. However, the consumption data these systems generate can violate customers’ privacy through absence detection or behavior recognition. Homomorphic encryption (HE) has emerged as one of the most promising methods to protect data privacy, based on its security guarantees and computability over encrypted data. However, SMSs have various application scenarios in practice. Consequently, we used the concept of trust boundaries to help design HE solutions for privacy protection under these different scenarios. This paper proposes a privacy-preserving framework as a systematic privacy protection solution for SMSs by implementing HE with trust boundaries for various SMS scenarios. To show the feasibility of the proposed HE framework, we evaluated its performance on two computation metrics, summation and variance, which are often used for billing, usage predictions and other related tasks. The security parameter set was chosen to provide a security level of 128 bits. In terms of performance, the aforementioned metrics could be computed in 58,235 ms for summation and 127,423 ms for variance, given a sample size of 100 households. These results indicate that the proposed HE framework can protect customer privacy under varying trust boundary scenarios in SMSs. The computational overhead is acceptable from a cost–benefit perspective while ensuring data privacy.
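Why summation is enough for variance: with an additively homomorphic scheme the aggregator can accumulate Enc(x) and Enc(x²) without seeing any reading, and the key holder recovers the variance from Var[x] = E[x²] − E[x]². The sketch below uses a toy stand-in for the cryptosystem (plain lists, so the arithmetic is visible); a real deployment would use a Paillier- or lattice-based scheme, and none of these names come from the paper:

```python
class ToyAdditiveHE:
    """Placeholder for an additively homomorphic cryptosystem.
    'Ciphertexts' are plain lists here purely so the aggregation
    logic can be followed; nothing is actually encrypted."""
    def encrypt(self, x):
        return [x]
    def add(self, a, b):          # homomorphic addition of ciphertexts
        return a + b
    def decrypt(self, ct):        # only the key holder may do this
        return sum(ct)

def encrypted_mean_and_variance(he, readings):
    """Aggregator accumulates Enc(x) and Enc(x^2); the key holder
    decrypts two totals and applies Var[x] = E[x^2] - E[x]^2."""
    enc_sum, enc_sq = [], []
    for x in readings:
        enc_sum = he.add(enc_sum, he.encrypt(x))
        enc_sq = he.add(enc_sq, he.encrypt(x * x))
    n = len(readings)
    mean = he.decrypt(enc_sum) / n
    return mean, he.decrypt(enc_sq) / n - mean ** 2
```

The trust-boundary question is then only about who holds the decryption key and where the aggregation runs, not about the arithmetic itself.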
Publisher: IEEE
Date: 2005
Publisher: IEEE
Date: 06-2014
DOI: 10.1109/CVPR.2014.23
Publisher: Springer International Publishing
Date: 2014
Publisher: The Optical Society
Date: 06-2015
DOI: 10.1364/OE.23.015160
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2012
Publisher: IEEE
Date: 08-2013
Publisher: IEEE
Date: 06-2015
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2018
Publisher: IEEE
Date: 09-2009
Publisher: Elsevier BV
Date: 06-2023
Publisher: Elsevier BV
Date: 03-2008
Publisher: The Optical Society
Date: 04-04-2011
DOI: 10.1364/OE.19.007491
Publisher: IEEE
Date: 2011
Publisher: MDPI AG
Date: 04-12-2020
DOI: 10.3390/S20236941
Abstract: Detecting key frames in videos is a common problem in many applications such as video classification, action recognition and video summarization. These tasks can be performed more efficiently using only a handful of key frames rather than the full video. Existing key frame detection approaches are mostly designed for supervised learning and require manual labelling of key frames in a large corpus of training data to train the models. Labelling requires human annotators from different backgrounds to annotate key frames in videos, which is not only expensive and time-consuming but also prone to subjective errors and inconsistencies between the labelers. To overcome these problems, we propose an automatic self-supervised method for detecting key frames in a video. Our method comprises a two-stream ConvNet and a novel automatic annotation architecture able to reliably annotate key frames in a video for self-supervised learning of the ConvNet. The proposed ConvNet learns deep appearance and motion features to detect frames that are unique. The trained network is then able to detect key frames in test videos. Extensive experiments on the UCF101 human action and VSUMM video summarization datasets demonstrate the effectiveness of our proposed method.
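The paper's method is a learned two-stream ConvNet; as a much simpler illustration of the underlying idea of scoring frames by how much they differ from their neighbours, here is a naive appearance-difference baseline (invented for illustration, not the paper's annotator):

```python
def frame_scores(frames):
    """frames: list of equal-length pixel vectors. Score each frame by
    its mean absolute difference from the previous frame (the first
    frame scores 0)."""
    scores = [0.0]
    for prev, cur in zip(frames, frames[1:]):
        scores.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return scores

def key_frames(frames, k=2):
    """Indices of the k highest-scoring (most changed) frames."""
    s = frame_scores(frames)
    return sorted(sorted(range(len(s)), key=lambda i: s[i], reverse=True)[:k])
```

A learned model replaces this raw pixel difference with deep appearance and motion features, which is what makes the detections robust.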
Publisher: IEEE
Date: 2005
Publisher: Springer Science and Business Media LLC
Date: 24-08-2017
DOI: 10.1038/S41598-017-09939-Y
Abstract: Elevated prenatal testosterone exposure has been associated with Autism Spectrum Disorder (ASD) and facial masculinity. By employing three-dimensional (3D) photogrammetry, the current study investigated whether prepubescent boys and girls with ASD present increased facial masculinity compared to typically developing controls. There were two phases to this research. 3D facial images were obtained from a normative sample of 48 boys and 53 girls (3.01–12.44 years old) to determine typical facial masculinity/femininity. The sexually dimorphic features were used to create a continuous ‘gender score’, indexing degree of facial masculinity. Gender scores based on 3D facial images were then compared for 54 autistic and 54 control boys (3.01–12.52 years old), and also for 20 autistic and 60 control girls (4.24–11.78 years). For each sex, increased facial masculinity was observed in the ASD group relative to the control group. Further analyses revealed that increased facial masculinity in the ASD group correlated with more social-communication difficulties based on the Social Affect score derived from the Autism Diagnostic Observation Schedule-Generic (ADOS-G). There was no association between facial masculinity and the derived Restricted and Repetitive Behaviours score. This is the first study demonstrating facial hypermasculinisation in ASD and its relationship to social-communication difficulties in prepubescent children.
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/11744078_27
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Springer Science and Business Media LLC
Date: 04-2011
Publisher: IEEE
Date: 03-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Springer International Publishing
Date: 2014
Publisher: IEEE
Date: 08-2014
Publisher: IEEE
Date: 06-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2023
Publisher: IEEE
Date: 06-2020
Publisher: IEEE
Date: 12-2021
Publisher: IEEE
Date: 11-2008
Publisher: IEEE
Date: 06-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: ACM
Date: 21-02-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2023
Publisher: IEEE
Date: 2004
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2023
Publisher: IEEE
Date: 11-2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 08-2015
Publisher: IEEE
Date: 03-2014
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.IJOM.2021.04.002
Abstract: The purpose of this study was to determine whether the use of custom osteosynthesis plates increased the accuracy of proximal segment position following bilateral sagittal split osteotomy in a cohort of 30 patients, compared to a control group of 25 patients who had surgery with conventional plates. Surgery was performed by a single surgeon between October 2015 and December 2017. Post-surgical cone beam computed tomography scans were segmented using Mimics Innovation Suite (Materialise NV), and surface-based superimposition was achieved using ProPlan CMF (Materialise NV). There was, however, a tendency for the rotational error to be smaller in the custom group than in the control group. The root mean square error in both groups and for all variables fell within clinical parameters of 2 mm and 4°. In conclusion, the results of this study indicate that customized mandibular fixation plates do not necessarily improve the accuracy of the proximal segments post-surgically; however, they may be of benefit in individual patients.
Publisher: Wiley
Date: 21-06-2019
DOI: 10.1002/AUR.2161
Abstract: A key research priority in the study of autism spectrum conditions (ASC) is the discovery of biological markers that may help to identify and elucidate etiologically distinct subgroups. One physical marker that has received increasing research attention is facial structure. Although there remains little consensus in the field, findings relating to greater facial asymmetry (FA) in ASC exhibit some consistency. As there is growing recognition of the importance of replicatory studies in ASC research, the aim of this study was to investigate the replicability of increased FA in autistic children compared to nonautistic peers. Using three-dimensional photogrammetry, this study examined FA in 84 autistic children, 110 typically developing children with no family history of the condition, and 49 full siblings of autistic children. In support of previous literature, significantly greater depth-wise FA was identified in autistic children relative to the two comparison groups. As a further investigation, increased lateral FA in autistic children was found to be associated with greater severity of ASC symptoms on the Autism Diagnostic Observation Schedule, second edition, specifically related to repetitive and restrictive behaviors. These outcomes provide an important and independent replication of increased FA in ASC, as well as a novel contribution to the field. Having confirmed the direction and areas of increased FA in ASC, these findings could motivate a search for potential underlying brain dysmorphogenesis. Autism Res 2019, 12: 1774-1783. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: This study looked at the amount of facial asymmetry (FA) in autistic children compared to typically developing children and children who have siblings with autism. The study found that autistic children, compared to the other two groups, had greater FA, and that increased FA was related to greater severity of autistic symptoms. 
The face and brain grow together during the earliest stages of development, and so findings of facial differences in autism might inform future studies of early brain differences associated with the condition.
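A common way to quantify the facial asymmetry studied here is to mirror each landmark across the midsagittal plane and measure how far it lands from its contralateral counterpart. The sketch below assumes faces are already aligned with the midline at x = 0; the landmark names and pairing are hypothetical, not the study's protocol:

```python
from math import dist

def facial_asymmetry(landmarks, pairs):
    """landmarks: {name: (x, y, z)} with the midsagittal plane at x = 0
    (an alignment assumption). pairs: [(left_name, right_name), ...].
    Score = mean distance between each left landmark and the mirror
    image of its right counterpart; 0 would be perfect symmetry."""
    total = 0.0
    for left, right in pairs:
        rx, ry, rz = landmarks[right]
        total += dist(landmarks[left], (-rx, ry, rz))
    return total / len(pairs)
```

Restricting the distance to the z (depth) or x (lateral) component gives the depth-wise and lateral asymmetry measures the abstract distinguishes.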
Publisher: Oxford University Press (OUP)
Date: 27-02-2017
Abstract: Underwater stereo–video systems are widely used for counting and measuring fish in aquaculture, fisheries, and conservation management. Length measurements are generated from stereo–video recordings by a software operator using a mouse to locate the head and tail of a fish in synchronized pairs of images. These data can be used to compare spatial and temporal changes in the mean length and biomass or frequency distributions of populations of fishes. Since the early 1990s, stereo–video has also been used for measuring the lengths of fish in aquaculture for quota and farm management. However, the costs of the equipment and software, the time and salary costs involved in post-processing imagery manually, and the subsequent delays in the availability of length information inhibit the adoption of this technology. We present a semi-automatic method for capturing stereo–video measurements to estimate the lengths of fish. We compare the time taken to make measurements of the same fish manually from stereo–video imagery to that measured semi-automatically. Using imagery recorded during transfers of Southern Bluefin Tuna (SBT) from tow cages to grow-out cages, we demonstrate that the semi-automatic algorithm developed can obtain fork length measurements with an error of less than 1% of the true length and with at least a sixfold reduction in operator time in comparison to manual measurements. Of the 22 138 SBT recorded we were able to measure 52.6% (11 647) manually and 11.8% (2614) semi-automatically. For seven of the eight cage transfers recorded, there were no statistical differences in the mean length, weight, or length frequency between manual and semi-automatic measurements. When the data were pooled across the eight cage transfers, there was no statistical difference in mean length or weight between the stereo–video-based manual and semi-automated measurements.
Hence, the presented semi-automatic system can be deployed to significantly reduce the cost involved in adoption of stereo–video technology.
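The geometry behind a stereo–video length measurement can be sketched with the textbook rectified-stereo pinhole model: matched head and tail points in the two images are triangulated to 3D, and fork length is the distance between them. This is a generic model, not the system's actual calibration pipeline, and the focal length and baseline below are invented values:

```python
from math import dist

def triangulate(xl, yl, xr, f, baseline):
    """Rectified stereo: pixel coordinates (xl, yl) in the left image,
    xr in the right image on the same row. f is the focal length in
    pixels, baseline the camera separation in metres. Returns the
    point (X, Y, Z) in the left-camera frame."""
    disparity = xl - xr
    Z = f * baseline / disparity
    return (xl * Z / f, yl * Z / f, Z)

def fork_length(head_l, head_r, tail_l, tail_r, f, baseline):
    """3D snout-to-fork distance from matched head/tail image points."""
    head = triangulate(head_l[0], head_l[1], head_r[0], f, baseline)
    tail = triangulate(tail_l[0], tail_l[1], tail_r[0], f, baseline)
    return dist(head, tail)
```

The semi-automatic contribution is in finding the head and tail points; once those correspondences exist, the length itself is this triangulation.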
Publisher: Elsevier BV
Date: 04-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2015
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Elsevier BV
Date: 05-2017
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 11-2015
Publisher: Springer Science and Business Media LLC
Date: 05-07-2019
Publisher: Association for Computing Machinery (ACM)
Date: 16-10-2019
DOI: 10.1145/3355390
Abstract: Video description is the automatic generation of natural language sentences that describe the contents of a given video. It has applications in human–robot interaction, helping the visually impaired and video subtitling. The past few years have seen a surge of research in this area due to the unprecedented success of deep learning in computer vision and natural language processing. Numerous methods, datasets, and evaluation metrics have been proposed in the literature, creating the need for a comprehensive survey to focus research efforts in this flourishing new direction. This article fills the gap by surveying the state-of-the-art approaches with a focus on deep learning models, comparing benchmark datasets in terms of their domains, number of classes, and repository size, and identifying the pros and cons of various evaluation metrics, such as SPICE, CIDEr, ROUGE, BLEU, METEOR, and WMD. Classical video description approaches combined subject, object, and verb detection with template-based language models to generate sentences. However, the release of large datasets revealed that these methods cannot cope with the diversity in unconstrained open-domain videos. Classical approaches were followed by a very short era of statistical methods that were soon replaced with deep learning, the current state of the art in video description. Our survey shows that, despite the fast-paced developments, video description research is still in its infancy for the following reasons: analysis of video description models is challenging, because it is difficult to ascertain the contributions towards accuracy or errors of the visual features and the adopted language model in the final description; existing datasets contain neither adequate visual diversity nor complexity of linguistic structures; and, finally, current evaluation metrics fall short of measuring the agreement between machine-generated descriptions and those of humans.
We conclude our survey by listing promising future research directions.
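The n-gram matching idea underlying several of the surveyed metrics (BLEU in particular) reduces to clipped n-gram precision. A toy unigram version, for illustration only:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the building block of BLEU-1: each
    candidate word counts as a match at most as many times as it
    appears in the reference."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)
```

The survey's criticism that such metrics "fall short of measuring agreement with humans" follows directly from this definition: surface word overlap rewards paraphrases poorly and repetitive captions generously.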
Publisher: Springer Science and Business Media LLC
Date: 04-06-2016
Publisher: Springer Science and Business Media LLC
Date: 09-12-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2018
Publisher: IEEE
Date: 06-2021
Publisher: Wiley
Date: 08-2021
DOI: 10.14814/PHY2.14996
Publisher: IEEE
Date: 04-2018
Publisher: Elsevier BV
Date: 03-2023
Publisher: IEEE
Date: 11-2021
Publisher: Elsevier BV
Date: 11-2020
Publisher: Springer Science and Business Media LLC
Date: 15-04-2015
Publisher: Elsevier BV
Date: 08-2019
DOI: 10.1016/J.JBIOMECH.2019.07.002
Abstract: In sports analytics, an understanding of accurate on-field 3D knee joint moments (KJM) could provide an early warning system for athlete workload exposure and knee injury risk. Traditionally, this analysis has relied on captive laboratory force plates and associated downstream biomechanical modeling, and many researchers have approached the problem of portability by extrapolating models built on linear statistics. An alternative approach would be to capitalize on recent advances in deep learning. In this study, using the pre-trained CaffeNet convolutional neural network (CNN) model, multivariate regression of marker-based motion capture to 3D KJM was compared for three sports-related movement types. The strongest overall mean correlation to source modeling, 0.8895, was achieved over the initial 33% of stance phase for sidestepping. The accuracy of these mean predictions of the three critical KJM associated with anterior cruciate ligament (ACL) injury demonstrates the feasibility of on-field knee injury assessment using deep learning in lieu of laboratory-embedded force plates. This multidisciplinary research approach significantly advances machine representation of real-world physical models, with practical application for both community and professional level athletes.
Publisher: Elsevier BV
Date: 06-2023
Publisher: Elsevier BV
Date: 05-2020
Publisher: IEEE
Date: 18-07-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 11-2015
Publisher: IEEE
Date: 08-2014
Publisher: Springer International Publishing
Date: 2016
Publisher: Springer Berlin Heidelberg
Date: 2007
Publisher: IEEE
Date: 11-2013
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/11919476_10
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: British Machine Vision Association
Date: 2013
DOI: 10.5244/C.27.57
Publisher: The Optical Society
Date: 24-04-2012
DOI: 10.1364/OE.20.010658
Publisher: Oxford University Press (OUP)
Date: 04-07-2017
Abstract: There is a need for automatic systems that can reliably detect, track and classify fish and other marine species in underwater videos without human intervention. Conventional computer vision techniques do not perform well in underwater conditions, where the background is complex and the shape and textural features of fish are subtle. Data-driven classification models like neural networks require a huge amount of labelled data; otherwise they tend to over-fit to the training data and fail on unseen test data. We present a state-of-the-art computer vision method for fine-grained fish species classification based on deep learning techniques. A cross-layer pooling algorithm using a pre-trained Convolutional Neural Network as a generalized feature detector is proposed, thus avoiding the need for a large amount of training data. Classification on test data is performed by an SVM on the features computed through the proposed method, resulting in a classification accuracy of 94.3% for fish species from typical underwater video imagery captured off the coast of Western Australia. This research advocates that the development of automated classification systems which can identify fish from underwater video imagery is feasible and a cost-effective alternative to manual identification by humans.
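The core of cross-layer pooling is to pool local features from one convolutional layer weighted by the activations of a deeper layer, producing one pooled value per channel pair. A schematic with feature maps flattened over spatial positions (a sketch of the idea, not the paper's implementation):

```python
def cross_layer_pool(lower, upper):
    """lower: C1 x P list of local features flattened over P spatial
    positions; upper: C2 x P activations from a deeper layer acting
    as part-level weights. Returns the C1*C2-dimensional descriptor:
    for each channel pair (c1, c2), the sum over positions of
    lower[c1][p] * upper[c2][p]."""
    return [sum(l[p] * u[p] for p in range(len(l)))
            for l in lower for u in upper]
```

Because the CNN is pre-trained and only the SVM on top of this descriptor is fitted, the pipeline needs far less labelled fish imagery than training a network end to end.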
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2019
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: IEEE
Date: 08-2014
Publisher: Springer International Publishing
Date: 2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Wiley
Date: 18-04-2023
DOI: 10.1111/ANAE.16024
Abstract: Myocardial injury due to ischaemia within 30 days of non‐cardiac surgery is prognostically relevant. We aimed to determine the discrimination, calibration, accuracy, sensitivity and specificity of single‐layer and multiple‐layer neural networks for myocardial injury and death within 30 postoperative days. We analysed data from 24,589 participants in the Vascular Events in Non‐cardiac Surgery Patients Cohort Evaluation study. Validation was performed on a randomly selected subset of the study population. Discrimination for myocardial injury by single‐layer vs. multiple‐layer models generated areas (95%CI) under the receiver operating characteristic curve of: 0.70 (0.69–0.72) vs. 0.71 (0.70–0.73) with variables available before surgical referral, p < 0.001; 0.73 (0.72–0.75) vs. 0.75 (0.74–0.76) with additional variables available on admission, but before surgery, p < 0.001; and 0.76 (0.75–0.77) vs. 0.77 (0.76–0.78) with the addition of subsequent variables, p < 0.001. Discrimination for death by single‐layer vs. multiple‐layer models generated areas (95%CI) under the receiver operating characteristic curve of: 0.71 (0.66–0.76) vs. 0.74 (0.71–0.77) with variables available before surgical referral, p = 0.04; 0.78 (0.73–0.82) vs. 0.83 (0.79–0.86) with additional variables available on admission but before surgery, p = 0.01; and 0.87 (0.83–0.89) vs. 0.87 (0.85–0.90) with the addition of subsequent variables, p = 0.52. The accuracy of the multiple‐layer model for myocardial injury and death with all variables was 70% and 89%, respectively.
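The discrimination figures above are areas under the ROC curve. The AUC can be computed directly via the rank-sum identity: the probability that a randomly chosen positive case outscores a randomly chosen negative case, with ties counting half. A generic sketch of the metric, not the study's code:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity.
    labels: 0/1 outcomes; scores: model risk predictions."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 is chance-level discrimination, which is why the gap between, say, 0.70 and 0.71 is modest even when statistically significant at this sample size.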
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Date: 26-06-2023
DOI: 10.1609/AAAI.V37I12.26733
Abstract: Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification. However, it is still largely unknown if the nature of the representations induced by the two learning paradigms is similar. We investigate this under the lens of adversarial robustness. Our analysis of the problem reveals that CSL has intrinsically higher sensitivity to perturbations over supervised learning. We identify the uniform distribution of data representation over a unit hypersphere in the CSL representation space as the key contributor to this phenomenon. We establish that this is a result of the presence of false negative pairs in the training process, which increases model sensitivity to input perturbations. Our finding is supported by extensive experiments for image and video classification using adversarial perturbations and other input corruptions. We devise a strategy to detect and remove false negative pairs that is simple, yet effective in improving model robustness with CSL training. We close up to 68% of the robustness gap between CSL and its supervised counterpart. Finally, we contribute to adversarial learning by incorporating our method in CSL. We demonstrate an average gain of about 5% over two different state-of-the-art methods in this domain.
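One simple way to realise the false-negative removal described above is to treat any "negative" whose embedding is suspiciously similar to the anchor as a likely same-class sample and drop it from the contrastive denominator. The threshold rule below is an illustrative stand-in for the paper's detection strategy, with an invented cut-off:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def filter_false_negatives(anchor, negatives, tau=0.9):
    """Keep only negatives whose similarity to the anchor is below tau;
    near-duplicates of the anchor are presumed false negatives
    (tau is an assumed threshold, not a value from the paper)."""
    return [n for n in negatives if cosine(anchor, n) < tau]
```

Removing such pairs stops the loss from pushing semantically identical samples apart, which is the mechanism the paper links to reduced sensitivity to input perturbations.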
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Emerald
Date: 06-2004
DOI: 10.1108/02602280410525995
Abstract: In this paper, we review the process of “3D modeling” and “model‐based recognition” along with their potential industrial applications. We put a particular emphasis on the case scenario of robot grasp analysis for which 3D model‐based object recognition seems to be a more palpable choice compared with the conventional tactile sensors solutions. We also put a particular emphasis on the main challenges in the areas of 3D modeling and model‐based recognition and give a brief literature review of the latest research that was carried out to respond to these challenges.
Publisher: IEEE
Date: 2004
Publisher: Springer Science and Business Media LLC
Date: 2006
Start Date: 2016
End Date: 2018
Funder: Australian Research Council
Start Date: 2016
End Date: 2016
Funder: Australian Research Council
Start Date: 2011
End Date: 12-2015
Amount: $724,000.00
Funder: Australian Research Council
Start Date: 06-2010
End Date: 12-2015
Amount: $390,000.00
Funder: Australian Research Council
Start Date: 06-2016
End Date: 06-2016
Amount: $250,000.00
Funder: Australian Research Council
Start Date: 07-2022
End Date: 06-2025
Amount: $303,161.00
Funder: Australian Research Council
Start Date: 2016
End Date: 10-2019
Amount: $293,000.00
Funder: Australian Research Council
Start Date: 07-2021
End Date: 06-2026
Amount: $5,000,000.00
Funder: Australian Research Council
Start Date: 01-2008
End Date: 06-2011
Amount: $311,298.00
Funder: Australian Research Council
Start Date: 03-2022
End Date: 03-2026
Amount: $1,126,000.00
Funder: Australian Research Council
Start Date: 05-2012
End Date: 02-2016
Amount: $436,000.00
Funder: Australian Research Council
Start Date: 04-2016
End Date: 12-2017
Amount: $250,000.00
Funder: Australian Research Council
Start Date: 2019
End Date: 12-2023
Amount: $426,000.00
Funder: Australian Research Council
Start Date: 03-2019
End Date: 03-2024
Amount: $5,000,000.00
Funder: Australian Research Council