ORCID Profile
0000-0002-9132-3571
Current Organisation
University of Luxembourg
Publisher: Copernicus GmbH
Date: 25-02-2022
DOI: 10.5194/ISPRS-ARCHIVES-XLVI-2-W1-2022-401-2022
Abstract: Most deep learning (DL) methods that are not end-to-end use several multi-scale and multi-type hand-crafted features that make the network complex, more computationally intensive and vulnerable to overfitting. Furthermore, reliance on empirically-based feature dimensionality reduction may lead to misclassification. In contrast, efficient feature management can reduce storage and computational complexity, build better classifiers, and improve overall performance. Principal Component Analysis (PCA) is a well-known dimension reduction technique that has been used for feature extraction. This paper presents a two-step PCA-based feature extraction algorithm that employs a variant of feature-based PointNet (Qi et al., 2017a) for point cloud classification. It extends the PointNet framework for use on large-scale aerial LiDAR data, and contributes by (i) developing a new feature extraction algorithm, (ii) exploring the impact of dimensionality reduction in feature extraction, and (iii) introducing a non-end-to-end PointNet variant for per-point classification in point clouds. This is demonstrated on aerial laser scanning (ALS) point clouds. The algorithm successfully reduces the dimension of the feature space without sacrificing performance, as benchmarked against the original PointNet algorithm. When tested on the well-known Vaihingen data set, the proposed algorithm achieves an Overall Accuracy (OA) of 74.64% using 9 input vectors and 14 shape features, whereas with the same 9 input vectors and only 5 PCs (principal components built from the 14 shape features) it achieves a higher OA of 75.36%, which demonstrates the effect of efficient dimensionality reduction.
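For illustration, a minimal sketch of the core idea (standardise hand-crafted per-point shape features, then project onto a few principal components before feeding a PointNet-style classifier). This is a generic reading of the abstract, not the authors' exact two-step pipeline; the function name, the placeholder data and the n_components value are assumptions mirroring the 14-feature / 5-PC figures quoted above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_shape_features(shape_features: np.ndarray, n_components: int = 5) -> np.ndarray:
    """Compress per-point shape features (N x 14) into principal components (N x 5).

    Generic stand-in for the paper's PCA-based feature extraction step:
    standardise first, then keep the leading components.
    """
    scaled = StandardScaler().fit_transform(shape_features)
    return PCA(n_components=n_components).fit_transform(scaled)

# Hypothetical usage: 14 hand-crafted shape features per ALS point,
# reduced to 5 PCs before classification.
features = np.random.rand(100_000, 14)  # placeholder for real ALS shape features
pcs = reduce_shape_features(features)   # shape (100_000, 5)
```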
Publisher: Copernicus GmbH
Date: 30-05-2022
DOI: 10.5194/ISPRS-ARCHIVES-XLIII-B1-2022-59-2022
Abstract: Road surface extraction is crucial for 3D city analysis. Mobile laser scanning (MLS) is the most appropriate data acquisition system for the road environment because of its efficient vehicle-based on-road scanning. Many methods are available for road pavement, curb and roadside way extraction. Most of them use classical approaches that do not mitigate problems caused by the presence of noise and outliers. In practice, however, laser scanning point clouds are not free from noise and outliers, and even a very small portion of outliers and noise can produce unreliable and non-robust results. A road surface usually consists of three key parts: road pavement, curb and roadside way. This paper investigates the problem of road surface extraction in the presence of noise and outliers, and proposes a robust algorithm for road pavement, curb, road divider/island, and roadside way extraction using MLS point clouds. The proposed algorithm employs robust statistical approaches to remove the consequences of noise and outliers. It consists of five sequential steps for separating road ground from non-ground surfaces and determining road-related components. Demonstration on two different MLS data sets shows that the new algorithm is efficient for road surface extraction and for classifying road pavement, curb, road divider/island and roadside way. In one experiment in this paper, curb point extraction achieves a precision of 97.28%, a recall of 100% and a Matthews correlation coefficient of 0.986.
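As one example of the robust statistical filtering the abstract refers to, the sketch below rejects outlying heights with a median/MAD-based robust z-score. The paper's actual five-step pipeline is more elaborate; the function name, threshold and toy data here are illustrative assumptions.

```python
import numpy as np

def mad_filter(values: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Flag inliers via the robust z-score built from the median absolute deviation (MAD)."""
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        return np.ones_like(values, dtype=bool)  # degenerate case: no spread
    robust_z = 0.6745 * (values - median) / mad
    return np.abs(robust_z) < threshold

# Hypothetical usage: keep MLS points whose heights are not outliers within a
# local ground patch before curb/roadside classification.
z = np.array([0.01, 0.03, 0.02, 4.8, 0.02, -3.1, 0.04])
inliers = mad_filter(z)  # the spurious heights 4.8 and -3.1 are rejected
```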
Publisher: American Geophysical Union (AGU)
Date: 05-2013
DOI: 10.1002/JGRB.50152
Publisher: Copernicus GmbH
Date: 02-12-2022
DOI: 10.5194/ISPRS-ARCHIVES-XLVIII-4-W3-2022-111-2022
Abstract: This study investigates the inability of two popular data splitting techniques, train/test split and k-fold cross-validation, which are used to create training and validation data sets, to achieve sufficient generality for supervised deep learning (DL) methods. This failure is mainly caused by their limited ability to create new data. In response, the bootstrap, a computer-based statistical resampling method, has been used efficiently for estimating the distribution of a sample estimator and for assessing a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages as data generation strategies and to achieve better generalization of a DL model. This paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inference on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL based classification algorithm, PointNet, on aerial laser scanning point cloud data.
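A minimal sketch of one way to couple the two techniques: bootstrap each cross-validation fold's training portion so that many training sets exist per fold, and collect validation MSEs to estimate their distribution. This is a generic reading of the abstract's strategy, not the authors' exact algorithm; the function name and parameter values are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.utils import resample

def cv_bootstrap_splits(n_samples: int, k: int = 5, n_boot: int = 100, seed: int = 0):
    """Yield (train, val) index pairs coupling k-fold CV with bootstrap resampling."""
    rng = np.random.RandomState(seed)
    folds = KFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, val_idx in folds.split(np.arange(n_samples)):
        for _ in range(n_boot):
            # Draw a bootstrap sample (with replacement) from this fold's training set.
            boot_train = resample(train_idx, replace=True, random_state=rng)
            yield boot_train, val_idx

# Hypothetical usage: train a classifier (e.g., PointNet) on each bootstrapped
# training set, evaluate MSE on the fold's validation set, and study the
# resulting MSE distribution.
mses = []
for train, val in cv_bootstrap_splits(n_samples=1000, k=5, n_boot=10):
    pass  # fit on `train`, score on `val`, append the MSE to `mses`
```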
Publisher: Copernicus GmbH
Date: 27-10-2022
DOI: 10.5194/ISPRS-ARCHIVES-XLVIII-3-W2-2022-43-2022
Abstract: The building footprint is crucial for a volumetric 3D representation of a building, which is applied in urban planning, 3D city modeling, and cadastral and topographic map generation. Aerial laser scanning (ALS) has been recognized as the most suitable means of large-scale 3D point cloud data (PCD) acquisition. PCD can capture the geometric detail of a scanned surface. However, it is almost impossible to obtain point clouds without noise and outliers, and data incompleteness and occlusions are two common phenomena for PCD. Most of the existing methods for building footprint extraction employ classification, segmentation, voting techniques (e.g., Hough Transform or RANSAC), or Principal Component Analysis (PCA) based methods. Classical PCA is highly sensitive to outliers, and even RANSAC, known as a robust technique for shape detection, is not free from outlier effects. This paper presents a novel algorithm that employs MCMD (maximum consistency within minimum distance), MSAC (a robust variant of RANSAC) and robust regression to extract reliable building footprints in the presence of outliers, missing points and irregular data distributions. The algorithm is successfully demonstrated on two sets of ALS PCD.
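To illustrate how MSAC differs from plain RANSAC, the toy sketch below fits a 2D line to footprint-edge candidates, scoring each hypothesis by a truncated squared-residual cost rather than an inlier count. This is a stand-in for one component of the pipeline, not the paper's implementation; the function name, iteration count and tolerance are assumptions.

```python
import numpy as np

def msac_line(points: np.ndarray, n_iter: int = 500, tol: float = 0.05, seed: int = 0):
    """Fit a 2D line to (N, 2) edge candidates with an MSAC-style truncated cost."""
    rng = np.random.default_rng(seed)
    best_cost, best_model = np.inf, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm == 0:
            continue  # degenerate sample: both points coincide
        n = np.array([-d[1], d[0]]) / norm      # unit normal of the candidate line
        residuals = (points - p1) @ n           # signed point-to-line distances
        # MSAC: sum of squared residuals, truncated at tol^2 (RANSAC would count inliers).
        cost = np.minimum(residuals**2, tol**2).sum()
        if cost < best_cost:
            best_cost, best_model = cost, (p1, n)
    return best_model  # (a point on the line, its unit normal)
```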
Publisher: MDPI AG
Date: 28-03-2020
DOI: 10.3390/SU12072670
Abstract: This article is the second part of a two-part study, which explored the extent to which Building Information Modelling (BIM) is used for End-of-Lifecycle (EoL) scenario selection to minimise Construction and Demolition Waste (CDW). The conventional literature review presented here is based on the conceptual landscape obtained from the bibliometric and scientometric analysis in the first part of the study. Seven main academic research directions concerning the BIM-based EoL domain were found, including social and cultural factors, BIM-based Design for Deconstruction (DfD), BIM-based deconstruction, BIM-based EoL within LCA, BIM-aided waste management, Material and Component Banks (M/C Banks), off-site construction, interoperability and Industry Foundation Classes (IFC). The analysis highlights research gaps along the path from raw materials to reusable materials, i.e., from deconstruction to M/C banks to DfD-based designs and then again to deconstruction. The BIM-based EoL domain suffers from the lack of a global framework. The existing solutions are based on local waste management policies and case-specific sustainability criteria selection. Another drawback of these ad hoc but well-developed BIM-based EoL prototypes is their use of specific proprietary BIM tools to support their frameworks. This disconnection between BIM tools and EoL tools is reportedly hindering BIM-based EoL adoption, while no IFC classes support EoL-phase information exchange.
Publisher: Copernicus GmbH
Date: 30-05-2022
DOI: 10.5194/ISPRS-ARCHIVES-XLIII-B2-2022-617-2022
Abstract: A validation data set plays a pivotal role in tweaking a machine learning model trained in a supervised manner. Many existing algorithms select a part of the available data by random sampling to produce a validation set. However, this approach can be prone to overfitting. One should follow careful data splitting to have reliable training and validation sets that can produce a generalized model with good performance on unseen (test) data. Data splitting based on resampling techniques involves repeatedly drawing samples from the available data. Hence, resampling methods can give better generalization power to a model, because they can produce and use many training and/or validation sets. These techniques are computationally expensive, but with increasingly available high-performance computing facilities one can exploit them. Though a multitude of resampling methods exist, investigation of their influence on the generality of deep learning (DL) algorithms is limited, due to DL's non-linear black-box nature. This paper contributes by: (1) investigating the generalization capability of the four most popular resampling methods: k-fold cross-validation (k-CV), repeated k-CV (Rk-CV), Monte Carlo CV (MC-CV) and the bootstrap, for creating training and validation data sets used for developing, training and validating DL based point cloud classifiers (e.g., PointNet; Qi et al., 2017a), (2) justifying Mean Square Error (MSE) as a statistically consistent estimator, and (3) exploring the use of MSE as a reliable performance metric for supervised DL. Experiments in this paper are performed on both synthetic and real-world aerial laser scanning (ALS) point clouds.
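The four resampling schemes named in the abstract can be expressed with standard scikit-learn splitters, as in the sketch below. The split counts and test fraction are illustrative assumptions, not the paper's settings, and ShuffleSplit is used as the usual realisation of Monte Carlo CV.

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold, ShuffleSplit
from sklearn.utils import resample

n = 1000
indices = np.arange(n)

k_cv  = KFold(n_splits=5, shuffle=True, random_state=0)            # k-CV
rk_cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)    # Rk-CV
mc_cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)   # MC-CV

# Each splitter yields (train, val) index pairs to fit and score a classifier.
for train, val in k_cv.split(indices):
    pass  # fit on `train`, record the validation MSE on `val`

# Bootstrap: draw n indices with replacement; the out-of-bag (OOB) points
# left out of the draw serve as the validation set.
boot = resample(indices, replace=True, n_samples=n, random_state=0)
oob = np.setdiff1d(indices, boot)
```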
Publisher: Copernicus GmbH
Date: 28-06-2021
DOI: 10.5194/ISPRS-ARCHIVES-XLIII-B1-2021-31-2021
Abstract: Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the most dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of using supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance, and investigates three models that are different combinations of features. The method is free from the limitations of point clouds' irregular data structure and varying data density, which are the biggest challenges for using convolutional neural networks. The new algorithm does not require transforming data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
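One plausible flavour of the local features the abstract refers to is eigenvalue-based covariance features over k-nearest neighbours, sketched below; the paper's exact feature set is not specified here, and a random forest stands in for its DL binary classifier. Names and the choice of k are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def local_eigen_features(xyz: np.ndarray, k: int = 20) -> np.ndarray:
    """Compute simple covariance (eigenvalue-based) features per point from (N, 3) coordinates."""
    tree = cKDTree(xyz)
    _, nn = tree.query(xyz, k=k)                # indices of each point's k neighbours
    feats = np.empty((len(xyz), 3))
    for i, idx in enumerate(nn):
        cov = np.cov(xyz[idx].T)                # 3x3 local covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]
        w = w / (w.sum() + 1e-12)               # normalised eigenvalues l1 >= l2 >= l3
        feats[i] = [w[0] - w[1],                # linearity-like
                    w[1] - w[2],                # planarity-like
                    w[2]]                       # scattering-like
    return feats

# Hypothetical usage for binary ground / non-ground labelling:
# clf = RandomForestClassifier().fit(local_eigen_features(xyz), labels)
```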
Publisher: Copernicus GmbH
Date: 23-12-2021
DOI: 10.5194/ISPRS-ARCHIVES-XLVI-4-W5-2021-397-2021
Abstract: Semantic segmentation of point clouds is indispensable for 3D scene understanding. Point clouds reliably capture the geometry of objects, including shape, size, and orientation. Deep learning (DL) has been recognized as the most successful approach for image semantic segmentation. Applied to point clouds, the performance of many DL algorithms degrades, because point clouds are often sparse and have an irregular data format. As a result, point clouds are regularly first transformed into voxel grids or image collections. PointNet was the first promising algorithm that feeds point clouds directly into the DL architecture. Although PointNet achieved remarkable performance on indoor point clouds, its performance has not been extensively studied on large-scale outdoor point clouds. To the best of our knowledge, no study on large-scale aerial point clouds investigates the sensitivity of the hyper-parameters used in PointNet. This paper evaluates PointNet's performance for semantic segmentation on three large-scale Airborne Laser Scanning (ALS) point clouds of urban environments. Reported results show that PointNet has potential for large-scale outdoor scene semantic segmentation. A notable limitation of PointNet is that it does not consider local structure induced by the metric space formed by a point's local neighbours. Experiments show that PointNet is highly sensitive to hyper-parameters such as batch size, block partition and the number of points in a block. For one ALS dataset, we observe a significant difference between overall accuracies of 67.5% and 72.8% for block sizes of 5 m × 5 m and 10 m × 10 m, respectively. Results also show that the performance of PointNet depends on the selection of input vectors.
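The block-partition hyper-parameter studied above can be made concrete with a short sketch: tile the ALS cloud into square XY blocks and sample a fixed number of points per block, as PointNet-style pipelines typically require. The block size, point count and sampling-with-replacement policy here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def partition_blocks(xyz: np.ndarray, block_size: float = 10.0, n_points: int = 4096):
    """Partition an (N, 3) ALS point cloud into square XY blocks with a fixed point count each."""
    rng = np.random.default_rng(0)
    keys = np.floor(xyz[:, :2] / block_size).astype(int)  # integer block coordinates
    for key in np.unique(keys, axis=0):
        idx = np.flatnonzero(np.all(keys == key, axis=1))
        # Sample with replacement only when a block holds fewer than n_points points.
        chosen = rng.choice(idx, size=n_points, replace=len(idx) < n_points)
        yield xyz[chosen]

# Hypothetical usage: compare 5 m and 10 m tilings of the same cloud.
# blocks_5m  = list(partition_blocks(xyz, block_size=5.0))
# blocks_10m = list(partition_blocks(xyz, block_size=10.0))
```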
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Felix Norman Teferle.