ORCID Profile
0000-0002-0318-4496
Current Organisation
Universiti Putra Malaysia
Publisher: IEEE
Date: 08-2014
Publisher: Elsevier BV
Date: 2015
Publisher: IEEE
Date: 2009
Publisher: MDPI AG
Date: 20-06-2019
DOI: 10.3390/RS11121461
Abstract: In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. Currently, few (if any) studies have attempted to increase land cover classification accuracy via unmanned aerial vehicle (UAV)–digital surface model (DSM) fused datasets. Therefore, this study looks at improving the accuracy of these datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of the two datasets with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) only orthomosaic image data (Red, Green and Blue channel data), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. As a classification method, the CNN is promising due to its hierarchical learning structure, regularization and weight sharing with respect to the training data, good generalization, optimization and parameter reduction, automatic feature extraction, and robust discriminative ability with high performance. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the accuracies of the CNN with DSM against the CNN without DSM revealed improvements of 1.2%, 1.8% and 1.5% in overall accuracy, average accuracy and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation types, specifically where plants were dense.
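The abstract reports three standard agreement metrics: overall accuracy, average accuracy, and the Kappa index. As a minimal sketch of how these are computed from a confusion matrix (the 3x3 matrix below is illustrative, not data from the paper):

```python
# Toy evaluation of a land-cover confusion matrix using the three metrics
# reported in the abstract. All counts are made up for illustration.

def classification_metrics(cm):
    """cm[i][j] = count of samples with true class i predicted as class j."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    # Overall accuracy: fraction of correctly classified samples.
    oa = sum(cm[i][i] for i in range(k)) / n
    # Average accuracy: mean of the per-class recalls.
    aa = sum(cm[i][i] / sum(cm[i]) for i in range(k)) / k
    # Expected agreement by chance, from row and column marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

cm = [[50, 2, 1],
      [3, 45, 2],
      [1, 1, 48]]
oa, aa, kappa = classification_metrics(cm)
print(round(oa, 3), round(aa, 3), round(kappa, 3))  # → 0.935 0.934 0.902
```

Kappa discounts agreement expected by chance, which is why it is often reported alongside raw accuracy in land cover studies.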
Publisher: Author(s)
Date: 2017
DOI: 10.1063/1.5005456
Publisher: MDPI AG
Date: 05-05-2022
DOI: 10.3390/RS14092214
Abstract: Building damage maps can be generated from either optical or Light Detection and Ranging (Lidar) datasets. In the wake of a disaster such as an earthquake, a timely and detailed map is a critical reference for disaster teams in order to plan and perform rescue and evacuation missions. Recent studies have shown that, instead of being used individually, optical and Lidar data can potentially be fused to obtain greater detail. In this study, we explore this fusion potential, which incorporates deep learning. The overall framework involves a novel end-to-end convolutional neural network (CNN) that performs building damage detection. Specifically, our building damage detection network (BDD-Net) utilizes three deep feature streams (through a multi-scale residual depth-wise convolution block) that are fused at different levels of the network. This is unlike other fusion networks that only perform fusion at the first and the last levels. The performance of BDD-Net is evaluated across three phases, using optical and Lidar datasets for the 2010 Haiti Earthquake. The three main phases are: (1) data preprocessing and building footprint extraction based on building vector maps, (2) sample data preparation and data augmentation, and (3) model optimization and building damage map generation. The results of building damage detection in two scenarios show that fusing the optical and Lidar datasets significantly improves building damage map generation, with an overall accuracy (OA) greater than 88%.
Publisher: Elsevier BV
Date: 2014
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer International Publishing
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2017
Publisher: IEEE
Date: 10-2016
Publisher: IEEE
Date: 12-2008
Publisher: IEEE
Date: 10-2015
Publisher: MDPI AG
Date: 28-05-2020
DOI: 10.3390/RS12111737
Abstract: Predicting landslide occurrences can be difficult. However, failure to do so can be catastrophic, causing unwanted tragedies such as property damage, community displacement, and human casualties. Research into landslide susceptibility mapping (LSM) attempts to alleviate such catastrophes through the identification of landslide prone areas. Computational modelling techniques have been successful in related disaster scenarios, which motivates this work to explore such modelling for LSM. In this research, the potential of supervised machine learning and ensemble learning is investigated. Firstly, the Flexible Discriminant Analysis (FDA) supervised learning algorithm is trained for LSM and compared against other algorithms that have been widely used for the same purpose, namely Generalized Logistic Models (GLM), Boosted Regression Trees (BRT or GBM), and Random Forest (RF). Next, an ensemble model consisting of all four algorithms is implemented to examine possible performance improvements. The dataset used to train and test all the algorithms consists of a landslide inventory map of 227 landslide locations. From these sources, 13 conditioning factors are extracted to be used in the models. Experimental evaluations are made based on the True Skill Statistic (TSS), the Receiver Operating Characteristic (ROC) curve and the kappa index. The results show that the best TSS (0.6986), ROC (0.904) and kappa (0.6915) were obtained by the ensemble model. FDA on its own seems effective at modelling landslide susceptibility from multiple data sources, with performance comparable to GLM. However, it slightly underperforms when compared to GBM (BRT) and RF. RF seems most capable compared to GBM, GLM, and FDA when dealing with all conditioning factors.
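The True Skill Statistic used above is defined as sensitivity plus specificity minus one. A minimal sketch from a binary confusion matrix (the counts below are illustrative, not taken from the study):

```python
# True Skill Statistic for a binary (landslide / no-landslide) classifier.
# The counts passed in at the bottom are made up for illustration.

def true_skill_statistic(tp, fn, fp, tn):
    """TSS = sensitivity + specificity - 1, ranging from -1 to +1."""
    sensitivity = tp / (tp + fn)   # hit rate on landslide cells
    specificity = tn / (tn + fp)   # hit rate on non-landslide cells
    return sensitivity + specificity - 1

tss = true_skill_statistic(tp=80, fn=20, fp=15, tn=85)
print(round(tss, 4))  # → 0.65
```

Unlike raw accuracy, TSS is unaffected by the ratio of landslide to non-landslide cells, which is why it is popular for susceptibility mapping where positives are rare.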
Publisher: IEEE
Date: 08-2008
DOI: 10.1109/CGIV.2008.34
Publisher: MDPI AG
Date: 12-10-2021
Abstract: According to the Food Wastage Footprint and Climate Change Report, about 15% of all fruits and 25% of all vegetables are wasted at the base of the food production chain. The significant losses and wastes in the fresh and processing industries are becoming a serious environmental issue, mainly due to the impacts of microbial degradation. There has been a recent surge in research and innovation related to food, packaging, and pharmaceutical applications to address these problems. The underutilized wastes (seed, skin, rind, and pomace) potentially present good sources of valuable bioactive compounds, including functional nutrients, amylopectin, phytochemicals, vitamins, enzymes, dietary fibers, and oils. Fruit and vegetable wastes (FVW) are rich in nutrients and extra nutritional compounds that contribute to the development of animal feed, bioactive ingredients, and ethanol production. In the development of active packaging films, pectin and other biopolymers are commonly used. In addition, the most recent research studies dealing with FVW have enhanced the physical, mechanical, antioxidant, and antimicrobial properties of packaging and biocomposite systems. Innovative technologies that can be used for sensitive bioactive compound extraction and fortification will be crucial in valorizing FVW completely; thus, this article aims to report the progress made in terms of the valorization of FVW and to emphasize the applications of FVW in active packaging and biocomposites, their by-products, and the innovative technologies (both thermal and non-thermal) that can be used for bioactive compound extraction.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: AICIT
Date: 31-08-2012
Publisher: Elsevier BV
Date: 08-2015
Publisher: IEEE
Date: 10-2013
Publisher: AICIT
Date: 30-11-2011
Publisher: Global Vision Press
Date: 31-10-2015
Publisher: MDPI AG
Date: 28-10-2020
DOI: 10.3390/RS12213529
Abstract: In recent years, remote-sensing (RS) technologies have been used together with image processing and traditional techniques in various disaster-related works. Among these is detecting building damage from orthophoto imagery that was inflicted by earthquakes. Automatic and visual techniques are considered typical methods to produce building damage maps using RS images. The visual technique, however, is time-consuming due to manual sampling. The automatic method is able to detect damaged buildings by extracting defect features. However, varied building designs and widely changing real-world conditions, such as shadow and lighting changes, pose challenges to the wide adoption of automatic methods. As a potential solution for such challenges, this research proposes the adoption of deep learning (DL), specifically convolutional neural networks (CNNs), which have a high ability to learn features automatically, to identify damaged buildings from pre- and post-event RS imagery. Since RS data revolves around imagery, CNNs can arguably be most effective at automatically discovering relevant features, avoiding the need for feature engineering based on expert knowledge. In this work, we focus on orthophoto imagery for damaged-building detection, specifically for (i) background, (ii) no damage, (iii) minor damage, and (iv) debris classifications. The gist is to uncover the CNN architecture that will work best for this purpose. To this end, three CNN models, namely the twin model, fusion model, and composite model, are applied to the pre- and post-event orthophoto imagery collected from the 2016 Kumamoto earthquake, Japan. The robustness of the models was evaluated using four evaluation metrics, namely overall accuracy (OA), producer accuracy (PA), user accuracy (UA), and F1 score.
According to the obtained results, the twin model achieved higher accuracy (OA = 76.86%, F1 score = 0.761) compared to the fusion model (OA = 72.27%, F1 score = 0.714) and the composite model (OA = 69.24%, F1 score = 0.682).
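The abstract evaluates with producer accuracy (PA), user accuracy (UA), and F1. A sketch of how the three relate for one class of a multi-class confusion matrix; the counts and the class ordering are illustrative only, not the paper's data:

```python
# PA (recall), UA (precision), and F1 for one class of a confusion matrix.
# Rows/columns (illustrative): background, no damage, minor damage, debris.

def class_metrics(cm, c):
    """PA, UA, and F1 for class index c of confusion matrix cm."""
    tp = cm[c][c]
    pa = tp / sum(cm[c])                     # producer accuracy = recall
    ua = tp / sum(row[c] for row in cm)      # user accuracy = precision
    f1 = 2 * pa * ua / (pa + ua)             # harmonic mean of the two
    return pa, ua, f1

cm = [[90, 5, 3, 2],
      [4, 70, 16, 10],
      [2, 12, 60, 26],
      [1, 8, 21, 70]]
pa, ua, f1 = class_metrics(cm, 3)            # metrics for the "debris" class
print(round(pa, 3), round(ua, 3), round(f1, 3))  # → 0.7 0.648 0.673
```

Reporting PA and UA separately shows whether a class is being missed (low PA) or over-predicted (low UA), which a single OA number hides.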
Publisher: MDPI AG
Date: 25-08-2021
DOI: 10.3390/SU13179571
Abstract: In this paper, we assess the extent of environmental pollution in terms of PM2.5 particulate matter and noise at Tikrit University, located in Tikrit City, Iraq. Geographic information systems (GIS) technology was used for data analysis. Moreover, we built two multiple linear regression models (based on two different data inputs) for the prediction of PM2.5 particulate matter, which were based on the explanatory variables of maximum and minimum noise, temperature, and humidity. The maximum prediction coefficient R2 of the best models was 0.82, with a validated (via testing data) coefficient R2 of 0.94. From the actual total distribution of PM2.5 particulate values ranging from 35–58 μg/m3, our best model managed to predict values between 34.9–60.6 μg/m3. At the end of the study, the overall air quality was determined to be between moderate and harmful. In addition, the overall detected noise ranged from 49.30–85.79 dB, which categorizes the study area as a noisy zone, despite it being an educational institution.
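Multiple linear regression of the kind described can be fitted by ordinary least squares. A minimal pure-Python sketch (normal equations plus Gaussian elimination); the data are synthetic, and two predictors stand in for the study's full variable set:

```python
# Ordinary least squares for y = b0 + b1*x1 + b2*x2, as a stand-in for the
# PM2.5 regression described in the abstract. The data below are made up.

def ols_fit(X, y):
    """Solve the normal equations (A^T A) b = A^T y, with an intercept."""
    A = [[1.0] + row for row in X]
    k = len(A[0])
    M = [[sum(a[i] * a[j] for a in A) for j in range(k)] for i in range(k)]
    v = [sum(a[i] * yi for a, yi in zip(A, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution.
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, k))) / M[i][i]
    return b

# Synthetic (noise, temperature) samples following y = 2 + 0.5*x1 + 0.3*x2.
X = [[60, 25], [70, 30], [80, 28], [65, 35], [75, 22]]
y = [2 + 0.5 * x1 + 0.3 * x2 for x1, x2 in X]
b0, b1, b2 = ols_fit(X, y)
```

Because the synthetic targets follow the linear relation exactly, the fit recovers the coefficients (2, 0.5, 0.3) up to floating-point error; in practice a library solver would be preferred over hand-rolled elimination.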
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Hindawi Limited
Date: 21-02-2022
DOI: 10.1155/2022/8044390
Abstract: Forest conservation is crucial for the maintenance of a healthy and thriving ecosystem. The field of remote sensing (RS) has been integral with the wide adoption of computer vision and sensor technologies for forest land observation. One critical area of interest is the detection of active forest fires. A forest fire, whether occurring naturally or manually induced, can quickly sweep through vast amounts of land, leaving behind unfathomable damage and loss of lives. Automatic detection of active forest fires (and burning biomass) is hence an important area to pursue to avoid unwanted catastrophes. Early fire detection can also be useful for decision makers to plan mitigation strategies as well as extinguishing efforts. In this paper, we present a deep learning framework called Fire-Net, which is trained on Landsat-8 imagery for the detection of active fires and burning biomass. Specifically, we fuse the optical (Red, Green, and Blue) and thermal modalities from the images for a more effective representation. In addition, our network leverages residual convolution and separable convolution blocks, enabling deeper features to be extracted from coarse datasets. Experimental results show an overall accuracy of 97.35%, while the network is also able to robustly detect small active fires. The imagery for this study was taken from forest regions of Australia and North America, the Amazon rainforest, Central Africa and Chernobyl (Ukraine), where forest fires are actively reported.
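Fire-Net itself is a deep network; as a deliberately naive stand-in for the thermal side of its optical-thermal fusion, the sketch below flags pixels whose thermal value stands far above the scene statistics. The grid, values, and threshold rule are all illustrative assumptions, not the paper's method:

```python
# Toy per-pixel anomaly rule for "active fire" candidates: flag pixels whose
# thermal value exceeds the scene mean by k standard deviations. This is a
# crude illustration only; Fire-Net learns its features from data.
from statistics import mean, pstdev

def flag_fire_pixels(thermal, k=2.0):
    """Return (row, col) of pixels whose value > mean + k * population std."""
    flat = [v for row in thermal for v in row]
    cutoff = mean(flat) + k * pstdev(flat)
    return [(r, c)
            for r, row in enumerate(thermal)
            for c, v in enumerate(row)
            if v > cutoff]

thermal = [[300, 301, 299],
           [302, 300, 345],   # one hot anomaly
           [298, 300, 301]]
print(flag_fire_pixels(thermal))  # → [(1, 2)]
```

A fixed statistical threshold like this breaks down for small or cool fires, which is precisely the regime where the learned network in the abstract is reported to remain robust.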
Publisher: Science Publications
Date: 06-2013
Publisher: Elsevier BV
Date: 11-2016
DOI: 10.1016/J.AAP.2016.04.013
Abstract: Motorcyclists are particularly vulnerable to injury in crashes with heavy vehicles due to substantial differences in vehicle mass, degree of protection and speed. There is a considerable difference in height between motorcycles and trucks; motorcycles are viewed by truck drivers from downward angles, and shorter distances between them mean steeper downward angles. Hence, we anticipated that the effects of motorcycle conspicuity treatments would be different for truck drivers. Therefore, this study aims to evaluate the effects of motorcycle conspicuity treatments on the identification and detection of motorcycles by truck drivers. Two complementary experiments were performed: the first assessed the impact of motorcycle sensory conspicuity on the ability of un-alerted truck drivers to detect motorcycles, and the second assessed motorcycle cognitive conspicuity to alerted truck drivers. Sensory conspicuity was measured in terms of motorcycle detection rates by un-alerted truck drivers when they were not anticipating a motorcycle within a realistic driving scene, while cognitive conspicuity was determined by the time taken by alerted truck drivers to actively search for a motorcycle. In the first experiment, the participants were presented with 10 pictures and were instructed to report the kinds of vehicles that were presented in the pictures. Each picture was shown to the participants for 600 ms. In the second experiment, the participants were presented with the same set of pictures and were instructed to respond by clicking the right button on a mouse as soon as they detected a motorcycle in the picture. The results indicate that the motorcycle detection rate increases, and the response time to search for a motorcycle decreases, as the distance between the targeted motorcycle and the viewer decreases. This is true regardless of the type of conspicuity treatment used.
The use of daytime running headlights (DRH) was found to increase the detection rate and identification of a motorcycle by a truck driver at a farther distance, but the effect deteriorates as the distance decreases. The results show that the detection rate and identification of a motorcyclist wearing a black helmet with a reflective sticker increase as the distance between the motorcycle and the truck decreases. We also found that a motorcyclist wearing a white helmet and a white outfit is more identifiable and detectable at both shorter and longer distances. In conclusion, although this study provides evidence that the use of appropriate conspicuity treatments enhances motorcycle conspicuity to truck drivers, we suggest that more attention should be paid to the effect of the background environment on motorcycle conspicuity.
Publisher: IEEE
Date: 11-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Location: Malaysia
No related grants have been discovered for Alfian Abdul Halin.