ORCID Profile
0000-0001-9346-4848
Current Organisation
Sirindhorn International Institute of Technology, Thammasat University
Publisher: MDPI AG
Date: 02-10-2023
DOI: 10.3390/RS15194804
Publisher: MDPI AG
Date: 12-2022
DOI: 10.3390/RS14236095
Abstract: Floods and droughts cause catastrophic damage in paddy fields, and farmers need to be compensated for their loss. Mobile applications have allowed farmers to claim losses by providing mobile photos and polygons of their land plots drawn on satellite base maps. This paper studies diverse methods to verify those claims at a parcel level by employing (i) the Normalized Difference Vegetation Index (NDVI) and (ii) the Normalized Difference Water Index (NDWI) on Sentinel-2A images, (iii) Classification and Regression Tree (CART) on Sentinel-1 SAR GRD images, and (iv) a convolutional neural network (CNN) on mobile photos. To address the disturbance from clouds, we study combinations of multi-modal methods—NDVI+CNN and NDWI+CNN—that allow 86.21% and 83.79% accuracy in flood detection and 73.40% and 81.91% in drought detection, respectively. The SAR-based method outperforms the other methods in terms of accuracy in flood (98.77%) and drought (99.44%) detection, data acquisition, parcel coverage, cloud disturbance, and observing the area proportion of disasters in the field. The experiments conclude that the method of CART on SAR images is the most reliable to verify farmers’ claims for compensation. In addition, the CNN-based method’s performance on mobile photos is adequate, providing an alternative to the CART method in the case of data unavailability when using SAR images.
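The NDVI and NDWI thresholds mentioned in the abstract are standard band-ratio indices. A minimal sketch of how they could be computed on Sentinel-2 reflectance patches (the band choices B8/B4/B3 and the example thresholds are assumptions for illustration, not values taken from the paper):

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero

def ndwi(green, nir):
    # Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / (green + nir + 1e-10)

# Toy 2x2 reflectance patches; for Sentinel-2, B8 = NIR, B4 = Red, B3 = Green
nir   = np.array([[0.40, 0.05], [0.42, 0.06]])
red   = np.array([[0.08, 0.04], [0.07, 0.05]])
green = np.array([[0.10, 0.12], [0.09, 0.11]])

# Illustrative per-pixel rules: positive NDWI suggests open water (flood),
# high NDVI suggests healthy vegetation; actual thresholds are study-specific.
flooded   = ndwi(green, nir) > 0.0
vegetated = ndvi(nir, red) > 0.3
```

In a parcel-level workflow, such per-pixel masks would then be aggregated over each claimed land-plot polygon before a flood or drought decision is made.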
Publisher: Copernicus GmbH
Date: 14-10-2022
DOI: 10.5194/ISPRS-ANNALS-X-4-W3-2022-181-2022
Abstract: Accurate detection, classification, and tracking of vehicles are highly important for intelligent transport systems (ITS) and road maintenance. In recent years, the deep learning (DL)-based approach has been highly regarded for real-time vehicle classification from surveillance cameras. However, the practical implementation of such an approach is affected by adverse lighting conditions and the positioning of the cameras. In this research, we develop a DL-based method for near real-time multi-vehicle counting, classifying, and tracking on individual lanes of the road. First, we train a DL network of the You Only Look Once (YOLO) family on a custom dataset that we have curated. The dataset consists of nearly 30,000 training samples to classify the vehicles into seven classes, which is more than in the existing benchmark datasets. Second, we fine-tune the trained model on another small dataset collected from the surveillance cameras that are used during the implementation process. Third, we connect the trained model to a tracking algorithm that we have developed to produce a per-lane report with the calculation of the speed and mobility of the vehicles. We test the robustness of the system on different faces of the vehicles and in adverse lighting conditions. The overall accuracy (OA) of classification ranges from 91% to 99% across four faces of vehicles (back, front, driver side, and passenger side). Similarly, in an experiment on adverse lighting conditions, OAs of 93.7% and 99.6% are observed in noisy and clear lighting conditions, respectively. The implications of these results will assist in road maintenance with spatial information management and sensing for intelligent transport planning.
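The per-lane reporting step described above can be sketched as a simple post-processing pass over detector outputs. Everything here is a hypothetical illustration: the lane boundaries, the `(x_center, class)` detection format, and the helper names are assumptions, since the abstract does not publish the tracking algorithm itself:

```python
from collections import Counter

# Hypothetical lane boundaries in image x-coordinates (pixels); a real system
# would derive these from camera placement/calibration, not hard-code them.
LANE_EDGES = [0, 400, 800, 1200]  # three lanes

def lane_of(x_center):
    """Assign a detection to a lane by the x-coordinate of its box center."""
    for i in range(len(LANE_EDGES) - 1):
        if LANE_EDGES[i] <= x_center < LANE_EDGES[i + 1]:
            return i
    return None  # detection falls outside the monitored lanes

def per_lane_report(detections):
    """Count vehicles per (lane, class) from a list of (x_center, class) pairs,
    as a YOLO-family detector might emit for one frame."""
    report = Counter()
    for x, cls in detections:
        lane = lane_of(x)
        if lane is not None:
            report[(lane, cls)] += 1
    return dict(report)

# Simulated detector outputs for a single frame
frame = [(120, "car"), (350, "truck"), (610, "car"), (990, "bus")]
report = per_lane_report(frame)
```

The published system additionally tracks identities across frames to compute speed and mobility; this sketch only shows the lane-assignment and counting idea for one frame.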
Publisher: IEEE
Date: 07-2022
Location: Thailand
No related grants have been discovered for Aakash Thapa.