ORCID Profile
0000-0002-3694-4703
Current Organisation
Université de Haute-Alsace
Publisher: Springer Science and Business Media LLC
Date: 07-09-2020
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 11-2017
Publisher: MDPI AG
Date: 27-12-2022
DOI: 10.3390/RS15010151
Abstract: In the context of global change, producing up-to-date land use land cover (LULC) maps is a major challenge for assessing pressures on natural areas. These maps also allow us to assess the evolution of land cover and to quantify changes over time (such as urban sprawl), which is essential for a precise understanding of a given territory. Few studies have combined information from Sentinel-1 and Sentinel-2 imagery, yet merging radar and optical imagery has been shown to benefit a range of study cases, such as semantic segmentation or classification. For this study, we used a newly produced dataset, MultiSenGE, which provides a set of multitemporal and multimodal patches over the Grand-Est region in France. To merge these data, we propose a CNN approach based on spatio-temporal and spatio-spectral feature fusion, ConvLSTM+Inception-S1S2. We used a U-Net base model with a ConvLSTM extractor for spatio-temporal features and an Inception module as the spatio-spectral feature extractor. The results show that describing an overrepresented class is preferable for mapping urban fabrics (UF). Furthermore, adding an Inception module on a single date, allowing the extraction of spatio-spectral features, improves the classification results. The spatio-spectro-temporal method (ConvLSTM+Inception-S1S2) achieves a higher global weighted F1-Score than all the other methods tested.
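The abstract compares methods by their global weighted F1-Score. As a minimal sketch of that metric (the class labels below are hypothetical, not taken from the paper), the per-class F1 values are averaged with weights proportional to each class's support, so overrepresented classes dominate the score:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Class-weighted F1: per-class F1 averaged with weights
    proportional to each class's support (frequency) in y_true."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (support[c] / total) * f1
    return score

# Illustrative LULC labels (hypothetical):
y_true = ["urban", "urban", "forest", "water"]
y_pred = ["urban", "forest", "forest", "water"]
print(weighted_f1(y_true, y_pred))  # 0.75
```

This matches the behavior of scikit-learn's `f1_score(..., average="weighted")`; the pure-Python version is shown only to make the weighting explicit.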
Publisher: Springer International Publishing
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 02-03-2019
Publisher: IEEE
Date: 12-2018
Publisher: Springer Science and Business Media LLC
Date: 28-05-2019
Publisher: MDPI
Date: 21-06-2022
Publisher: IEEE
Date: 07-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2014
Publisher: Elsevier BV
Date: 09-2018
DOI: 10.1016/J.ARTMED.2018.08.002
Abstract: The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes an automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step in improving surgical patient care. In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented as strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) enabling discriminative gesture discovery via its relative occurrence frequency. We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels and surgical interfaces. We also present how the patterns provide detailed feedback on the trainee's skill assessment. The proposed approach is an interesting addition to existing learning tools for surgery, as it provides feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
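The abstract's core statistic is tf-idf computed over bag-of-words representations of gesture strings. A minimal sketch of that computation (the gesture tokens below are illustrative, not the paper's actual vocabulary): a token scores highly for a recording when it is frequent there but rare across the other recordings, which is what makes it discriminative.

```python
import math
from collections import Counter

def tf_idf(documents):
    """documents: list of token lists (each a 'bag of words' built from a
    gesture string). Returns one {token: tf-idf score} dict per document."""
    n = len(documents)
    # Document frequency: in how many documents each token appears.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        total = len(doc)
        # tf-idf = (relative frequency in this doc) * log(N / doc frequency)
        scores.append({tok: (count / total) * math.log(n / df[tok])
                       for tok, count in tf.items()})
    return scores

# Hypothetical gesture tokens from three recordings:
docs = [["reach", "grasp", "reach"], ["grasp", "cut"], ["reach", "cut"]]
scores = tf_idf(docs)
```

In the first recording, "reach" outranks "grasp" because it occurs twice as often while appearing in the same number of recordings; ranking tokens by these scores surfaces the patterns most characteristic of a given gesture or skill level.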
Publisher: Informa UK Limited
Date: 19-03-2022
Publisher: Springer Science and Business Media LLC
Date: 16-10-2021
No related grants have been discovered for Jonathan Weber.