ORCID Profile
0000-0003-4176-2215
Current Organisation
Deakin University
Publisher: IEEE
Date: 05-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 14-08-2023
DOI: 10.36227/TECHRXIV.23896986
Abstract: Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress in deep learning and its applications in VSR have led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and decisions are primarily driven by quantitative improvements. Given the significance of VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. This methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present a comprehensive overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identify trends, requirements, and challenges in the domain. As a first-of-its-kind comprehensive overview of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Publisher: SPIE
Date: 13-03-2021
DOI: 10.1117/12.2590406
Publisher: IEEE
Date: 07-2013
Publisher: IEEE
Date: 05-2013
Publisher: MDPI AG
Date: 26-06-2023
DOI: 10.20944/PREPRINTS202306.1738.V1
Abstract: The use of visual signals in horticulture has attracted significant attention and encompassed a wide range of data types such as 2D images, videos, hyperspectral images, and 3D point clouds. These visual signals have proven to be valuable in developing cutting-edge computer vision systems for various applications in horticulture, enabling plant growth monitoring, pest and disease detection, quality and yield estimation, and automated harvesting. However, unlike other sectors, developing deep learning computer vision systems for horticulture encounters unique challenges due to the limited availability of high-quality training and evaluation datasets necessary for deep learning models. This paper investigates the current status of vision systems and available data in order to identify the high-quality data requirements specific to horticultural applications. We analyse the impact of the quality of visual signals on the information content and features that can be extracted from these signals. To address the identified data quality requirements, we explore the usage of a deep learning-based super-resolution model for generative quality enhancement of visual signals. Furthermore, we discuss how these can be applied to meet the growing requirements around data quality for learning-based vision systems. We also present a detailed analysis of the competitive quality generated by the proposed solution compared to cost-intensive hardware-based alternatives. This work aims to guide the development of efficient computer vision models in horticulture by overcoming existing data challenges and paving a pathway forward for contemporary data acquisition.
Publisher: IEEE
Date: 05-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2017
Publisher: SPIE-Intl Soc Optical Eng
Date: 07-2011
DOI: 10.1117/1.3605574
Publisher: MDPI AG
Date: 05-06-2023
DOI: 10.3390/APP13116854
Abstract: Digital transformation, characterised by advanced digitalisation, blockchain, the Internet of Things, artificial intelligence, machine learning technologies, and robotics, has played a key role in revolutionising various industries, especially the healthcare sector. The adoption of and transition (from traditional) to new technology will bring challenges, opportunities, and disruptions to existing healthcare systems. According to the European Union, we must pursue both digital and green transitions to achieve sustainable, human-centric, and resilient industries to achieve a world of prosperity for all. The study aims to present a novel approach to education and training in the digital health field that is inspired by the fifth industrial revolution paradigm. The paper highlights the role of training and education interventions that are required to support digital health in the future so that students can develop the capacity to recognise and exploit the potential of new technologies. This article will briefly discuss the challenges and opportunities related to healthcare systems in the era of digital transformation and beyond. Then, we look at the enabling technologies from an Industry 5.0 perspective that supports digital health. Finally, we present a new teaching and learning paradigm and strategies that embed Industry 5.0 technologies in academic curricula so that students can develop their capacities to embrace a digital future and minimise the disruption that will inevitably accompany it. By incorporating Industry 5.0 principles into digital health education, we believe students can gain a deeper understanding of the industry and develop skills that will enable them to deliver a more efficient, effective, and sustainable healthcare system.
Publisher: IEEE
Date: 07-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 17-05-2023
DOI: 10.36227/TECHRXIV.21500235
Abstract: Recurrent Neural Networks (RNN) are widespread for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite RNN’s ability to propagate memory across longer sequences of frames, vanishing gradient and error accumulation remain major obstacles to unidirectional RNNs in VSR. Several bi-directional recurrent models are suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to heavy demands for computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely “Replenished Recurrency with Dual-Duct” (R2D2), that can be used in an online application setting. R2D2 incorporates a recurrent architecture with a sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features along with global memory for full utilisation of the information available at each time-step. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the lack of information available at each time-step compared to its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-04-2023
DOI: 10.36227/TECHRXIV.20494851
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow for each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360° specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural subcomponents, targeted training and optimisation.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 11-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-04-2023
DOI: 10.36227/TECHRXIV.22591426
Abstract: This paper presents a novel approach to video super-resolution (VSR) by focusing on the selection of input frames, a process critical to VSR. VSR methods typically rely on deep learning techniques that learn features from a large dataset of low-resolution (LR) and corresponding high-resolution (HR) videos and generate high-quality HR frames from any new LR input frames using the learned features. However, these methods often use as input the immediate neighbouring frames to a given target frame without considering the importance and dynamics of the frames across the temporal dimension of a video. This work aims to address the limitations of the conventional sliding-window mechanisms by developing input frame selection algorithms. By dynamically selecting the most representative neighbouring frames based on content-aware selection measures, our proposed algorithms enable VSR models to extract more informative and accurate features that are better aligned with the target frame, leading to improved performance and higher-quality HR frames. Through an empirical study, we demonstrate that the proposed dynamic content-aware selection mechanism improves super-resolution results without any additional architectural overhead, offering a counter-intuitive yet effective alternative to the long-established trend of increasing architectural complexity to improve VSR results.
Publisher: IEEE
Date: 08-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-11-2022
DOI: 10.36227/TECHRXIV.21500235.V1
Abstract: This is an original research article entitled “Online Video Super-Resolution using Unidirectional Recurrent Model”. Considering the critical constraints around video frames and resource availability in an online setting, this paper presents a new unidirectional video super-resolution (VSR) model with a recurrent architecture specifically designed for online applications. Many recent works in the video super-resolution domain focus on improving the super-resolution quality at the cost of computationally intense and input-heavy bidirectional modelling. To alleviate these drawbacks, we propose the Replenished Recurrency with Dual-Duct (R2D2) model, which adopts a unidirectional architecture to fully utilise local features and global memory available at each time-step. The two variants, R2D2 and R2D2-lite, presented in the paper generate state-of-the-art super-resolution quality at significantly optimised efficiency. This is believed to be an important step forward in real-world applications-inspired research in the video super-resolution domain.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-04-2023
DOI: 10.36227/TECHRXIV.20494851.V2
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow for each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360° specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural subcomponents, targeted training and optimisation.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 17-05-2023
DOI: 10.36227/TECHRXIV.21500235.V2
Abstract: Recurrent Neural Networks (RNN) are widespread for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite RNN’s ability to propagate memory across longer sequences of frames, vanishing gradient and error accumulation remain major obstacles to unidirectional RNNs in VSR. Several bi-directional recurrent models are suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to heavy demands for computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely “Replenished Recurrency with Dual-Duct” (R2D2), that can be used in an online application setting. R2D2 incorporates a recurrent architecture with a sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features along with global memory for full utilisation of the information available at each time-step. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the lack of information available at each time-step compared to its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 14-08-2023
DOI: 10.36227/TECHRXIV.23896986.V1
Abstract: Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress in deep learning and its applications in VSR have led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and decisions are primarily driven by quantitative improvements. Given the significance of VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. This methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present a comprehensive overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identify trends, requirements, and challenges in the domain. As a first-of-its-kind comprehensive overview of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Publisher: Institution of Engineering and Technology (IET)
Date: 06-2016
DOI: 10.1049/EL.2016.0261
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-04-2023
DOI: 10.36227/TECHRXIV.22591426.V1
Abstract: This paper presents a novel approach to video super-resolution (VSR) by focusing on the selection of input frames, a process critical to VSR. VSR methods typically rely on deep learning techniques that learn features from a large dataset of low-resolution (LR) and corresponding high-resolution (HR) videos and generate high-quality HR frames from any new LR input frames using the learned features. However, these methods often use as input the immediate neighbouring frames to a given target frame without considering the importance and dynamics of the frames across the temporal dimension of a video. This work aims to address the limitations of the conventional sliding-window mechanisms by developing input frame selection algorithms. By dynamically selecting the most representative neighbouring frames based on content-aware selection measures, our proposed algorithms enable VSR models to extract more informative and accurate features that are better aligned with the target frame, leading to improved performance and higher-quality HR frames. Through an empirical study, we demonstrate that the proposed dynamic content-aware selection mechanism improves super-resolution results without any additional architectural overhead, offering a counter-intuitive yet effective alternative to the long-established trend of increasing architectural complexity to improve VSR results.
Publisher: Institution of Engineering and Technology
Date: 2013
DOI: 10.1049/IC.2013.0004
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 23-08-2022
DOI: 10.36227/TECHRXIV.20494851.V1
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow for each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360° specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
Publisher: Elsevier BV
Date: 11-2013
No related grants have been discovered for Tsz-Kwan Lee.