ORCID Profile
0000-0002-9359-6506
Current Organisations
Department of Energy, Environment and Climate Action
Deakin University
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 14-08-2023
DOI: 10.36227/TECHRXIV.23896986
Abstract: Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress of deep learning and its applications in VSR has led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and design decisions are primarily driven by quantitative improvements. Given VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. Such a methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present a comprehensive overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identify trends, requirements, and challenges in the domain. As a first-of-its-kind comprehensive overview of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 17-05-2023
DOI: 10.36227/TECHRXIV.21500235
Abstract: Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite an RNN's ability to propagate memory across longer sequences of frames, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bi-directional recurrent models have been suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to their heavy demands on computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely “Replenished Recurrency with Dual-Duct” (R2D2), that can be used in an online application setting. R2D2 incorporates a recurrent architecture with a sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features along with global memory, for full utilisation of the information available at each time-step. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the limited information available at each time-step compared to its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
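To illustrate the unidirectional (online-capable) setting this abstract describes, the sketch below processes frames strictly in order while carrying a single hidden state forward. The fusion rule and nearest-neighbour upsampler are placeholder assumptions for illustration only, not the R2D2 architecture.

```python
import numpy as np

def upscale_frame(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Nearest-neighbour upscaling as a stand-in for a learned upsampler."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def recurrent_vsr(frames, scale=2):
    """Process frames strictly in temporal order, propagating one hidden
    state forward -- no future frames are needed, so the loop can run online."""
    hidden = np.zeros_like(frames[0], dtype=np.float64)
    outputs = []
    for frame in frames:
        # Fuse the current frame with the propagated memory. A real model
        # would use learned convolutions here; a running average stands in.
        hidden = 0.5 * hidden + 0.5 * frame
        outputs.append(upscale_frame(hidden, scale))
    return outputs

lr_frames = [np.full((4, 4), float(i)) for i in range(3)]
sr_frames = recurrent_vsr(lr_frames, scale=2)
print(sr_frames[0].shape)  # (8, 8)
```

Because each output depends only on past frames and the carried state, memory use stays constant regardless of sequence length, which is the efficiency argument made for unidirectional models above.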
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-04-2023
DOI: 10.36227/TECHRXIV.20494851
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
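The abstract mentions a loss function that accounts for spherical distortion. S3PO's actual loss is defined in the paper; a common general technique for equirectangular frames, sketched below purely as an assumption, is latitude-dependent weighting: rows near the poles are over-represented by the projection and so contribute less to the error.

```python
import numpy as np

def latitude_weights(height: int) -> np.ndarray:
    """Per-row weights for an equirectangular frame of the given height.
    Rows near the equator (frame centre) get weight near 1; rows near the
    poles, which the projection stretches, get smaller weights."""
    rows = np.arange(height)
    return np.cos((rows + 0.5 - height / 2) * np.pi / height)

def weighted_mse(pred: np.ndarray, target: np.ndarray) -> float:
    """Latitude-weighted mean squared error over an (H, W) frame,
    normalised so a uniform unit error gives exactly 1.0."""
    w = latitude_weights(pred.shape[0])[:, None]  # shape (H, 1), broadcasts over W
    return float(np.sum(w * (pred - target) ** 2) / (np.sum(w) * pred.shape[1]))
```

With this weighting, identical pole-region artefacts are penalised less than equator-region ones, mirroring how much solid angle each row actually covers on the sphere.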
Publisher: MDPI AG
Date: 08-09-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: MDPI AG
Date: 26-06-2023
DOI: 10.20944/PREPRINTS202306.1738.V1
Abstract: The use of visual signals in horticulture has attracted significant attention and encompassed a wide range of data types such as 2D images, videos, hyperspectral images, and 3D point clouds. These visual signals have proven to be valuable in developing cutting-edge computer vision systems for various applications in horticulture, enabling plant growth monitoring, pest and disease detection, quality and yield estimation, and automated harvesting. However, unlike other sectors, developing deep learning computer vision systems for horticulture encounters unique challenges due to the limited availability of high-quality training and evaluation datasets necessary for deep learning models. This paper investigates the current status of vision systems and available data in order to identify the high-quality data requirements specific to horticultural applications. We analyse the impact of the quality of visual signals on the information content and features that can be extracted from these signals. To address the identified data quality requirements, we explore the usage of a deep learning-based super-resolution model for generative quality enhancement of visual signals. Furthermore, we discuss how these can be applied to meet the growing requirements around data quality for learning-based vision systems. We also present a detailed analysis of the competitive quality generated by the proposed solution compared to cost-intensive hardware-based alternatives. This work aims to guide the development of efficient computer vision models in horticulture by overcoming existing data challenges and paving a pathway forward for contemporary data acquisition.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-04-2023
DOI: 10.36227/TECHRXIV.22591426
Abstract: This paper presents a novel approach to video super-resolution (VSR) by focusing on the selection of input frames, a process critical to VSR. VSR methods typically rely on deep learning techniques that learn features from a large dataset of low-resolution (LR) and corresponding high-resolution (HR) videos and generate high-quality HR frames from any new LR input frames using the learned features. However, these methods often use as input the frames immediately neighbouring a given target frame, without considering the importance and dynamics of frames across the temporal dimension of a video. This work addresses the limitations of conventional sliding-window mechanisms by developing input frame selection algorithms. By dynamically selecting the most representative neighbouring frames based on content-aware selection measures, our proposed algorithms enable VSR models to extract more informative and accurate features that are better aligned with the target frame, leading to improved performance and higher-quality HR frames. Through an empirical study, we demonstrate that the proposed dynamic content-aware selection mechanism improves super-resolution results without any additional architectural overhead, offering a counter-intuitive yet effective alternative to the long-established trend of increasing architectural complexity to improve VSR results.
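The idea of content-aware frame selection can be sketched in a few lines: instead of taking the immediate neighbours of the target frame, rank all candidate frames by a similarity measure and keep the top k. The MSE-based measure below is a placeholder assumption; the paper's actual selection measures are not specified in this abstract.

```python
import numpy as np

def select_frames(frames, target_idx: int, k: int = 2):
    """Return the indices of the k frames most similar to the target frame,
    using MSE as a stand-in content-aware similarity measure (lower = closer)."""
    target = frames[target_idx]
    candidates = [i for i in range(len(frames)) if i != target_idx]
    # Rank candidates by their pixel-wise MSE against the target frame.
    candidates.sort(key=lambda i: np.mean((frames[i] - target) ** 2))
    return sorted(candidates[:k])

# Frame 2 differs strongly from the target (frame 0), so it is skipped
# even though it is an immediate temporal neighbour of frame 1 and 3.
frames = [np.full((2, 2), v) for v in (0.0, 0.1, 5.0, 0.2)]
print(select_frames(frames, target_idx=0, k=2))  # → [1, 3]
```

This matches the abstract's claim that selection adds no architectural overhead: the VSR model itself is unchanged, only its input window is chosen differently.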
Publisher: MDPI AG
Date: 21-10-2020
DOI: 10.20944/PREPRINTS202010.0429.V1
Abstract: Research has shown the multitude of applications that IoT, cloud computing and forecast technologies present in every sector. In agriculture, one application is the monitoring of factors that influence crop development to assist in making crop management decisions. Research on the application of such technologies in agriculture has mainly been conducted at small experimental sites or under controlled conditions. This research has provided relevant insights and guidelines for the use of different types of sensors, the application of a multitude of algorithms to forecast relevant parameters, and architectural approaches for IoT platforms. However, research on the implementation of IoT platforms at the commercial scale is needed to identify the platform requirements to function properly under such conditions. This article evaluates an IoT platform (IRRISENS) based on fully replicable microservices used to sense soil, crop and atmosphere parameters, interact with third-party cloud services, plan and schedule irrigation, and control irrigation water devices. The proposed IoT platform was evaluated during one growing season at four commercial-scale farms on two broadacre irrigated crops with very different water management requirements (rice and cotton). Five main requirements for IoT platforms to be used in agriculture at commercial scale were identified from implementing IRRISENS in rice and cotton production: scalability, flexibility, heterogeneity, robustness to failure and security. The platform addressed all these requirements. The results showed that the microservice approach followed in the platform is robust against both intermittent and critical failures in the field that could occur at any of the monitored sites. Further, processing or storage overload at one farm, whatever its cause, did not affect the performance of the platform for the other monitored farms.
This paper also discusses how the microservice approach can address the data heterogeneity issue when crops with different management requirements are monitored. Since there are no shared microservices among farms, the IoT platform proposed here also provides data isolation, maintaining data confidentiality for each user, which is relevant in a commercial farm scenario.
Publisher: SCITEPRESS - Science and Technology Publications
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-11-2022
DOI: 10.36227/TECHRXIV.21500235.V1
Abstract: This is an original research article entitled “Online Video Super-Resolution using Unidirectional Recurrent Model”. Considering the critical constraints around video frames and resource availability in an online setting, this paper presents a new unidirectional video super-resolution (VSR) model with a recurrent architecture specifically designed for online applications. Many recent works in the video super-resolution domain focus on improving super-resolution quality at the cost of computationally intense and input-heavy bidirectional modelling. To alleviate these drawbacks, we propose the Replenished Recurrency with Dual-Duct (R2D2) model, which adopts a unidirectional architecture to fully utilise the local features and global memory available at each time-step. The two variants presented in the paper, R2D2 and R2D2-lite, generate state-of-the-art super-resolution quality at significantly optimised efficiency. This is believed to be an important step forward in real-world applications-inspired research in the video super-resolution domain.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-04-2023
DOI: 10.36227/TECHRXIV.20494851.V2
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 17-05-2023
DOI: 10.36227/TECHRXIV.21500235.V2
Abstract: Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite an RNN's ability to propagate memory across longer sequences of frames, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bi-directional recurrent models have been suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to their heavy demands on computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely “Replenished Recurrency with Dual-Duct” (R2D2), that can be used in an online application setting. R2D2 incorporates a recurrent architecture with a sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features along with global memory, for full utilisation of the information available at each time-step. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the limited information available at each time-step compared to its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 14-08-2023
DOI: 10.36227/TECHRXIV.23896986.V1
Abstract: Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress of deep learning and its applications in VSR has led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and design decisions are primarily driven by quantitative improvements. Given VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. Such a methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present a comprehensive overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identify trends, requirements, and challenges in the domain. As a first-of-its-kind comprehensive overview of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Publisher: MDPI AG
Date: 14-12-2020
DOI: 10.3390/S20247163
Abstract: Research has shown the multitude of applications that Internet of Things (IoT), cloud computing, and forecast technologies present in every sector. In agriculture, one application is the monitoring of factors that influence crop development to assist in making crop management decisions. Research on the application of such technologies in agriculture has been mainly conducted at small experimental sites or under controlled conditions. This research has provided relevant insights and guidelines for the use of different types of sensors, application of a multitude of algorithms to forecast relevant parameters as well as architectural approaches of IoT platforms. However, research on the implementation of IoT platforms at the commercial scale is needed to identify platform requirements to properly function under such conditions. This article evaluates an IoT platform (IRRISENS) based on fully replicable microservices used to sense soil, crop, and atmosphere parameters, interact with third-party cloud services for scheduling irrigation and, potentially, control irrigation automatically. The proposed IoT platform was evaluated during one growing season at four commercial-scale farms on two broadacre irrigated crops with very different water management requirements (rice and cotton). Five main requirements for IoT platforms to be used in agriculture at commercial scale were identified from implementing IRRISENS as an irrigation support tool for rice and cotton production: scalability, flexibility, heterogeneity, robustness to failure, and security. The platform addressed all these requirements. The results showed that the microservice-based approach used is robust against both intermittent and critical failures in the field that could occur in any of the monitored sites. Further, processing or storage overload caused by datalogger malfunctioning or other reasons at one farm did not affect the platform’s performance. 
The platform was able to deal with different types of data heterogeneity. Since there are no shared microservices among farms, the IoT platform proposed here also provides data isolation, maintaining data confidentiality for each user, which is relevant in a commercial farm scenario.
Publisher: IEEE
Date: 18-07-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-04-2023
DOI: 10.36227/TECHRXIV.22591426.V1
Abstract: This paper presents a novel approach to video super-resolution (VSR) by focusing on the selection of input frames, a process critical to VSR. VSR methods typically rely on deep learning techniques that learn features from a large dataset of low-resolution (LR) and corresponding high-resolution (HR) videos and generate high-quality HR frames from any new LR input frames using the learned features. However, these methods often use as input the frames immediately neighbouring a given target frame, without considering the importance and dynamics of frames across the temporal dimension of a video. This work addresses the limitations of conventional sliding-window mechanisms by developing input frame selection algorithms. By dynamically selecting the most representative neighbouring frames based on content-aware selection measures, our proposed algorithms enable VSR models to extract more informative and accurate features that are better aligned with the target frame, leading to improved performance and higher-quality HR frames. Through an empirical study, we demonstrate that the proposed dynamic content-aware selection mechanism improves super-resolution results without any additional architectural overhead, offering a counter-intuitive yet effective alternative to the long-established trend of increasing architectural complexity to improve VSR results.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 23-08-2022
DOI: 10.36227/TECHRXIV.20494851.V1
Abstract: Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution in 360° videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality offered in the immersive experience. Deep learning Video Super-Resolution (VSR) techniques used for conventional videos could provide a promising software-based solution; however, these techniques do not tackle the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the limited availability of 360° video datasets to study. To address these issues, this paper creates a novel 360° Video Dataset (360VDS) with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
Location: Australia
No related grants have been discovered for Arbind Agrahari Baniya.