ORCID Profile
0000-0001-9754-6496
Current Organisation
University of Melbourne
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Global Information Systems | Computer Software | Software Engineering | Operating Systems | Distributed and Grid Systems | Information Systems | Distributed Computing | Artificial Intelligence and Image Processing | Information Storage, Retrieval And Management | Information Systems Organisation | Computer Communications Networks | Signal Processing | Structural Chemistry | Interorganisational Information Systems | Biological Sciences Not Elsewhere Classified | Optimisation | Physical Chemistry (Incl. Structural) | Pattern Recognition and Data Mining | Networking and Communications | Atomic, Molecular, Nuclear, Particle and Plasma Physics | Computer Software Not Elsewhere Classified | Natural Resource Management | Communications Technologies | Engineering/Technology Instrumentation | Decision Support And Group Support Systems | Transport Engineering | Simulation And Modelling | Physiology Not Elsewhere Classified | Structural Engineering | Environmental Engineering | Electrical and Electronic Engineering | Wireless Communications | Environmental Engineering Modelling | Ubiquitous Computing | Mobile Technologies | Image Processing | Analysis Of Algorithms And Complexity | Astronomy And Astrophysics | Nuclear And Particle Physics
Information processing services | Application tools and system utilities | Internet Hosting Services (incl. Application Hosting Services) | Technological and organisational innovation | Application packages | Physical sciences | Information and Communication Services not elsewhere classified | Treatments (e.g. chemicals, antibiotics) | Other | Communication services not elsewhere classified | Navy | Weather | Combined operations | Atmospheric Composition (incl. Greenhouse Gas Inventory) | Information services not elsewhere classified | Integrated (ecosystem) assessment and management | Forestry not elsewhere classified | Urogenital system and disorders | Health not elsewhere classified | Transport not elsewhere classified | Urban and Industrial Air Quality | Oil and gas | Diagnostic methods | Scientific instrumentation | Computer software and services not elsewhere classified | Information Processing Services (incl. Data Entry and Capture) | Health Status (e.g. Indicators of Well-Being) | Environmentally Sustainable Transport not elsewhere classified
Publisher: IEEE
Date: 06-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 12-2018
Publisher: IEEE
Date: 12-2018
Publisher: IEEE
Date: 11-2019
Publisher: Wiley
Date: 29-08-2022
DOI: 10.1002/SPE.3144
Abstract: Several global health incidents and a growing body of evidence show the increasing likelihood of pandemics (large‐scale outbreaks of infectious disease), which have adversely affected all aspects of human lives. It is essential to develop an analytics framework that extracts and incorporates knowledge from heterogeneous data sources to deliver insights for enhancing preparedness to combat a pandemic. Specifically, human mobility, travel history, and other transport statistics have a significant impact on the spread of any infectious disease. This article proposes a spatio‐temporal knowledge mining framework, named STOPPAGE, to model the impact of human mobility and other contextual information over large geographic areas at different temporal scales. The framework has two key modules: (i) a spatio‐temporal data and computing infrastructure using a fog/edge‐based architecture and (ii) a spatio‐temporal data analytics module to efficiently extract knowledge from heterogeneous data sources. We created a pandemic‐knowledge graph to discover correlations among mobility information and disease spread, and a deep learning architecture to predict the next hotspot zones. Further, we provide necessary support for home‐health monitoring utilizing Femtolet and fog/edge‐based solutions. Experimental evaluations on real‐life datasets related to COVID‐19 in India illustrate the efficacy of the proposed methods. STOPPAGE outperforms existing works and baseline methods in accuracy by 18–21% in predicting hotspots and significantly reduces smartphone power consumption. The scalability study shows that the STOPPAGE framework is flexible enough to analyze huge spatio‐temporal datasets and reduces the delay in predicting health status compared to existing studies.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2021
Publisher: Springer Science and Business Media LLC
Date: 02-08-2017
Publisher: Springer International Publishing
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2022
Publisher: Elsevier BV
Date: 06-2019
Publisher: IEEE
Date: 05-2020
Publisher: Wiley
Date: 02-05-2021
DOI: 10.1002/CPE.5323
Publisher: Hindawi Limited
Date: 2014
DOI: 10.1155/2014/150637
Abstract: The cochlea plays a crucial role in mammalian hearing. The basic function of the cochlea is to map sounds of different frequencies onto corresponding characteristic positions on the basilar membrane (BM). Sounds enter the fluid-filled cochlea and cause deflection of the BM due to pressure differences between the cochlear fluid chambers. These deflections travel along the cochlea, increasing in amplitude, until reaching a frequency-dependent characteristic position, and then decay away rapidly. The hair cells can detect these deflections and encode them as neural signals. Modelling the mechanics of the cochlea helps in interpreting experimental observations and can also provide predictions of the results of experiments that cannot currently be performed due to technical limitations. This paper focuses on reviewing the numerical modelling of the mechanical and electrical processes in the cochlea, which include fluid coupling, micromechanics, the cochlear amplifier, nonlinearity, and electrical coupling.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Wiley
Date: 14-06-2021
DOI: 10.1002/SPE.3012
Publisher: Elsevier BV
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2020
Publisher: Elsevier BV
Date: 08-2019
Publisher: Wiley
Date: 30-11-2020
DOI: 10.1002/CPE.6096
Abstract: Rapid increase in energy consumption is a serious problem in cloud storage systems. Data accessed in large‐scale storage systems usually exhibit temporal and spatial characteristics, which make it possible to reduce energy consumption by clustering data with similar access characteristics for storage in the same zone of cloud storage systems. Existing works usually focus only on the frequency of data access. However, widely observed phenomena show data access with seasonal and tidal characteristics in cloud storage systems. The seasonal and tidal characteristics of data access are extracted thoroughly in this paper. According to the extracted data access characteristics, energy‐aware data clustering through a machine learning algorithm (K‐ear) is proposed. K‐ear classifies data into five seasonal categories according to their seasonal access characteristics and then classifies every seasonal category into three tidal categories according to its tidal access characteristics. The 15 classified categories are stored in different storage zones with different energy and performance modes. Simulation experiments using CloudSimDisk with the constructed mathematical models demonstrate that the proposed K‐ear algorithm is more energy‐efficient than the default data clustering algorithms in Hadoop and the classical data clustering storage strategy based on data access frequency (Striping‐Based Energy‐Aware Strategy).
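The seasonal-then-tidal classification idea behind K‐ear (5 seasonal categories × 3 tidal categories = 15 storage zones) can be sketched roughly as follows. This is an illustrative sketch only: the feature names, thresholds, and category boundaries are invented for demonstration and are not the paper's algorithm.

```python
# Hypothetical sketch of seasonal/tidal access classification (not K-ear
# itself): map a data object's access history to one of 15 storage zones.

def seasonal_category(monthly_counts):
    """Pick the dominant season (0-3), or 4 when no season clearly peaks.
    monthly_counts: 12 access counts, January first. Thresholds assumed."""
    seasons = [sum(monthly_counts[i:i + 3]) for i in range(0, 12, 3)]
    total = sum(seasons)
    if total == 0 or max(seasons) < 0.5 * total:  # no clear seasonal peak
        return 4
    return seasons.index(max(seasons))

def tidal_category(hourly_counts):
    """Classify the daily tide: 0 = day-peaked, 1 = night-peaked, 2 = flat.
    hourly_counts: 24 access counts, hour 0 first."""
    day = sum(hourly_counts[8:20])          # accesses between 08:00-20:00
    night = sum(hourly_counts) - day
    if day > 2 * night:
        return 0
    if night > 2 * day:
        return 1
    return 2

def storage_zone(monthly_counts, hourly_counts):
    """Combine both classifications into one of 5 x 3 = 15 zones."""
    return seasonal_category(monthly_counts) * 3 + tidal_category(hourly_counts)
```

For example, an object accessed only in April–June during daytime hours would land in a "season 1, day-peaked" zone, which a storage system could keep on higher-performance disks only during that season.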
Publisher: Elsevier BV
Date: 09-2018
Publisher: Association for Computing Machinery (ACM)
Date: 19-11-2019
DOI: 10.1145/3241737
Abstract: The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.
Publisher: Springer International Publishing
Date: 27-07-2018
Publisher: Wiley
Date: 27-05-2021
DOI: 10.1002/BSE.2832
Abstract: The idea that green banking disclosure leads to increased firm value has rightly been considered over‐simplistic. This paper builds on key prior insights by investigating whether combining green disclosure with other contextual factors, such as non‐performing loans, provides additional insight into the complex green disclosure–firm value relationship in a regulatory setting where green law has recently been enacted for the banking industry. We present an analysis of seven years of data sourced from listed banks in Bangladesh (2008–2014), with data analysed using multiple regression. Our findings indicate that, while green disclosure has a positive effect on the overall firm value of banks, this positive effect is negatively moderated by banks' non‐performing loans. This research contributes to knowledge by showing that green disclosure alone is insufficient for creating market value for banks. Additional contextual matters need attention to understand the impact of green disclosure in contributing to increased market value for banks.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Elsevier BV
Date: 08-2019
Publisher: Springer Science and Business Media LLC
Date: 08-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2021
Publisher: Elsevier BV
Date: 04-2020
Publisher: Wiley
Date: 10-08-2023
DOI: 10.1002/SPE.3248
Abstract: Cloud computing has become a critical infrastructure for modern society, like electric power grids and roads. As the backbone of the modern economy, it offers subscription‐based computing services anytime, anywhere, on a pay‐as‐you‐go basis. Its use is growing exponentially with the continued development of new classes of applications driven by a huge number of emerging networked devices. However, the success of Cloud computing has created a new global energy challenge, as it comes at the cost of vast energy usage. Currently, data centres hosting Cloud services world‐wide consume more energy than most countries. Globally, by 2025, they are projected to consume 20% of global electricity and emit up to 5.5% of the world's carbon emissions. In addition, a significant part of the energy consumed is transformed into heat, which leads to operational problems, including a reduction in system reliability and the life expectancy of devices, and escalation in cooling requirements. Therefore, for future generations of Cloud computing to address the environmental and operational consequences of such significant energy usage, they must become energy‐efficient and environmentally sustainable while continuing to deliver high‐quality services. In this article, we propose a vision for a learning‐centric approach to the integrated management of new generation Cloud computing environments to reduce their energy consumption and carbon footprint while delivering service quality guarantees. We identify the dimensions and key issues of integrated resource management and our envisioned approaches to address them. We present a conceptual architecture for energy‐efficient new generation Clouds and early results on the integrated management of resources and workloads that evidence its potential benefits towards energy efficiency and sustainability.
Publisher: Elsevier BV
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2023
Publisher: Association for Computing Machinery (ACM)
Date: 30-11-2018
DOI: 10.1145/3186592
Abstract: The fog computing paradigm has drawn significant research interest as it focuses on bringing cloud-based services closer to Internet of Things (IoT) users in an efficient and timely manner. Most of the physical devices in the fog computing environment, commonly named fog nodes, are geographically distributed, resource constrained, and heterogeneous. To fully leverage the capabilities of the fog nodes, large-scale applications that are decomposed into interdependent Application Modules can be deployed in an orderly way over the nodes based on their latency sensitivity. In this article, we propose a latency-aware Application Module management policy for the fog environment that meets the diverse service delivery latency requirements and the amount of data signals to be processed per unit of time for different applications. The policy aims to ensure applications' Quality of Service (QoS) in satisfying service delivery deadlines and to optimize resource usage in the fog environment. We model and evaluate our proposed policy in an iFogSim-simulated fog environment. Results of the simulation studies demonstrate significant improvement in performance over alternative latency-aware strategies.
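The core placement idea (deploy modules over fog nodes "based on their latency sensitivity") can be illustrated with a simple greedy heuristic. This is a hypothetical sketch, not the article's actual policy: the module/node attributes and the tightest-deadline-first ordering are assumptions chosen for demonstration.

```python
# Illustrative latency-aware placement sketch (assumed data model, not
# the iFogSim policy): modules with tight deadlines grab the fog nodes
# closest to the data source; the rest fall back to farther nodes.

def place_modules(modules, nodes):
    """modules: list of (name, deadline_ms, cpu_demand) tuples.
    nodes: list of dicts with 'name', 'hop_distance', 'cpu_free'.
    Returns {module_name: node_name}; raises if capacity runs out."""
    placement = {}
    # Most latency-sensitive (tightest deadline) modules are placed first.
    for name, deadline, demand in sorted(modules, key=lambda m: m[1]):
        # Prefer the node nearest to the data source with spare capacity.
        for node in sorted(nodes, key=lambda n: n['hop_distance']):
            if node['cpu_free'] >= demand:
                node['cpu_free'] -= demand
                placement[name] = node['name']
                break
        else:
            raise RuntimeError(f'no capacity left for module {name}')
    return placement
```

With an edge node of capacity 2 at one hop and a cloud node of capacity 10 at five hops, a deadline-50 ms "filter" module lands on the edge while a deadline-500 ms "analytics" module is pushed to the cloud.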
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2021
Publisher: Springer Science and Business Media LLC
Date: 19-01-2018
Publisher: Association for Computing Machinery (ACM)
Date: 22-07-2020
DOI: 10.1145/3403955
Abstract: The Internet of Things (IoT) paradigm is being rapidly adopted for the creation of smart environments in various domains. The IoT-enabled cyber-physical systems associated with smart city, healthcare, Industry 4.0 and Agtech handle a huge volume of data and require data processing services from different types of applications in real time. The Cloud-centric execution of IoT applications barely meets such requirements as the Cloud datacentres reside at a multi-hop distance from the IoT devices. Fog computing, an extension of Cloud computing at the edge network, can execute these applications closer to data sources. Thus, Fog computing can improve application service delivery time and resist network congestion. However, the Fog nodes are highly distributed and heterogeneous, and most of them are constrained in resources and spatial sharing. Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes. In this work, we investigate the existing application management strategies in Fog computing and review them in terms of architecture, placement and maintenance. Additionally, we propose a comprehensive taxonomy and highlight the research gaps in Fog-based application management. We also discuss a perspective model and provide future research directions for further improvement of application management in Fog computing.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2020
Publisher: Elsevier BV
Date: 12-2018
Publisher: Elsevier BV
Date: 04-2020
Publisher: IEEE
Date: 12-2019
Publisher: Elsevier BV
Date: 11-2019
Publisher: Wiley
Date: 07-06-2022
DOI: 10.1002/SPE.3113
Abstract: Self‐adaptive security methods have been extensively leveraged for securing software systems and users from runtime threats in online and elastic environments, such as the cloud. The existing solutions treat security as an aggregated quality by enforcing "one service for all" without considering the explicit security requirements of each asset or the costs associated with security. Dealing with the security of assets in ultra‐large environments calls for rethinking the way we select and compose services—considering not only the services but the underlying supporting computational resources in the process. We motivate the need for an asset‐centric, self‐adaptive security framework that selects and allocates services and underlying resources in the cloud. The solution leverages learning algorithms and market‐inspired approaches to dynamically manage changes in the runtime security goals/requirements of assets with the provision of suitable services and resources, while catering for monetary and computational constraints. The proposed framework aims to inform the self‐adaptive security efforts of security researchers and practitioners operating in dynamic large‐scale environments, such as the Cloud. To illustrate the utility of the proposed framework, it is evaluated using simulation on an application‐based scenario involving cloud‐based storage and security services.
Publisher: Wiley
Date: 16-04-2019
DOI: 10.1002/CPE.5306
Publisher: Association for Computing Machinery (ACM)
Date: 31-12-2019
DOI: 10.1145/3317604
Abstract: With the rapid development of cloud computing, various types of cloud services are available in the marketplace. However, it remains a significant challenge for cloud users to find suitable services for two major reasons: (1) Providers are unable to offer services in complete accordance with their declared Service Level Agreements, and (2) it is difficult for customers to describe their requirements accurately. To help users select cloud services efficiently, this article presents a Trust enabled Self-Learning Agent Model for service Matching (TSLAM). TSLAM is a multi-agent-based three-layered cloud service market model, in which different categories of agents represent the corresponding cloud entities to perform market behaviors. The unique feature of brokers is that they are not only the service recommenders but also the participants of market competition. We equip brokers with a learning module enabling them to capture implicit service demands and find user preferences. Moreover, a distributed and lightweight trust model is designed to help cloud entities make service decisions. Extensive experiments prove that TSLAM is able to optimize the cloud service matching process and compared to the state-of-the-art studies, TSLAM improves user satisfaction and the transaction success rate by at least 10%.
Publisher: Elsevier BV
Date: 06-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2023
Publisher: Association for Computing Machinery (ACM)
Date: 16-10-2019
DOI: 10.1145/3342103
Abstract: This article provides a systematic review of cloud pricing in an interdisciplinary approach. It examines many historical cases of pricing in practice and tracks down multiple roots of pricing in research. The aim is to help both cloud service providers (CSPs) and cloud customers capture the essence of cloud pricing when they need to make a critical decision, either to achieve competitive advantages or to manage cloud resources effectively. Currently, the number of available pricing schemes in the cloud market is overwhelming. It is an intricate issue to understand these schemes and associated pricing models clearly because they involve several domains of knowledge, such as cloud technologies, microeconomics, operations research, and value theory. Some earlier studies have introduced this topic unsystematically. Their approaches inevitably lead to much confusion for many cloud decision-makers. To address their weaknesses, we present a comprehensive taxonomy of cloud pricing, which is driven by a framework of three fundamental pricing strategies that are built on nine cloud pricing categories. These categories can be further mapped onto a total of 60 pricing models. Many of the pricing models have already been adopted by CSPs. Others are widespread in other industries. We give descriptions of these model categories and highlight both advantages and disadvantages. Moreover, this article offers an extensive survey of many cloud pricing models that were proposed by researchers during the past decade. Based on the survey, we identify four trends of cloud pricing and the general direction, which is moving from intrinsic value per physical box to extrinsic value per serverless sandbox. We conclude that a hyper-converged cloud resource pool supported by cloud orchestration, virtual machines, Open Application Programming Interfaces, and serverless sandboxes will drive the future of cloud pricing.
Publisher: Elsevier BV
Date: 11-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2018
Publisher: IEEE
Date: 11-2019
Publisher: Association for Computing Machinery (ACM)
Date: 06-02-2020
DOI: 10.1145/3368036
Abstract: Workflows are an application model that enables the automated execution of multiple interdependent and interconnected tasks. They are widely used by the scientific community to manage the distributed execution and dataflow of complex simulations and experiments. As the popularity of scientific workflows continues to rise, and their computational requirements continue to increase, the emergence and adoption of multi-tenant computing platforms that offer the execution of these workflows as a service becomes widespread. This article discusses the scheduling and resource provisioning problems particular to this type of platform. It presents a detailed taxonomy and a comprehensive survey of the current literature and identifies future directions to foster research in the field of multiple workflow scheduling in multi-tenant distributed computing systems.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: IEEE
Date: 05-2020
Publisher: IEEE
Date: 07-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2022
Publisher: IEEE
Date: 12-2017
Publisher: Elsevier BV
Date: 11-2018
Publisher: Elsevier BV
Date: 08-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Wiley
Date: 31-01-2022
DOI: 10.1002/SPE.3069
Abstract: Microservices have gained wide recognition and acceptance in software industries as an emerging architectural style for autonomous, scalable and more reliable computing. A critical problem related to microservices is reasoning about the suitable granularity level of a microservice (i.e., when and how to merge or decompose microservices). Although scalability is pronounced as one of the major factors for adoption of microservices, there is a general gap of approaches that systematically analyse the dimensions and metrics, which are important for scalability‐aware granularity adaptation decisions. To the best of our knowledge, the state‐of‐art in reasoning about microservice granularity adaptation is neither: (1) driven by microservice‐specific scalability dimensions and metrics nor (2) follow systematic scalability analysis to make scalability‐aware adaptation decisions. In this article, we address the aforementioned problems using a two‐fold contribution. Firstly, we contribute to a working catalogue of microservice‐specific scalability dimensions and metrics. Secondly, we describe a novel application of scalability goal‐obstacle analysis for the context of reasoning about microservice granularity adaptation. We analyse both contributions by comparing their usage on a hypothetical microservice architecture against ad‐hoc scalability assessment for the same architecture. This analysis shows how both contributions can aid making scalability‐aware granularity adaptation decisions.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Association for Computing Machinery (ACM)
Date: 17-04-2020
DOI: 10.1145/3378447
Abstract: Containers, as a lightweight application virtualization technology, have recently gained immense popularity in mainstream cluster management systems like Google Borg and Kubernetes. Prevalently adopted by these systems for task deployments of diverse workloads such as big data, web services, and IoT, they support agile application deployment, environmental consistency, OS distribution portability, application-centric management, and resource isolation. Although most of these systems are mature with advanced features, their optimization strategies are still tailored to the assumption of a static cluster. Elastic compute resources would enable heterogeneous resource management strategies in response to the dynamic business volume for various types of workloads. Hence, we propose a heterogeneous task allocation strategy for cost-efficient container orchestration through resource utilization optimization and elastic instance pricing with three main features. The first one is to support heterogeneous job configurations to optimize the initial placement of containers into existing resources by task packing. The second one is cluster size adjustment to meet the changing workload through autoscaling algorithms. The third one is a rescheduling mechanism to shut down underutilized VM instances for cost saving and reallocate the relevant jobs without losing task progress. We evaluate our approach in terms of cost and performance on the Australian National Cloud Infrastructure (Nectar). Our experiments demonstrate that the proposed strategy could reduce the overall cost by 23% to 32% for different types of cloud workload patterns when compared to the default Kubernetes framework.
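The "task packing" and "scale out" ingredients of such a strategy can be sketched with a classic first-fit-decreasing heuristic. This is a minimal illustration under assumed inputs (uniform VM sizes, a made-up hourly price), not the evaluated Nectar implementation, which additionally handles autoscaling triggers and rescheduling.

```python
# Hypothetical packing sketch (assumed VM model and price): pack container
# CPU demands into as few identically sized VMs as possible, provisioning
# a new VM only when no existing one fits the container.

def pack_containers(demands, vm_capacity):
    """First-fit decreasing bin packing.
    demands: container CPU demands; vm_capacity: CPU per VM.
    Returns the remaining free capacity of each provisioned VM."""
    vms = []  # free capacity of each VM provisioned so far
    for d in sorted(demands, reverse=True):  # largest containers first
        for i, free in enumerate(vms):
            if free >= d:          # fits an existing VM: pack it there
                vms[i] = free - d
                break
        else:
            vms.append(vm_capacity - d)  # scale out with a fresh VM
    return vms

def hourly_cost(vms, price_per_vm=0.10):
    """Cost is simply the number of running VMs times an assumed price."""
    return len(vms) * price_per_vm
```

Packing `[3, 3, 2, 2]` into capacity-4 VMs yields three VMs; the cost-saving rescheduling step in the paper would then watch for VMs whose free capacity stays high and drain them.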
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2021
Publisher: Association for Computing Machinery (ACM)
Date: 31-12-2021
DOI: 10.1145/3529162
Abstract: In Dynamic Service Composition (DSC), an application can be dynamically composed using web services to achieve its functional and Quality of Service (QoS) goals. DSC is a relatively mature area of research that crosscuts autonomous and services computing. Complex autonomous and self-adaptive computing paradigms (e.g., multi-tenant cloud services, mobile/smart services, services discovery and composition in intelligent environments such as smart cities) have been leveraging DSC to dynamically and adaptively maintain the desired QoS and cost and to stabilize long-lived software systems. While DSC is fundamentally known to be an NP-hard problem, systematic attempts to analyze its scalability have been limited, if not absent, though such analysis is of paramount importance for effective, efficient, and stable operation. This article reports on a new application of goal-modeling, providing a systematic technique that can support DSC designers and architects in identifying DSC-relevant characteristics and metrics that can potentially affect the scalability goals of a system. The article then applies the technique to two different approaches for QoS-aware dynamic service composition, describing two detailed exemplars that illustrate its application. The exemplars aim to provide researchers and practitioners with guidance and transferable knowledge in situations where the scalability analysis may not be straightforward. The contributions provide architects and designers of QoS-aware dynamic service composition with the fundamentals for assessing the scalability of their own solutions, along with goal models and a list of application domain characteristics and metrics that might be relevant to other solutions.
Our experience has shown that the technique was able to identify, in both exemplars, application domain characteristics and metrics that had been overlooked in previous scalability analyses of these DSC approaches, some of which indeed limited their scalability. It has also shown that the experience and knowledge can be transferable: the first exemplar was used as an example to inform and ease the work of applying the technique in the second one, reducing the time to create the model, even for a non-expert.
Publisher: Elsevier BV
Date: 2020
Publisher: Association for Computing Machinery (ACM)
Date: 11-12-2017
DOI: 10.1145/3136623
Abstract: Storage as a Service (StaaS) is a vital component of cloud computing by offering the vision of a virtually infinite pool of storage resources. It supports a variety of cloud-based data store classes in terms of availability, scalability, ACID (Atomicity, Consistency, Isolation, Durability) properties, data models, and price options. Application providers deploy these storage classes across different cloud-based data stores not only to tackle the challenges arising from reliance on a single cloud-based data store but also to obtain higher availability, lower response time, and more cost efficiency. Hence, in this article, we first discuss the key advantages and challenges of data-intensive applications deployed within and across cloud-based data stores. Then, we provide a comprehensive taxonomy that covers key aspects of cloud-based data store: data model, data dispersion, data consistency, data transaction service, and data management cost. Finally, we map various cloud-based data stores projects to our proposed taxonomy to validate the taxonomy and identify areas for future research.
Publisher: Springer International Publishing
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: ACM
Date: 29-06-2020
Publisher: Elsevier BV
Date: 02-2021
Publisher: Wiley
Date: 08-07-2020
DOI: 10.1002/CPE.5926
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2022
Publisher: IEEE
Date: 12-2019
Publisher: IEEE
Date: 12-2018
Publisher: Wiley
Date: 17-11-2022
DOI: 10.1002/SPE.3055
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Wiley
Date: 14-08-2018
DOI: 10.1002/CPE.4834
Publisher: Elsevier BV
Date: 09-2019
Publisher: Elsevier BV
Date: 09-2019
Publisher: IEEE
Date: 12-2018
Publisher: SCITEPRESS - Science and Technology Publications
Date: 2018
Publisher: IEEE
Date: 07-2019
Publisher: Springer Singapore
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: Wiley
Date: 07-03-2017
DOI: 10.1002/CPE.4126
Publisher: Elsevier BV
Date: 12-2019
Publisher: Wiley
Date: 15-05-2017
DOI: 10.1002/CPE.4125
Publisher: Wiley
Date: 22-10-2019
DOI: 10.1002/SPE.2755
Publisher: Wiley
Date: 06-03-2017
DOI: 10.1002/CPE.4123
Publisher: IEEE
Date: 12-2018
Publisher: IEEE
Date: 08-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2020
Publisher: IEEE
Date: 12-2019
Publisher: Association for Computing Machinery (ACM)
Date: 31-01-2022
DOI: 10.1145/3510415
Abstract: Containerization is a lightweight application virtualization technology, providing high environmental consistency, operating system distribution portability, and resource isolation. Existing mainstream cloud service providers have prevalently adopted container technologies in their distributed system infrastructures for automated application management. To handle the automation of deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration is proposed as an essential research problem. However, the highly dynamic and diverse feature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Machine learning algorithms are accordingly employed by container orchestration systems for behavior modeling and prediction of multi-dimensional performance metrics. Such insights could further improve the quality of resource provisioning decisions in response to the changing workloads under complex environments. In this article, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify the current researches by their common features. Moreover, the evolution of machine learning-based container orchestration technologies from the year 2016 to 2021 has been designed based on objectives and metrics. A comparative analysis of the reviewed techniques is conducted according to the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Elsevier BV
Date: 07-2018
Publisher: Association for Computing Machinery (ACM)
Date: 31-01-2022
DOI: 10.1145/3510412
Abstract: Serverless computing has emerged as an attractive deployment option for cloud applications in recent times. The unique features of this computing model include rapid auto-scaling, strong isolation, fine-grained billing options, and access to a massive service ecosystem, which autonomously handles resource management decisions. This model is increasingly being explored for deployments in geographically distributed edge and fog computing networks as well, due to these characteristics. Effective management of computing resources has always gained a lot of attention among researchers. The need to automate the entire process of resource provisioning, allocation, scheduling, monitoring, and scaling has resulted in the need for specialized focus on resource management under the serverless model. In this article, we identify the major aspects covering the broader concept of resource management in serverless environments and propose a taxonomy of elements that influence these aspects, encompassing characteristics of system design, workload attributes, and stakeholder expectations. We take a holistic view on serverless environments deployed across edge, fog, and cloud computing networks. We also analyse existing works discussing aspects of serverless resource management using this taxonomy. This article further identifies gaps in literature and highlights future research directions for improving capabilities of this computing model.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2019
Publisher: Springer Science and Business Media LLC
Date: 26-04-2023
DOI: 10.1007/S40804-023-00284-4
Abstract: A policy shift from soft law to hard law rests on assumptions about motivating compliance. The basic idea is that people comply with soft law for personal, moral reasons but are motivated to comply with hard law by self-interested fear. While logically this is obvious, there is also support for the view that self-determination, organisational justice and social influence are better at motivating compliance in certain contexts. Currently, there is a global policy shift moving corporate social responsibility (CSR) from a voluntary, organisation-based initiative to a practice mandated by law. This shift provides an opportunity to investigate the phenomenon of motivation in law. The current study investigates how the shift to mandatory CSR impacts motivation. Based on an analysis of the programs of 12 firms in Indonesia, we find that CSR hard law appears to motivate CSR without displacing voluntary moral initiatives.
Publisher: IEEE
Date: 12-2018
Publisher: Wiley
Date: 07-10-2021
DOI: 10.1002/SPE.3039
Abstract: Quantum computing (QC) is an emerging paradigm with the potential to offer significant computational advantage over conventional classical computing by exploiting quantum‐mechanical principles such as entanglement and superposition. It is anticipated that this computational advantage of QC will help to solve many complex and computationally intractable problems in several application domains such as drug design, data science, clean energy, finance, industrial chemical development, secure communications, and quantum chemistry. In recent years, tremendous progress in both quantum hardware development and quantum software/algorithm has brought QC much closer to reality. Indeed, the demonstration of quantum supremacy marks a significant milestone in the Noisy Intermediate Scale Quantum (NISQ) era—the next logical step being the quantum advantage whereby quantum computers solve a real‐world problem much more efficiently than classical computing. As the quantum devices are expected to steadily scale up in the next few years, quantum decoherence and qubit interconnectivity are two of the major challenges to achieve quantum advantage in the NISQ era. QC is a highly topical and fast‐moving field of research with significant ongoing progress in all facets. A systematic review of the existing literature on QC will be invaluable to understand the state‐of‐the‐art of this emerging field and identify open challenges for the QC community to address in the coming years. This article presents a comprehensive review of QC literature and proposes taxonomy of QC. The proposed taxonomy is used to map various related studies to identify the research gaps. A detailed overview of quantum software tools and technologies, post‐quantum cryptography, and quantum computer hardware development captures the current state‐of‐the‐art in the respective areas. The article identifies and highlights various open challenges and promising future directions for research and innovation in QC.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2019
Publisher: Wiley
Date: 26-07-2018
DOI: 10.1002/SPE.2623
Publisher: Wiley
Date: 14-08-2018
DOI: 10.1002/SPE.2628
Publisher: Wiley
Date: 03-04-2019
DOI: 10.1002/CPE.5221
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2021
Publisher: Association for Computing Machinery (ACM)
Date: 17-07-2023
DOI: 10.1145/3592598
Abstract: The Fog computing paradigm utilises distributed, heterogeneous and resource-constrained devices at the edge of the network for efficient deployment of latency-critical and bandwidth-hungry IoT application services. Moreover, MicroService Architecture (MSA) is increasingly adopted to keep up with the rapid development and deployment needs of fast-evolving IoT applications. Due to the fine-grained modularity of the microservices and their independently deployable and scalable nature, MSA exhibits great potential in harnessing Fog and Cloud resources, thus giving rise to novel paradigms like Osmotic computing. The loosely coupled nature of the microservices, aided by the container orchestrators and service mesh technologies, enables the dynamic composition of distributed and scalable microservices to achieve diverse performance requirements of the IoT applications using distributed Fog resources. To this end, efficient placement of microservices plays a vital role, and scalable placement algorithms are required to utilise the said characteristics of the MSA while overcoming novel challenges introduced by the architecture. Thus, we present a comprehensive taxonomy of recent literature on microservices-based IoT applications placement within Fog computing environments. Furthermore, we organise multiple taxonomies to capture the main aspects of the placement problem, analyse and classify related works, identify research gaps within each category, and discuss future research directions.
Publisher: IEEE
Date: 05-2020
Publisher: Elsevier BV
Date: 07-2019
Publisher: ACM
Date: 02-12-2019
Publisher: IEEE
Date: 05-2020
Publisher: Springer Science and Business Media LLC
Date: 06-02-2018
Publisher: Springer Singapore
Date: 2020
Publisher: Elsevier BV
Date: 08-2019
Publisher: Elsevier BV
Date: 08-2020
Publisher: IEEE
Date: 12-2019
Publisher: Elsevier BV
Date: 07-2020
Publisher: Elsevier BV
Date: 04-2019
Publisher: IEEE
Date: 06-2018
Publisher: Wiley
Date: 09-12-2022
DOI: 10.1002/SPY2.200
Abstract: Traditional and lightweight cryptography primitives and protocols are insecure against quantum attacks. Thus, a real‐time application using traditional or lightweight cryptography primitives and protocols does not ensure full‐proof security. Post‐quantum cryptography is important for the internet of things (IoT) due to its security against quantum attacks. This paper offers a broad literature analysis of post‐quantum cryptography for IoT networks, including the challenges and research directions to adopt in real‐time applications. The work draws focus towards post‐quantum cryptosystems that are useful for resource‐constraint devices. Further, those quantum attacks are surveyed, which may occur over traditional and lightweight cryptographic primitives.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2021
Publisher: Elsevier BV
Date: 04-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Springer Science and Business Media LLC
Date: 09-03-2018
Publisher: Springer Singapore
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2021
Publisher: Springer Singapore
Date: 2017
Publisher: Elsevier BV
Date: 07-2022
Publisher: Association for Computing Machinery (ACM)
Date: 04-01-2018
DOI: 10.1145/3150224
Abstract: High performance computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to get the best of the on-premise and cloud resources—steady (and sensitive) workloads can run on on-premise resources and peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, which range from how to extract the best performance of an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This article brings a survey and taxonomy of efforts in HPC cloud and a vision on what we believe is ahead of us, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast increasing wave of new HPC applications coming from big data and artificial intelligence.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Springer Singapore
Date: 17-10-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Association for Computing Machinery (ACM)
Date: 13-07-2018
DOI: 10.1145/3148149
Abstract: Web application providers have been migrating their applications to cloud data centers, attracted by the emerging cloud computing paradigm. One of the appealing features of the cloud is elasticity. It allows cloud users to acquire or release computing resources on demand, which enables web application providers to automatically scale the resources provisioned to their applications without human intervention under a dynamic workload to minimize resource cost while satisfying Quality of Service (QoS) requirements. In this article, we comprehensively analyze the challenges that remain in auto-scaling web applications in clouds and review the developments in this field. We present a taxonomy of auto-scalers according to the identified challenges and key properties. We analyze the surveyed works and map them to the taxonomy to identify the weaknesses in this field. Moreover, based on the analysis, we propose new future directions that can be explored in this area.
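The elasticity loop this abstract surveys, acquiring or releasing resources on demand under a dynamic workload, can be illustrated with a minimal threshold-based auto-scaler. This is a hedged toy sketch, not any surveyed system's algorithm; the function name, utilisation thresholds, and instance bounds are all illustrative assumptions:

```python
def autoscale(current_instances, cpu_utilisation,
              upper=0.75, lower=0.30, min_instances=1, max_instances=20):
    """Toy threshold-based auto-scaler: one scaling decision per monitoring interval."""
    if cpu_utilisation > upper and current_instances < max_instances:
        return current_instances + 1   # scale out to protect QoS
    if cpu_utilisation < lower and current_instances > min_instances:
        return current_instances - 1   # scale in to reduce resource cost
    return current_instances           # utilisation within the comfort band

# e.g. sustained high utilisation triggers scale-out, low utilisation scale-in
```

Real auto-scalers surveyed in such taxonomies are far richer (predictive models, cooldown periods, SLA-aware sizing), but this break-even of QoS against cost is the core decision they automate.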
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: IEEE
Date: 07-2018
Publisher: Springer International Publishing
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 24-10-2017
Publisher: Springer Singapore
Date: 04-11-2018
Publisher: Elsevier BV
Date: 06-2019
Publisher: Springer Science and Business Media LLC
Date: 26-08-2020
DOI: 10.1186/S13677-020-00188-5
Abstract: Cloud repository is one of the most important services afforded by Cloud Computing where information is preserved, maintained, archived in distant servers and made available to the users over the Internet. Provided with the cloud repository facilities, customers can organize themselves as a cluster and distribute information with one another. In order to allow public integrity auditing on the information stored in semi-trusted cloud server, customers compute the signatures for every chunk of the shared information. When a malicious client is repudiated from the group, the chunks that were outsourced to the cloud server by this renounced customer need to be verified and re-signed by the customer present in the cluster (i.e., the straightforward approach) which results in huge transmission and reckoning cost for the customer. In order to minimize the burden of customers present in the cluster, in the existing scheme Panda, the semi-trusted Cloud Service Provider (CSP) is allowed to compute the Re-sign key. Further, the CSP audits and re-signs the revoked customer chunks by utilizing the Re-sign key. So, it is easy for the CSP by colluding with the revoked customer to find the secret keys of the existing customer. We introduce a novel Collusion Resistant User Revocable Public Auditing of Shared Data in Cloud (CRUPA) by making use of the concept of regression technique. In order to secure the secret keys of the existing customers from the CSP, we have allowed the information proprietor to compute the Re-sign key using the regression technique. Whenever the information proprietor revokes the customer from the cluster, the information proprietor computes the Re-sign key using the regression technique and sends to the CSP. Further, the CSP audits and re-signs the revoked customer chunks using the Re-sign key. The Re-sign key computed by the information proprietor using regression method is highly secure and the malicious CSP cannot find the private information of the customers in the cluster. Besides, our mechanism achieves significant improvement in the computation cost of the Re-sign key by information proprietor. Further, the proposed scheme is collusion resistant, supports effective and secure customer repudiation, multi-information proprietor batch auditing and is scalable.
Publisher: Springer Science and Business Media LLC
Date: 27-09-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: ACM
Date: 02-12-2019
Publisher: Springer International Publishing
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: Elsevier BV
Date: 05-2019
Publisher: Elsevier BV
Date: 05-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2021
Publisher: Wiley
Date: 30-01-2019
DOI: 10.1002/CPE.5164
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2022
Publisher: Elsevier BV
Date: 10-2019
Publisher: Elsevier BV
Date: 08-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2022
Publisher: Elsevier BV
Date: 02-2018
Publisher: Wiley
Date: 15-05-2017
DOI: 10.1002/CPE.4169
Publisher: IEEE
Date: 05-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Publisher: Wiley
Date: 17-12-2018
DOI: 10.1002/SPE.2673
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2020
Publisher: Elsevier BV
Date: 09-2018
Publisher: MDPI AG
Date: 31-10-2020
DOI: 10.3390/PUBLICATIONS8040046
Abstract: Criticism about the practical usefulness of academic accounting research produced in university business schools has been growing for some time. Due to accounting being an applied social science, many stakeholders question the relevance and value of research published in accounting journals to the accounting profession, practitioners and society in general. This paper highlights the various areas of criticism and discusses factors which underline the issue. While most of the criticism is anecdotal, this study sets about to empirically explore practitioners’ perception of academia, and research published in academic accounting journals. To better understand the situation in accounting, a comparison of two other applied academic disciplines is undertaken, involving medical and engineering practitioners. The study found that for accounting there were major differences in the sourcing of information, and significant differences between the other two applied fields with respect to the utilisation and the need for academic material. The findings lead to the conclusion that academic accounting researchers are now nearly totally divorced from the real-world profession of accounting. If we were to take a singular view on the purpose of academic accounting research, then the current situation could leave accounting researchers very vulnerable to adverse decisions with respect to the allocation of future government funding. The conclusions of this paper propose a series of thought-provoking questions about the current state of accounting research, in the hope that it will stimulate debate and generate responses from the accounting community and other stakeholders.
Publisher: Elsevier BV
Date: 05-2020
Publisher: Elsevier BV
Date: 02-2018
Publisher: Elsevier BV
Date: 10-2019
Publisher: Association for Computing Machinery (ACM)
Date: 15-12-2023
DOI: 10.1145/3544836
Abstract: Fog computing, as a distributed paradigm, offers cloud-like services at the edge of the network with low latency and high-access bandwidth to support a diverse range of IoT application scenarios. To fully utilize the potential of this computing paradigm, scalable, adaptive, and accurate scheduling mechanisms and algorithms are required to efficiently capture the dynamics and requirements of users, IoT applications, environmental properties, and optimization targets. This article presents a taxonomy of recent literature on scheduling IoT applications in Fog computing. Based on our new classification schemes, current works in the literature are analyzed, research gaps of each category are identified, and respective future directions are described.
Publisher: Elsevier BV
Date: 2020
Publisher: Hindawi Limited
Date: 02-04-2021
DOI: 10.1155/2021/5563312
Abstract: The cloud-fog-edge hybrid system is the evolution of the traditional centralized cloud computing model. Through the combination of different levels of resources, it is able to handle service requests from terminal users with a lower latency. However, it is accompanied by greater uncertainty, unreliability, and instability due to the decentralization and regionalization of service processing, as well as the unreasonable and unfairness in resource allocation, task scheduling, and coordination, caused by the autonomy of node distribution. Therefore, this paper introduces blockchain technology to construct a trust-enabled interaction framework in a cloud-fog-edge environment, and through a double-chain structure, it improves the reliability and verifiability of task processing without a big management overhead. Furthermore, in order to fully consider the reasonability and load balance in service coordination and task scheduling, Berger’s model and the conception of service justice are introduced to perform reasonable matching of tasks and resources. We have developed a trust-based cloud-fog-edge service simulation system based on iFogSim, and through a large number of experiments, the performance of the proposed model is verified in terms of makespan, scheduling success rate, latency, and user satisfaction with some classical scheduling models.
Publisher: Elsevier BV
Date: 06-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Wiley
Date: 11-11-2018
DOI: 10.1002/SPE.2660
Publisher: Wiley
Date: 24-02-2022
DOI: 10.1002/SPE.3078
Abstract: Internet of Things (IoT) has a pivotal role in developing intelligent and computational solutions to facilitate varied real‐life applications. To execute high‐end computations and data analytics, IoT and cloud‐based solutions play the most significant role. However, frequent communication with long distant cloud servers is not a delay‐aware and energy‐efficient solution while providing time‐critical applications such as healthcare. This article explores the possibilities and opportunities of integrating cloud technology with fog and edge‐based computing to provide healthcare services to users in exigency. Here, we propose an end‐to‐end framework named RESCUE (enabling green healthcare services using integrated iot‐edge‐fog‐cloud computing environments), consisting of an efficient spatio‐temporal data analytics module for efficient information sharing, spatio‐temporal data analysis to predict the path for users to reach the destination (healthcare center or relief camps) with minimum delay in the time of exigency (say, natural disaster). This module analyzes the collected information through crowd‐sourcing and assists the user by extracting optimal path postdisaster when many regions are nonreachable. Our work is different from the existing literature in varied aspects: it analyses the context and semantics by augmenting real‐time volunteered geographical information (VGI) and refines it. Furthermore, the novel path prediction module incorporates such VGI instances and predicts routes in emergencies avoiding all possible risks. Also, the design of development of a latency‐aware, power‐aware data‐driven analytics system helps to resolve any spatio‐temporal query more efficiently compared to the existing works for any time‐critical application. The experimental and simulation results outperform the baselines in terms of accuracy, delay, and power consumption.
Publisher: Wiley
Date: 11-11-2018
DOI: 10.1002/SPE.2664
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2018
Publisher: IEEE
Date: 04-2018
Publisher: Association for Computing Machinery (ACM)
Date: 12-06-2018
DOI: 10.1145/3199523
Abstract: The world is becoming a more conjunct place and the number of data sources such as social networks, online transactions, web search engines, and mobile devices is increasing even more than had been predicted. A large percentage of this growing dataset exists in the form of linked data, more generally, graphs, and of unprecedented sizes. While today's data from social networks contain hundreds of millions of nodes connected by billions of edges, inter-connected data from globally distributed sensors that forms the Internet of Things can cause this to grow exponentially larger. Although analyzing these large graphs is critical for the companies and governments that own them, big data tools designed for text and tuple analysis such as MapReduce cannot process them efficiently. So, graph distributed processing abstractions and systems are developed to design iterative graph algorithms and process large graphs with better performance and scalability. These graph frameworks propose novel methods or extend previous methods for processing graph data. In this article, we propose a taxonomy of graph processing systems and map existing systems to this classification. This captures the ersity in programming and computation models, runtime aspects of partitioning and communication, both for in-memory and distributed frameworks. Our effort helps to highlight key distinctions in architectural approaches, and identifies gaps for future research in scalable graph systems.
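The iterative, vertex-centric style of graph processing this abstract surveys (popularised by Pregel-like frameworks) can be sketched with a toy connected-components computation, where each superstep propagates the minimum vertex id to neighbours until labels stabilise. This is an illustrative single-machine sketch of the programming model, not any surveyed system's implementation; names and the label-propagation choice are assumptions:

```python
def min_label_components(edges, num_vertices):
    """Toy vertex-centric computation: each superstep, every vertex adopts the
    smallest label among itself and its neighbours; converged labels identify
    connected components."""
    labels = list(range(num_vertices))
    neighbours = [[] for _ in range(num_vertices)]
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    changed = True
    while changed:                      # one loop iteration = one superstep
        changed = False
        for v in range(num_vertices):
            best = min([labels[n] for n in neighbours[v]] + [labels[v]])
            if best < labels[v]:
                labels[v] = best
                changed = True
    return labels
```

Distributed frameworks run the same per-vertex logic in parallel across partitions, exchanging labels as messages between supersteps, which is what makes partitioning and communication the key runtime concerns the taxonomy classifies.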
Publisher: Elsevier BV
Date: 05-2019
Publisher: Association for Computing Machinery (ACM)
Date: 30-08-2019
DOI: 10.1145/3337956
Abstract: The dynamic nature of the cloud environment has made the distributed resource management process a challenge for cloud service providers. The importance of maintaining quality of service in accordance with customer expectations and the highly dynamic nature of cloud-hosted applications add new levels of complexity to the process. Advances in big-data learning approaches have shifted conventional static capacity planning solutions to complex performance-aware resource management methods. It is shown that the process of decision-making for resource adjustment is closely related to the behavior of the system, including the utilization of resources and application components. Therefore, a continuous monitoring of system attributes and performance metrics provides the raw data for the analysis of problems affecting the performance of the application. Data analytic methods, such as statistical and machine-learning approaches, offer the required concepts, models, and tools to dig into the data and find general rules, patterns, and characteristics that define the functionality of the system. Obtained knowledge from the data analysis process helps to determine the changes in the workloads, faulty components, or problems that can cause system performance to degrade. A timely reaction to performance degradation can avoid violations of service level agreements, including performing proper corrective actions such as auto-scaling or other resource adjustment solutions. In this article, we investigate the main requirements and limitations of cloud resource management, including a study of the approaches to workload and anomaly analysis in the context of performance management in the cloud. A taxonomy of the works on this problem is presented that identifies main approaches in existing research from the data analysis side to resource adjustment techniques. 
Finally, considering the observed gaps in the general direction of the reviewed works, a list of these gaps is proposed for future researchers to pursue.
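The monitor-then-analyse loop described above, where statistical analysis of metric streams flags degradation before SLA violations, can be illustrated with a minimal trailing-window z-score detector. This is a hedged sketch of the general idea, not a method from any surveyed work; the window size and threshold are arbitrary assumptions:

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag metric samples deviating more than `threshold` standard deviations
    from the mean of the trailing window; returns the anomalous indices."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)   # candidate trigger for corrective action
    return anomalies
```

In a resource manager, each flagged index would feed a decision stage (auto-scaling or another adjustment), which is exactly the analysis-to-action pipeline the taxonomy organises.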
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2021
Publisher: Springer Science and Business Media LLC
Date: 11-06-2019
Publisher: Association for Computing Machinery (ACM)
Date: 18-12-2018
DOI: 10.1145/3241038
Abstract: The cloud-computing paradigm offers on-demand services over the Internet and supports a wide variety of applications. With the recent growth of Internet of Things (IoT)--based applications, the use of cloud services is increasing exponentially. The next generation of cloud computing must be energy efficient and sustainable to fulfill end-user requirements, which are changing dynamically. Presently, cloud providers are facing challenges to ensure the energy efficiency and sustainability of their services. The use of a large number of cloud datacenters increases cost as well as carbon footprints, which further affects the sustainability of cloud services. In this article, we propose a comprehensive taxonomy of sustainable cloud computing. The taxonomy is used to investigate the existing techniques for sustainability that need careful attention and investigation as proposed by several academic and industry groups. The current research on sustainable cloud computing is organized into several categories: application design, sustainability metrics, capacity planning, energy management, virtualization, thermal-aware scheduling, cooling management, renewable energy, and waste heat utilization. The existing techniques have been compared and categorized based on common characteristics and properties. A conceptual model for sustainable cloud computing has been presented along with a discussion on future research directions.
Publisher: Springer Science and Business Media LLC
Date: 07-09-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Elsevier BV
Date: 09-2020
Publisher: Wiley
Date: 04-03-2020
DOI: 10.1002/SPE.2813
Publisher: ACM
Date: 02-12-2019
Publisher: Springer Science and Business Media LLC
Date: 12-09-2019
Publisher: Elsevier BV
Date: 10-2014
Publisher: IEEE
Date: 11-2019
Publisher: Elsevier BV
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2022
Publisher: IEEE
Date: 02-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2022
Publisher: Elsevier BV
Date: 05-2020
Publisher: IEEE
Date: 09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2023
Publisher: IEEE
Date: 12-2019
Publisher: Walter de Gruyter GmbH
Date: 03-2023
Abstract: An increasingly important form of electronic currency is the ‘stablecoin’. Unlike other high-profile ‘cryptocurrencies’, such as Bitcoin, stablecoins claim to be matched by a corresponding amount of secure assets in a national currency to which the stablecoin can be converted at par. These crypto-assets, however, create significant legal and economic problems. Notably, issues about trust in both issuers and backing assets, the risk of runs, and particular challenges associated with public and private law regimes resist the adoption of stablecoin both in Europe and beyond. This article draws from previous experiences to analyse these issues and propose solutions. Taking a multi-disciplinary approach, the article argues that the legal and economic issues surrounding today’s stablecoins could be addressed using the lessons from prior centuries.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2021
Publisher: Association for Computing Machinery (ACM)
Date: 28-05-2020
DOI: 10.1145/3355399
Abstract: Stream processing is an emerging paradigm to handle data streams upon arrival, powering latency-critical applications such as fraud detection, algorithmic trading, and health surveillance. Though there are a variety of Distributed Stream Processing Systems (DSPSs) that facilitate the development of streaming applications, resource management and task scheduling is not automatically handled by the DSPS middleware and requires a laborious process to tune toward specific deployment targets. As the advent of cloud computing has supported renting resources on-demand, it is of great interest to review the research progress of hosting streaming systems in clouds under certain Service Level Agreements (SLA) and cost constraints. In this article, we introduce the hierarchical structure of streaming systems, define the scope of the resource management problem, and present a comprehensive taxonomy in this context covering critical research topics such as resource provisioning, operator parallelisation, and task scheduling. The literature is then reviewed following the taxonomy structure, facilitating a deeper understanding of the research landscape through classification and comparison of existing works. Finally, we discuss the open issues and future research directions toward realising an automatic, SLA-aware resource management framework.
Publisher: Elsevier BV
Date: 04-2018
Publisher: Elsevier BV
Date: 2019
Publisher: IEEE
Date: 02-2019
Publisher: Association for Computing Machinery (ACM)
Date: 20-01-2018
DOI: 10.1145/3122981
Abstract: Mobile cloud computing is emerging as a promising approach to enrich user experiences at the mobile device end. Computation offloading in a heterogeneous mobile cloud environment has recently drawn increasing attention in research. Computation offloading decision making and task scheduling among heterogeneous shared resources in mobile clouds are becoming challenging problems in terms of providing globally optimal task response time and energy efficiency. In this article, we address these two problems together in a heterogeneous mobile cloud environment as an optimization problem. Different from conventional distributed computing system scheduling problems, our joint offloading and scheduling optimization problem considers contexts unique to mobile clouds, such as wireless network connections and mobile device mobility, which makes the problem more complex. We propose a context-aware mixed integer programming model to provide offline optimal solutions for making the offloading decisions and scheduling the offloaded tasks among the shared computing resources in heterogeneous mobile clouds. The objective is to minimize the global task completion time (i.e., makespan). To solve the problem in real time, we further propose a deterministic online algorithm—the Online Code Offloading and Scheduling (OCOS) algorithm—based on the rent/buy problem and prove the algorithm is 2-competitive. Performance evaluation results show that the OCOS algorithm generates schedules with around half the makespan of conventional independent task scheduling algorithms, reduces makespan by around 30% compared with conventional offloading strategies, and scales well as the number of users grows.
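The rent/buy problem underlying the 2-competitive OCOS algorithm is the classic ski-rental problem. A minimal sketch of its break-even rule (function names and costs here are illustrative, not the OCOS algorithm itself): keep renting until the cumulative rental spend would reach the purchase price, then buy, which bounds total cost at twice the offline optimum:

```python
def rent_or_buy(day: int, rent_cost: float, buy_cost: float) -> str:
    """Break-even rule: buy once cumulative rent would reach the buy price."""
    return "buy" if day * rent_cost >= buy_cost else "rent"

def total_cost(horizon: int, rent_cost: float, buy_cost: float) -> float:
    """Simulate the online strategy over `horizon` days and return its cost."""
    cost = 0.0
    for day in range(1, horizon + 1):
        if rent_or_buy(day, rent_cost, buy_cost) == "buy":
            return cost + buy_cost  # bought on this day
        cost += rent_cost           # rented another day
    return cost                     # never reached the break-even point
```

For a short horizon the strategy pays exactly the optimal rental cost; in the worst case (a long horizon) it pays at most 2x the purchase price, which is the 2-competitive guarantee the abstract refers to.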
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2020
Publisher: Springer International Publishing
Date: 21-12-2018
Publisher: Elsevier BV
Date: 10-2019
Publisher: Association for Computing Machinery (ACM)
Date: 23-05-2019
DOI: 10.1145/3190617
Abstract: Software-Defined Networking (SDN) opened up new opportunities in networking with its separation of the control plane from the data-forwarding hardware, which enables the network to be programmable, adjustable, and dynamically reconfigurable. These characteristics can bring numerous benefits to cloud computing, where dynamic changes and reconfiguration are necessary given its on-demand usage pattern. Although researchers have studied utilizing SDN in cloud computing, gaps still exist and need to be explored further. In this article, we propose a taxonomy to depict different aspects of SDN-enabled cloud computing and explain each element in detail. A detailed survey of studies utilizing SDN for cloud computing is presented, with a focus on data center power optimization, traffic engineering, network virtualization, and security. We also present various simulation and empirical evaluation methods that have been developed for SDN-enabled clouds. Finally, we analyze the gaps in current research and propose future directions.
Publisher: Elsevier BV
Date: 09-2019
Publisher: Springer International Publishing
Date: 2018
Publisher: Elsevier BV
Date: 02-2018
Publisher: Association for Computing Machinery (ACM)
Date: 13-07-2023
DOI: 10.1145/3589339
Abstract: Traditional, slow and error-prone human-driven methods to configure and manage Internet service requests are proving unsatisfactory. This is due to an increase in Internet applications with stringent quality of service (QoS) requirements, which demand faster and fault-free service deployment with minimal or no human intervention. With this aim, intent-driven service management (IDSM) has emerged, where users express their service level agreement (SLA) requirements in a declarative manner as intents. With the help of closed control-loop operations, IDSM performs service configurations and deployments autonomously to fulfill the intents. This results in faster deployment of services and a reduction in configuration errors caused by manual operations, which in turn reduces SLA violations. This article provides a systematic review of how IDSM systems manage and fulfill the SLA requirements specified as intents. As an outcome, the review identifies four intent management activities, which are performed in a closed-loop manner. For each activity, a taxonomy is proposed and used to compare the existing techniques for SLA management in IDSM systems. The conclusion presents a critical analysis of all the research articles considered in the review, along with future research directions.
Publisher: Association for Computing Machinery (ACM)
Date: 05-10-2024
DOI: 10.1145/3615353
Abstract: Cloud Data Centers have become the key infrastructure for providing services. Instance migration across different computing nodes in edge and cloud computing is essential to guarantee quality of service in dynamic environments. Many studies have been conducted on dynamic resource management involving migrating Virtual Machines to achieve various objectives, such as load balancing, consolidation, performance, energy saving, and disaster recovery. Some have investigated improving and predicting the performance of a single live migration. Recently, several studies have examined service migration in edge-centric computing paradigms. However, there is a lack of a taxonomy and survey focusing on the management of live migration in edge and cloud computing environments. In this paper, we examine the characteristics of each field and propose a migration management-centric taxonomy to provide a holistic framework and guideline for researchers on the topic, covering the performance and cost model, migration generation in resource management algorithms, migration planning and scheduling, and migration lifecycle management and orchestration. We also identify research gaps and opportunities to improve the performance of resource management with live migrations.
Publisher: IEEE
Date: 12-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2018
Publisher: Elsevier BV
Date: 2020
Publisher: Elsevier BV
Date: 02-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2018
Publisher: Inderscience Publishers
Date: 2020
Publisher: Elsevier BV
Date: 03-2020
Publisher: Wiley
Date: 27-10-2020
DOI: 10.1002/SPE.2917
Publisher: Elsevier BV
Date: 11-2020
Publisher: Springer Science and Business Media LLC
Date: 08-06-2020
Publisher: Elsevier BV
Date: 09-2019
Publisher: Elsevier BV
Date: 04-2020
Publisher: Association for Computing Machinery (ACM)
Date: 04-01-2018
DOI: 10.1145/3152397
Abstract: Despite the rapid growth of hardware capacity and popularity in mobile devices, limited battery and processing capacity still cannot meet mobile users’ increasing demands. Both conventional techniques and emerging approaches are brought together to fill this gap between user demand and mobile devices’ limited capabilities. Recent research has focused on enhancing the performance of mobile devices via augmentation techniques. Augmentation techniques for mobile cloud computing refer to the computing paradigms and solutions that outsource mobile device computation and storage to more powerful computing resources in order to enhance a mobile device’s computing capability and energy efficiency (e.g., code offloading). Adopting augmentation techniques in the heterogeneous and intermittent mobile cloud computing environment creates new challenges for computation management, energy efficiency, and system reliability. In this article, we aim to provide a comprehensive taxonomy and survey of the existing techniques and frameworks for mobile cloud augmentation regarding both computation and storage. Different from the existing taxonomies in this field, we focus on the techniques aspect, following the idea of realizing a complete mobile cloud computing system. The objective of this survey is to provide a guide on what available augmentation techniques can be adopted in mobile cloud computing systems, as well as supporting mechanisms such as decision-making and fault tolerance policies for realizing reliable mobile cloud services. We also present a discussion on the open challenges and future research directions in this field.
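The code-offloading decision mentioned above is commonly framed as a time (or energy) trade-off. A minimal sketch of the textbook rule, with all parameter names and values being illustrative assumptions rather than any surveyed framework's API: offload when remote execution plus data transfer beats local execution.

```python
def should_offload(cycles: float, local_speed: float, cloud_speed: float,
                   data_bits: float, bandwidth: float) -> bool:
    """Textbook offloading rule of thumb: offload a task when remote
    execution time plus transfer time beats local execution time.
    cycles: CPU cycles the task needs; speeds in cycles/s;
    data_bits: payload to ship; bandwidth in bits/s."""
    local_time = cycles / local_speed
    remote_time = cycles / cloud_speed + data_bits / bandwidth
    return remote_time < local_time

# Example: 1 Gcycle task, 0.1 GHz phone, 10 GHz cloud, 1 MB payload, 1 Mbps link.
decision = should_offload(1e9, 1e8, 1e10, 8e6, 1e6)
```

Real systems extend this with energy models, mobility, and link intermittency, which is exactly why the surveyed frameworks pair offloading with decision-making and fault tolerance policies.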
Publisher: Association for Computing Machinery (ACM)
Date: 25-01-2019
DOI: 10.1145/3234151
Abstract: Cloud computing has been regarded as an emerging approach to provisioning resources and managing applications. It provides attractive features, such as an on-demand model, scalability enhancement, and management cost reduction. However, cloud computing systems continue to face problems such as hardware failures, overloads caused by unexpected workloads, or energy waste due to inefficient resource utilization, which all result in resource shortages and application issues such as delays or saturation. The brownout paradigm has been applied to handle these issues by adaptively activating or deactivating optional parts of applications or services to manage resource usage in cloud computing systems. Brownout has successfully shown that it can avoid overloads due to changes in workload and achieve better load balancing and energy-saving effects. This article proposes a taxonomy of the brownout approach for managing resources and applications adaptively in cloud computing systems and carries out a comprehensive survey. It identifies open challenges and offers future research directions.
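The brownout mechanism described above is typically realised as a feedback loop over a "dimmer": the probability that optional application features are served. A minimal one-step sketch, where the setpoint, gain, and function name are illustrative assumptions rather than any specific surveyed controller:

```python
def brownout_controller(utilization: float, dimmer: float,
                        setpoint: float = 0.8, gain: float = 0.5) -> float:
    """One step of a brownout-style feedback controller. The dimmer in
    [0, 1] is the probability of serving optional features: utilization
    above the setpoint lowers it (shedding optional work), utilization
    below it raises it again."""
    dimmer += gain * (setpoint - utilization)
    return max(0.0, min(1.0, dimmer))  # clamp to the valid range

d = 1.0
for u in [0.95, 0.95, 0.70]:  # observed CPU utilization samples
    d = brownout_controller(u, d)
```

After the overload samples the dimmer drops below 1.0, so some requests skip optional content (e.g., recommendations), and it recovers once utilization falls back under the setpoint.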
Publisher: Springer Science and Business Media LLC
Date: 25-02-2019
Publisher: IOP Publishing
Date: 11-2018
Publisher: IEEE
Date: 04-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Wiley
Date: 15-05-2018
DOI: 10.1002/SPE.2586
Publisher: Korean Society for Internet Information (KSII)
Date: 31-01-2018
Publisher: Elsevier BV
Date: 11-2020
Publisher: Institution of Engineering and Technology (IET)
Date: 10-03-2021
DOI: 10.1049/NTW2.12013
Publisher: Springer Science and Business Media LLC
Date: 02-12-2017
Publisher: Elsevier BV
Date: 10-2018
Publisher: ACM
Date: 04-01-2018
Start Date: 12-2005
End Date: 06-2008
Amount: $124,442.00
Funder: Australian Research Council
Start Date: 06-2012
End Date: 06-2017
Amount: $350,000.00
Funder: Australian Research Council
Start Date: 2013
End Date: 12-2016
Amount: $315,000.00
Funder: Australian Research Council
Start Date: 2010
End Date: 12-2012
Amount: $280,000.00
Funder: Australian Research Council
Start Date: 03-2010
End Date: 03-2013
Amount: $195,000.00
Funder: Australian Research Council
Start Date: 01-2014
End Date: 07-2018
Amount: $280,000.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 12-2007
Amount: $510,000.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 12-2006
Amount: $150,000.00
Funder: Australian Research Council
Start Date: 2008
End Date: 12-2010
Amount: $319,000.00
Funder: Australian Research Council
Start Date: 06-2008
End Date: 05-2011
Amount: $265,279.00
Funder: Australian Research Council
Start Date: 2008
End Date: 12-2008
Amount: $200,000.00
Funder: Australian Research Council
Start Date: 06-2012
End Date: 12-2016
Amount: $270,000.00
Funder: Australian Research Council
Start Date: 2008
End Date: 12-2011
Amount: $216,000.00
Funder: Australian Research Council
Start Date: 12-2005
End Date: 12-2006
Amount: $80,000.00
Funder: Australian Research Council
Start Date: 2016
End Date: 11-2021
Amount: $410,000.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 12-2003
Amount: $10,000.00
Funder: Australian Research Council
Start Date: 12-2012
End Date: 12-2016
Amount: $786,168.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 11-2004
Amount: $30,000.00
Funder: Australian Research Council
Start Date: 2020
End Date: 12-2021
Amount: $900,000.00
Funder: Australian Research Council
Start Date: 02-2004
End Date: 11-2004
Amount: $10,000.00
Funder: Australian Research Council
Start Date: 02-2005
End Date: 02-2010
Amount: $1,500,000.00
Funder: Australian Research Council
Start Date: 12-2004
End Date: 12-2010
Amount: $2,250,000.00
Funder: Australian Research Council
Start Date: 2004
End Date: 12-2004
Amount: $10,000.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 06-2004
Amount: $30,000.00
Funder: Australian Research Council