ORCID Profile
0000-0002-2798-0104
Current Organisations
Stevens Institute of Technology, University of Technology Sydney
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Neural, Evolutionary and Fuzzy Computation | Pattern Recognition and Data Mining | Data Structures | Artificial Intelligence and Image Processing
Expanding Knowledge in the Information and Computing Sciences | Expanding Knowledge in the Mathematical Sciences | Expanding Knowledge in Technology
Publisher: MDPI AG
Date: 21-05-2021
DOI: 10.3390/PR9060911
Abstract: Most real-world problems that have two or three objectives are dynamic, and the environment of the problems may change as time goes on. To solve dynamic multi-objective problems better, two proposed strategies (a second-order difference strategy and a random strategy) were incorporated into NSGA-III, yielding SDNSGA-III. When the environment changes in SDNSGA-III, the second-order difference strategy and the random strategy are first used to improve the individuals in the next-generation population; NSGA-III is then employed to optimize these individuals to obtain optimal solutions. Our experiments were conducted with two primary objectives. The first was to evaluate the metrics mean inverted generational distance (MIGD), mean generational distance (MGD), and mean hypervolume (MHV) on the test functions (Fun1 to Fun6) for the proposed algorithm and four state-of-the-art algorithms. The second was to compare the metric values of NSGA-III with a single strategy against SDNSGA-III, demonstrating the efficiency of the two strategies in SDNSGA-III. The comparative data obtained from the experiments demonstrate that SDNSGA-III has good convergence and diversity compared with the four other evolutionary algorithms. Moreover, the efficiency of the second-order difference strategy and the random strategy is also analyzed in this paper.
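The abstract does not fully specify the second-order difference strategy; one plausible reading, using population centroids from the last three environments plus a random-reseed fraction (both of which are assumptions here, not the paper's exact operators), can be sketched as:

```python
import random

def second_order_difference_reseed(pop, centroid_t, centroid_t1, centroid_t2,
                                   random_fraction=0.2, lo=0.0, hi=1.0):
    """Re-initialise a population after an environment change (illustrative).

    Each individual is shifted by a second-order difference of the population
    centroids from the last three environments; a fraction of individuals is
    replaced by random points to preserve diversity.
    """
    # Second-order difference per dimension: (c_t - c_{t-1}) + ((c_t - c_{t-1}) - (c_{t-1} - c_{t-2}))
    step = [(ct - ct1) + ((ct - ct1) - (ct1 - ct2))
            for ct, ct1, ct2 in zip(centroid_t, centroid_t1, centroid_t2)]
    new_pop = []
    for ind in pop:
        if random.random() < random_fraction:
            # Random strategy: a fresh point anywhere in the search box
            new_pop.append([random.uniform(lo, hi) for _ in ind])
        else:
            # Second-order difference strategy: extrapolated move, clipped to bounds
            new_pop.append([min(hi, max(lo, x + s)) for x, s in zip(ind, step)])
    return new_pop
```

The intuition is that the second-order difference extrapolates how the optimum has been drifting, while the random fraction guards against mispredicted changes.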
Publisher: Elsevier BV
Date: 09-2012
Publisher: Elsevier BV
Date: 08-2021
Publisher: Elsevier BV
Date: 11-2023
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 05-2023
Publisher: Springer Science and Business Media LLC
Date: 12-07-2022
DOI: 10.1007/S10479-022-04831-Z
Abstract: To provide health services, hospitals consume electrical power and contribute to CO2 emissions. This paper aims to develop a modelling approach to optimize hospital services while reducing CO2 emissions. To capture treatment processes and the production of carbon dioxide, a hybrid method of data mining and simulation–optimization techniques is proposed. Different clustering algorithms are used to categorize patients. Using quality indicators, the clustering methods are evaluated to find the best cluster sets, and patients are then categorized accordingly. Discrete-event simulation is applied to each patient category to estimate performance measures such as the number of patients served, waiting times, and length of stay, as well as the amount of CO2 emitted. To optimize performance measures of patient flow, metaheuristic searches are used. The dataset of Bushehr Heart Hospital is considered as a case study. Based on K-means, K-medoids, hierarchical clustering, and fuzzy C-means clustering methods, patients are categorized into two groups of high-risk and low-risk patients. The number of patients served, total waiting time, length of stay, and CO2 emitted during care processes are improved for both groups. The proposed hybrid method is an effective way for hospitals to categorize patients based on care processes. The problems and the proposed solution approach reported in this study could be applicable to other hospitals worldwide, helping both to optimize patient flow and to minimize the environmental consequences of care services.
Publisher: Elsevier BV
Date: 03-2018
Publisher: Elsevier BV
Date: 12-2017
Publisher: Springer International Publishing
Date: 2016
Publisher: MDPI AG
Date: 03-03-2021
DOI: 10.3390/INFO12030109
Abstract: The novel coronavirus disease, also known as COVID-19, is a disease outbreak that was first identified in Wuhan, a Central Chinese city. In this report, a short analysis focusing on Australia, Italy, and the UK is conducted. The analysis includes confirmed and recovered cases and deaths, the growth rate in Australia compared with that in Italy and the UK, and the trend of the disease in different Australian regions. Mathematical approaches based on susceptible, infected, and recovered (SIR) cases and susceptible, exposed, infected, quarantined, and recovered (SEIQR) cases models are proposed to predict epidemiology in the above-mentioned countries. Since the performance of the classic forms of SIR and SEIQR depends on parameter settings, some optimization algorithms, namely Broyden–Fletcher–Goldfarb–Shanno (BFGS), conjugate gradients (CG), limited-memory bound-constrained BFGS (L-BFGS-B), and Nelder–Mead, are proposed to optimize the parameters and the predictive capabilities of the SIR and SEIQR models. The results of the optimized SIR and SEIQR models were compared with those of two well-known machine learning algorithms, i.e., the Prophet algorithm and the logistic function. The results demonstrate the different behaviors of these algorithms in different countries as well as the better performance of the improved SIR and SEIQR models. Moreover, the Prophet algorithm was found to provide better prediction performance than the logistic function, as well as better prediction performance for Italian and UK cases than for Australian cases. Therefore, it seems that the Prophet algorithm is suitable for data with an increasing trend in the context of a pandemic. Optimization of SIR and SEIQR model parameters yielded a significant improvement in the prediction accuracy of the models. Despite the availability of several algorithms for trend predictions in this pandemic, there is no single algorithm that would be optimal for all cases.
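The SIR dynamics the abstract starts from are standard; a minimal sketch using a daily Euler step (the step size and parameter values below are illustrative, not the paper's fitted values) looks like:

```python
def simulate_sir(beta, gamma, s0, i0, r0, days):
    """Integrate the SIR ODEs with a simple one-day Euler step.

    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    Returns the daily infected counts, including day zero.
    """
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    infected = [i]
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        infected.append(i)
    return infected
```

Fitting `beta` and `gamma` then reduces to minimising the error between simulated and reported case counts, which is where optimizers such as BFGS, CG, L-BFGS-B, and Nelder–Mead come in.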
Publisher: Springer Science and Business Media LLC
Date: 30-03-2022
Publisher: Springer Science and Business Media LLC
Date: 30-08-2022
DOI: 10.1007/S00500-022-07453-6
Abstract: In this paper, a surrogate merit function (SMF) is proposed to be evaluated instead of the traditional merit functions (i.e., the penalized weight of the structure). The standard format of the conventional merit functions needs several expensive trial-and-error tuning processes to enhance optimization convergence quality, retuning for different structural model configurations, and a final manual local search in case optimization converges to an infeasible vicinity of the global optimum. By contrast, SMF has no tuning factor, shows statistically stable performance across different models, converges directly to outstanding feasible points, and offers further advantages such as fewer iterations required to achieve convergence. In other words, this new function is hassle-free thanks to its user-friendly application and robust numerical results. SMF might be a revolutionary step in commercializing design optimization in the real-world construction market.
Publisher: IEEE
Date: 12-2015
Publisher: Elsevier BV
Date: 02-2021
Publisher: IEEE
Date: 26-11-2022
Publisher: Elsevier BV
Date: 07-2020
Publisher: Elsevier BV
Date: 06-2022
Publisher: MDPI AG
Date: 20-12-2022
DOI: 10.3390/BDCC7010002
Abstract: Big Data and analytics have become essential factors in managing the COVID-19 pandemic. As no company can escape the effects of the pandemic, mature Big Data and analytics practices are essential for successful decision-making insights and keeping pace with a changing and unpredictable marketplace. The ability to be successful in Big Data projects is related to the organization’s maturity level. A maturity model is a tool that can be applied to assess the maturity level across specific key dimensions, where the maturity levels indicate an organization’s current capabilities and the desirable state. Big Data maturity models (BDMMs) are a new trend, with limited publications released as white papers and web materials by practitioners. While most of the related literature might not have covered all of the existing BDMMs, this systematic literature review (SLR) aims to contribute to the body of knowledge and address the limitations in the existing literature regarding the existing BDMMs, assessment dimensions, and tools. The SLR strategy in this paper followed established guidelines for performing SLRs in software engineering, answering three research questions: (1) What are the existing maturity assessment models for Big Data? (2) What are the assessment dimensions for Big Data maturity models? and (3) What are the assessment tools for Big Data maturity models? This SLR covers the available BDMMs written in English and developed by academics and practitioners (2007–2022). By applying a descriptive qualitative content analysis method to the reviewed publications, this SLR identified 15 BDMMs (10 by practitioners and 5 by academics). Additionally, this paper presents the limitations of existing BDMMs. The findings of this paper could be used as a grounded reference for assessing the maturity of Big Data.
Moreover, this paper will provide managers with critical insights to select the BDMM that fits their organization to support their data-driven decisions. Future work will investigate the Big Data maturity assessment dimensions towards developing a new Big Data maturity model.
Publisher: Elsevier BV
Date: 08-2022
Publisher: Elsevier BV
Date: 03-2014
Publisher: Informa UK Limited
Date: 18-12-2020
Publisher: ACM
Date: 09-04-2022
Publisher: Springer Science and Business Media LLC
Date: 29-07-2011
Publisher: Elsevier BV
Date: 04-2021
Publisher: Elsevier BV
Date: 05-2011
Publisher: MDPI AG
Date: 05-04-2021
DOI: 10.3390/MA14071792
Abstract: Nonlinear dynamic analyses of reinforced concrete (RC) frame buildings require the use of the effective stiffness of members to capture the effect of cracked section stiffness. In design codes and practice, the effective stiffness of RC sections is given as an empirical fraction of the gross stiffness. However, a more precise estimation of the effective stiffness is important, as it affects the distribution of forces and various demand and response parameters in nonlinear dynamic analyses. In this study, an evolutionary computation method called gene expression programming (GEP) was used to predict the effective stiffness ratios of RC columns. Constitutive relationships were obtained by correlating the effective stiffness ratio with four mechanical and geometrical parameters. The model was developed using a database of 226 samples of nonlinear dynamic analysis results collected from another study by the author. Subsequent parametric and sensitivity analyses were performed and the trends of the results were confirmed. The results indicate that the GEP model provides precise estimations of the effective stiffness ratios of the RC frames.
Publisher: Springer Berlin Heidelberg
Date: 2011
Publisher: Elsevier BV
Date: 05-2020
Publisher: IEEE
Date: 26-11-2022
Publisher: Hindawi Limited
Date: 03-05-2021
DOI: 10.1002/INT.22439
Publisher: Elsevier BV
Date: 02-2013
Publisher: Elsevier BV
Date: 09-2023
Publisher: Elsevier
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 2017
Publisher: Springer Science and Business Media LLC
Date: 23-08-2019
Publisher: Elsevier BV
Date: 05-2022
DOI: 10.1016/J.COMPBIOMED.2022.105253
Abstract: Over the past two decades, medical imaging has been extensively applied to diagnose diseases. Medical experts continue to have difficulties diagnosing diseases with a single modality owing to a lack of information in this domain. Image fusion may be used to merge images of specific organs with diseases from a variety of medical imaging systems. Anatomical and physiological data may be included in multi-modality image fusion, making diagnosis simpler. It is a difficult challenge to find the best multimodal medical database with fusion quality evaluation for assessing recommended image fusion methods. As a result, this article provides a complete overview of multimodal medical image fusion methodologies, databases, and quality measurements. A compendious review of different medical imaging modalities and an evaluation of related multimodal databases, along with statistical results, is provided. The medical imaging modalities are organized based on radiation, visible-light imaging, microscopy, and multimodal imaging. Medical image acquisition is categorized into invasive and non-invasive techniques. The fusion techniques are classified into six main categories: frequency fusion, spatial fusion, decision-level fusion, deep learning, hybrid fusion, and sparse representation fusion. In addition, the associated diseases for each modality and fusion approach are presented. The fusion quality assessment metrics are also encapsulated in this article. This survey provides a baseline guideline for medical experts in this technical domain who may combine preoperative, intraoperative, and postoperative imaging, multi-sensor fusion for disease detection, etc. The advantages and drawbacks of the current literature are discussed, and future insights are provided accordingly.
Publisher: ACM
Date: 09-04-2022
Publisher: Elsevier BV
Date: 11-2019
Publisher: Elsevier BV
Date: 03-2014
Publisher: Informa UK Limited
Date: 26-08-2014
Publisher: Springer Science and Business Media LLC
Date: 25-05-2023
DOI: 10.1038/S41598-023-35457-1
Abstract: Large-scale solar energy production still faces great obstacles due to the unpredictability of solar power. The intermittent, chaotic, and random quality of the solar energy supply has to be dealt with by comprehensive solar forecasting technologies. Beyond long-term forecasting, it is even more essential to produce short-term forecasts minutes or even seconds ahead, because key factors such as sudden cloud movement, instantaneous deviation of ambient temperature, increased relative humidity, uncertainty in wind velocities, haziness, and rain cause undesired up-and-down ramping rates, thereby affecting solar power generation to a great extent. This paper presents an extended stellar forecasting algorithm using an artificial neural network. A three-layer system is suggested, consisting of an input layer, a hidden layer, and an output layer, operating feed-forward in conjunction with backpropagation. A prior 5-minute output forecast is fed to the input layer to reduce the error and obtain a more precise forecast. Weather remains the most vital input for this type of ANN modeling. Forecasting errors can grow considerably due to variations in solar irradiation and temperature on any forecasting day, affecting the solar power supply accordingly. Prior approximation of stellar radiation carries some uncertainty depending on climatic conditions such as temperature, shading conditions, soiling effects, relative humidity, etc. All these environmental factors introduce uncertainty into the prediction of the output parameter. In such a case, approximating the PV output directly can be more suitable than approximating solar radiation. This paper applies Gradient Descent (GD) and Levenberg–Marquardt artificial neural network (LM-ANN) techniques to data obtained and recorded at millisecond intervals from a 100 W solar panel.
The essential purpose of this paper is to establish the most suitable time horizon for the output forecast of small solar power utilities. It has been observed that a 5 ms to 12 h horizon gives the best short- to medium-term prediction for April. A case study has been done in the Peer Panjal region. The data collected over four months with various parameters were applied randomly as input data using GD and LM types of artificial neural networks and compared with actual solar energy data. The proposed ANN-based algorithm has been used for reliable short-term forecasting. The model output is reported in terms of root mean square error and mean absolute percentage error. The results exhibit improved agreement between the forecasted and real models. Forecasting solar energy and load variations assists in fulfilling cost-effectiveness goals.
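Gradient-descent training, as used in the paper's GD variant, can be illustrated at its smallest scale: a single sigmoid neuron fitted by plain gradient descent on squared error (the learning rate, epoch count, and toy data below are arbitrary illustrations, not the paper's setup):

```python
import math

def train_sgd(xs, ys, lr=0.1, epochs=500):
    """Fit y ~ sigmoid(w*x + b) by plain gradient descent on squared error.

    The same principle as a GD-trained feed-forward ANN, shrunk to one
    neuron: backpropagation reduces to the chain rule on a single unit.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))   # forward pass
            grad = (p - y) * p * (1 - p)           # d(error)/d(pre-activation)
            w -= lr * grad * x                     # backward pass: update weight
            b -= lr * grad                         # update bias
    return w, b
```

In the full three-layer network the same forward/backward passes run through the hidden layer, and the Levenberg–Marquardt variant replaces the fixed-step update with a damped Gauss–Newton step.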
Publisher: Springer Science and Business Media LLC
Date: 21-09-2010
Publisher: Springer Science and Business Media LLC
Date: 25-04-2015
Publisher: Springer Science and Business Media LLC
Date: 09-12-2022
DOI: 10.1007/S11831-022-09859-9
Abstract: Most real-world problems involve some type of optimization problem that is often constrained. Numerous researchers have investigated techniques to deal with constrained single-objective and multi-objective evolutionary optimization in many fields, including theory and application. This study provides a novel analysis of the scholarly literature on constraint-handling techniques for single-objective and multi-objective population-based algorithms according to the most relevant journals and articles. As a contribution, the paper reviews the main ideas of the state-of-the-art constraint-handling techniques in population-based optimization, and then addresses a bibliometric analysis of the field, with a focus on multi-objective optimization. The extracted papers include research articles, reviews, books/book chapters, and conference papers published between 2000 and 2021. The results indicate that constraint-handling techniques for multi-objective optimization have received much less attention than those for single-objective optimization. The most promising algorithms for such optimization were determined to be genetic algorithms, differential evolution algorithms, and particle swarm intelligence. Additionally, “Engineering,” “Computer Science,” and “Mathematics” were identified as the top three research fields in which future research work is anticipated to increase.
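Among the constraint-handling techniques such reviews cover, the static penalty method is the simplest baseline; a minimal sketch for constraints expressed as g(x) <= 0 (the penalty coefficient is an arbitrary illustration, and real studies often prefer adaptive or feasibility-rule schemes):

```python
def penalized_fitness(objective, constraints, x, penalty=1e6):
    """Static-penalty constraint handling.

    Each constraint is a callable g with feasibility meaning g(x) <= 0;
    any positive value is a violation that is added, scaled by the
    penalty coefficient, to the raw objective (minimisation assumed).
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation
```

A population-based optimizer can then rank individuals by this penalized value without any other change, which is why penalty methods remain the most widely used entry point despite their sensitivity to the coefficient.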
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Destech Publications, Inc.
Date: 15-03-2022
Abstract: A novel model updating-based damage detection method is proposed that uses the Unwrapped Instantaneous Hilbert Phase (UIHP) of the condensed frequency response function (CFRF) as input to the objective function of an optimisation problem. The novelty of the proposed method lies in two aspects: (1) using the CFRF instead of the FRF itself, and (2) using the UIHP associated with the columns of the CFRF as input. The proposed modifications bring about the following improvements in damage detection: (1) the CFRF reduces the number of degrees of freedom (DOFs) that must be measured, and (2) the UIHP mitigates the effect of measurement noise on damage detection. The problem of damage detection in a laminated composite plate with different numbers of layers and ply orientations has been solved, and the results demonstrate the effectiveness of the proposed method.
Publisher: Springer Science and Business Media LLC
Date: 28-01-2011
Publisher: Elsevier BV
Date: 07-2010
Publisher: Elsevier BV
Date: 03-2023
Publisher: Springer Science and Business Media LLC
Date: 17-04-2023
DOI: 10.1007/S10462-023-10466-8
Abstract: Deep learning (DL) is revolutionizing evidence-based decision-making techniques that can be applied across various sectors. Specifically, it possesses the ability to utilize two or more levels of non-linear feature transformation of the given data via representation learning in order to overcome limitations posed by large datasets. As a multidisciplinary field that is still in its nascent phase, articles that survey DL architectures encompassing the full scope of the field are rather limited. Thus, this paper comprehensively reviews state-of-the-art DL modelling techniques and provides insights into their advantages and challenges. It was found that many of the models exhibit highly domain-specific efficiency and could be trained by two or more methods. However, training DL models can be very time-consuming and expensive, and requires large sample sizes for better accuracy. Since DL is also susceptible to deception and misclassification and tends to get stuck in local minima, improved optimization of parameters is required to create more robust models. Regardless, DL has already been producing groundbreaking results in the healthcare, education, security, commercial, industrial, and government sectors. Some models, like the convolutional neural network (CNN), generative adversarial network (GAN), recurrent neural network (RNN), recursive neural networks, and autoencoders, are frequently used, while the potential of other models remains widely unexplored. Pertinently, hybrid conventional DL architectures have the capacity to overcome the challenges experienced by conventional models. Considering that capsule architectures may dominate future DL models, this work aimed to compile information for stakeholders involved in the development and use of DL models in the contemporary world.
Publisher: Elsevier BV
Date: 12-2011
Publisher: IEEE
Date: 07-2018
Publisher: IEEE
Date: 07-2020
Publisher: MDPI AG
Date: 04-01-2022
DOI: 10.3390/PR10010098
Abstract: NSGA-II is an evolutionary multi-objective optimization algorithm that has been applied to a wide variety of search and optimization problems since its publication in 2000. This study presents a review and bibliometric analysis of numerous NSGA-II adaptations in addressing scheduling problems. This paper is divided into two parts. The first part discusses the main ideas of scheduling and different evolutionary computation methods for scheduling and provides a review of different scheduling problems, such as production and personnel scheduling. Moreover, a brief comparison of different evolutionary multi-objective optimization algorithms is provided, followed by a summary of state-of-the-art works on the application of NSGA-II in scheduling. The second part presents a detailed bibliometric analysis focusing on NSGA-II for scheduling applications, obtained from the Scopus and Web of Science (WoS) databases based on keyword and network analyses that were conducted to identify the most interesting subject fields. Additionally, several criteria are identified that may help scholars find key gaps in the field and develop new approaches in future works. The final sections present a summary and directions for future studies, along with conclusions and a discussion.
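The core of NSGA-II is fast non-dominated sorting, which partitions a population into Pareto fronts; a compact sketch for minimisation problems, following the structure of Deb et al.'s 2002 algorithm:

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting (the core of NSGA-II).

    points: list of objective vectors, all objectives minimised.
    Returns fronts as lists of indices, best (non-dominated) front first.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(points)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    dom_count = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)            # nobody dominates i: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1          # peel away the current front
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```

Within each front, NSGA-II then breaks ties by crowding distance to preserve diversity; the scheduling adaptations surveyed here mostly vary the encoding and operators around this unchanged core.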
Publisher: Springer Science and Business Media LLC
Date: 21-01-2021
Publisher: IEEE
Date: 07-2018
Publisher: Springer Science and Business Media LLC
Date: 24-05-2018
Publisher: American Society of Civil Engineers (ASCE)
Date: 10-2016
Publisher: Elsevier BV
Date: 05-2015
Publisher: Springer Science and Business Media LLC
Date: 15-04-2020
Publisher: Thomas Telford Ltd.
Date: 11-04-2023
Abstract: The purpose of this study is to develop an advanced neural network algorithm as a new optimisation method for the optimal design of truss structures. The central concept of the algorithm is based on biological nerve structures and artificial neural networks. The performance of the proposed method is explored in engineering design problems. Two efficient methods for improving the standard neural network algorithm are considered here. The first is an enhanced initialisation mechanism based on opposition-based learning. The second relies on using a few tunable parameters to provide proper exploration and exploitation abilities for the algorithm, enabling better solutions to be found while reducing the required structural analyses. The new algorithm's performance is investigated on five well-known constrained benchmarks to assess its efficiency in relation to the latest optimisation techniques. The outcome of the examples demonstrates that the upgraded version of the algorithm has increased efficacy and robustness in comparison with the original version and some other methods.
Publisher: MDPI AG
Date: 04-03-2022
DOI: 10.3390/APP12052672
Abstract: As smart cities (SCs) emerge, the Internet of Things (IoT) enables more sophisticated and ubiquitous applications within these cities. In this regard, we investigate seven predominant sectors including the environment, public transport, utilities, street lighting, waste management, public safety, and smart parking that have a great effect on SC development. Our findings show that for the environment sector, cleaner air and water systems connected to IoT-driven sensors are used to detect the amount of CO2, sulfur oxides, and nitrogen to monitor air quality and to detect water leakage and pH levels. For public transport, IoT systems help traffic management and prevent train delays; for the utilities sector, IoT systems are used for reducing overall bills and related costs as well as managing electricity consumption. For the street-lighting sector, IoT systems are used for better control of streetlamps and saving energy associated with urban street lighting. For waste management, IoT systems for waste collection and gathering of data regarding the level of waste in the container are effective. In addition, for public safety these systems are important in order to prevent vehicle theft and smartphone loss and to enhance public safety. Finally, IoT systems are effective in reducing congestion in cities and helping drivers find vacant parking spots using intelligent smart parking.
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Elsevier BV
Date: 03-2023
Publisher: IEEE
Date: 26-11-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2022
Publisher: Springer Science and Business Media LLC
Date: 15-07-2011
Publisher: Elsevier BV
Date: 07-2021
Publisher: MDPI AG
Date: 22-12-2022
DOI: 10.3390/PATHOGENS12010017
Abstract: Pre-trained machine learning models have recently been widely used to detect COVID-19 automatically from X-ray images. Although these models can selectively retrain their layers for the desired task, the output remains biased due to the massive number of pre-trained weights and parameters. This paper proposes a novel batch normalized convolutional neural network (BNCNN) model to identify COVID-19 cases from chest X-ray images in binary and multi-class frameworks, with a dual aim to extract salient features that improve model performance over pre-trained image analysis networks while reducing computational complexity. The BNCNN model has three phases: data pre-processing to normalize and resize X-ray images, feature extraction to generate feature maps, and classification to predict labels based on the feature maps. Feature extraction uses four repetitions of a block comprising a convolution layer to learn suitable kernel weights for the feature maps, a batch normalization layer to address the internal covariate shift of feature maps, and a max-pooling layer to find the highest-level patterns by increasing the convolution span. The classifier section uses two repetitions of a block comprising a dense layer to learn complex feature maps, a batch normalization layer to standardize internal feature maps, and a dropout layer to avoid overfitting while aiding model generalization. Comparative analysis shows that when applied to an open-access dataset, the proposed BNCNN model performs better than four comparative pre-trained models for three-way and two-way class datasets. Moreover, the BNCNN requires fewer parameters than the pre-trained models, suggesting better deployment suitability on low-resource devices.
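The batch normalization layers the BNCNN relies on standardise activations using per-batch statistics; reduced to a single feature (the full layer applies this per channel, and learns gamma and beta rather than fixing them as done here for illustration), the computation is:

```python
def batch_normalize(batch, eps=1e-5, gamma=1.0, beta=0.0):
    """Batch normalisation over one feature.

    Centre and scale the values to zero mean and (near) unit variance
    across the batch, then apply the learnable scale (gamma) and
    shift (beta); eps avoids division by zero for constant batches.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]
```

Keeping each layer's inputs in a stable range this way is what lets the BNCNN train from scratch with far fewer parameters than a fine-tuned pre-trained network.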
Publisher: IEEE
Date: 26-11-2022
Publisher: MDPI AG
Date: 28-02-2022
DOI: 10.3390/W14050774
Abstract: Floods are a natural disaster of significant concern because of the considerable damage they cause to people’s livelihoods. To this extent, there is a critical need to enhance flood management techniques by establishing proper infrastructure, such as detention basins. Although intelligent models may be adopted for flood management by detention basins, there is a literature gap on the optimum design of such structures while facing flood risks. The presented study filled this research gap by introducing a methodology to obtain the optimum design of detention basins using a stochastic conflict resolution optimization model considering inflow hydrograph uncertainties. This optimization model was developed by minimizing the conditional value-at-risk (CVaR) of flood overtopping, downstream flood damage, and deficit risk of water demand, as well as the deviation of flood overtopping and downstream damage based on non-linear interval number programming (NINP), for four different outlet types via a robust optimization tool, namely the non-dominated sorting genetic algorithm-III (NSGA-III). Conflict resolution was performed using the graph model for conflict resolution (GMCR) technique, enhanced by fuzzy preferences, to comply with the authorities’ priorities. Results indicated that the proposed framework could effectively design optimum detention basins consistent with the regional and hydrological standards.
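CVaR, the risk measure minimised here, is the expected loss in the worst (1 − alpha) tail of the loss distribution; from simulated losses (e.g. overtopping damage under sampled inflow hydrographs), an empirical estimate can be sketched as:

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk.

    Sort the losses, cut at the alpha-quantile (the VaR), and return
    the mean of the losses beyond it: the average of the worst
    (1 - alpha) fraction of outcomes.
    """
    ordered = sorted(losses)
    k = int(len(ordered) * alpha)      # index of the VaR cut-off
    tail = ordered[k:] or [ordered[-1]]
    return sum(tail) / len(tail)
```

Unlike plain VaR, CVaR accounts for how bad the tail outcomes are, not just how often they occur, which is why it suits flood-overtopping objectives where rare events dominate the damage.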
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2019
Publisher: Elsevier BV
Date: 04-2020
Publisher: Elsevier BV
Date: 2020
DOI: 10.2139/SSRN.3711309
Publisher: Informa UK Limited
Date: 23-08-2021
Publisher: Springer Nature Singapore
Date: 2023
Publisher: Hindawi Limited
Date: 24-09-2021
DOI: 10.1002/INT.22657
Publisher: Public Library of Science (PLoS)
Date: 19-09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2019
Publisher: Hindawi Limited
Date: 2013
DOI: 10.1155/2013/682073
Abstract: To improve the performance of the krill herd (KH) algorithm, in this paper, a Lévy-flight krill herd (LKH) algorithm is proposed for solving optimization tasks within limited computing time. The improvement is the addition of a new local Lévy-flight (LLF) operator during the krill-updating process in order to improve its efficiency and reliability on global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. An elitism scheme is also applied to keep the best krill during the updating process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and it is shown that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, the new method can accelerate global convergence to the true global optimum while preserving the main features of the basic KH.
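Lévy-flight steps such as those in the LLF operator are commonly drawn with Mantegna's algorithm; the paper's exact operator may differ, so this is a generic sketch (the step scale and stability index beta are illustrative defaults):

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm.

    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1);
    the heavy tail yields mostly small moves with occasional long jumps.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_flight_update(position, best, scale=0.01, beta=1.5):
    """Move an individual relative to the best-so-far solution with a
    Lévy-distributed step, dimension by dimension."""
    return [x + scale * levy_step(beta) * (x - b)
            for x, b in zip(position, best)]
```

The occasional long jumps help escape local optima, while the many short steps give the careful late-stage local search the abstract attributes to the LLF operator.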
Publisher: Elsevier BV
Date: 12-2019
Publisher: Elsevier BV
Date: 09-2019
Publisher: Elsevier BV
Date: 09-2022
DOI: 10.1016/J.CMPB.2022.107027
Abstract: The prediction of multiple drug efficacies using machine learning prediction techniques based on clinical and molecular attributes of tumors is a new approach in the field of precision medicine of oncology. The selection of suitable and effective therapeutic drugs among the potential drugs is performed computationally considering the tumor features. In this study, we developed and validated machine learning models to predict the efficacy of five anti-cancer drugs according to the clinical and molecular attributes of 30 oral squamous cell carcinoma (OSCC) cohorts. This sounds a bit odd - consider: Ranking of the drugs was achieved using their apoptotic priming. We developed multiple drug efficacy prediction models based on three types of tumor characteristics by applying machine learning methods, including multi-target regression (MTR) and support vector regression (SVR). The prediction accuracy of existing machine learning methods was enhanced by introducing novel pre-processing techniques to develop Enhanced MTR (E_MTR), Enhanced Log-based MTR (EL_MTR), Enhanced Multi-target SVR (EM_SVR), and Enhanced Log-based Multi-target SVR (ELM_SVR). As a unique capability, ELM_SVR and EL_MTR rank the drugs based on their predicted efficacy. All the drug efficacy prediction models were built using OSCC real s les and theoretical s les. The best model was selected was based on dataset size and evaluation metrics, such as error terms, residuals and parameter tuning, and cross-validated (CV) using 30 real s les and 340 theoretical s les. When 30 real tumor s les were used for the train-test and CV methods, MTR models predicted the efficacy with less error than SVR models. Comparatively, using 340 theoretical s les for the train-test and CV methods, though MTR improved the performance, SVR predicted the efficacy with zero error. 
We found that, for small samples, the proposed MTR provided a 0.01 difference between actual apoptotic priming and predicted priming of five drugs. For large samples, the predicted values by the proposed SVR had a difference of 0.00001. The error terms (Actual vs. Predicted) also reveal that the enhanced log model is suitable when MTR is applied. Meanwhile, the enhanced model is suitable for SVR learning for multiple drug efficacy prediction. It was found that the predicted ranks of the drugs based on the multi-targeted efficacy prediction exactly match the actual rankings. We developed efficient statistical and machine learning models using MTR and SVR analysis for anticancer drug efficacy, which will be useful in the field of precision medicine to choose the most suitable drugs in a personalized manner. The performance results of the proposed enhanced ranking techniques are described as follows: i) EL_MTR is the best at predicting multiple anticancer drug efficacies and improving the accuracy of ranking drugs, irrespective of sample size, and ii) ELM_SVR performs better than other MTR models with a large sample size and a precise ranking process.
Publisher: Informa UK Limited
Date: 05-2009
Publisher: Wiley
Date: 25-06-2012
DOI: 10.1002/TAL.1033
Publisher: Elsevier BV
Date: 11-2023
Publisher: Elsevier BV
Date: 02-2021
Publisher: IEEE
Date: 26-11-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2023
Publisher: Elsevier BV
Date: 02-2024
Publisher: American Society of Civil Engineers (ASCE)
Date: 11-2023
Publisher: IEEE
Date: 12-2020
Publisher: Springer Science and Business Media LLC
Date: 05-03-2020
Publisher: Springer Science and Business Media LLC
Date: 24-06-2023
Publisher: Vilnius Gediminas Technical University
Date: 19-01-2017
DOI: 10.3846/13923730.2016.1210214
Abstract: This article utilizes the gene expression programming (GEP) technique to develop a prediction model to automate estimating the construction cost of water and sewer replacement/rehabilitation projects. The database gathered for developing the model was established on the basis of data from 210 actual water and sewer projects obtained from the City of San Diego, California, USA. To verify the predictability of the GEP model, it was tested on projects that were not included in the modelling process. Sensitivity analysis and professional experience were employed to determine the contributions of the qualitative factors and quantifiable parameters affecting the cost estimate. The proposed model, with a correlation coefficient of 0.8467, is adequately capable of estimating the cost of water and sewer replacement/rehabilitation projects. The GEP-based design equation can easily be used for predesign purposes to help allocate budgets and available limited resources effectively.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-08-2021
Publisher: Elsevier BV
Date: 10-2012
Publisher: Springer Science and Business Media LLC
Date: 27-05-2015
Publisher: Elsevier BV
Date: 05-2014
Publisher: Elsevier BV
Date: 2022
DOI: 10.2139/SSRN.4090937
Publisher: Springer Science and Business Media LLC
Date: 05-10-2021
Publisher: MDPI AG
Date: 21-04-2020
Abstract: Research indicates that psychopathology in disaster survivors is a function of both experienced trauma and stressful life events. However, such studies are of limited utility to practitioners who are about to go into a new post-disaster setting as (1) most of them do not indicate which specific traumas and stressors are especially likely to lead to psychopathology and (2) each disaster is characterized by its own unique traumas and stressors, which means that practitioners have to first collect their own data on common traumas, stressors and symptoms of psychopathology prior to planning any interventions. An easy-to-use and easy-to-interpret data analytical method that allows one to identify profiles of trauma and stressors that predict psychopathology would be of great utility to practitioners working in post-disaster contexts. We propose that association rule learning (ARL), a big data mining technique, is such a method. We demonstrate the technique by applying it to data from 337 survivors of the Sri Lankan civil war who completed the Penn/RESIST/Peradeniya War Problems Questionnaire (PRPWPQ), a comprehensive, culturally-valid measure of experienced trauma, stressful life events, anxiety and depression. ARL analysis revealed five profiles of traumas and stressors that predicted the presence of some anxiety, three profiles that predicted the presence of severe anxiety, four profiles that predicted the presence of some depression and five profiles that predicted the presence of severe depression. ARL allows one to identify context-specific associations between specific traumas, stressors and psychological distress, and can be of great utility to practitioners who wish to efficiently analyze data that they have collected, understand the output of that analysis, and use it to provide psychosocial aid to those who most need it in post-disaster settings.
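The support/confidence machinery behind association rule learning is straightforward to illustrate. Below is a minimal, self-contained sketch of single-antecedent rule mining over hypothetical trauma/symptom item sets; the item labels and thresholds are invented for illustration and are not the PRPWPQ variables or the study's actual parameters:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Mine simple A -> B association rules from a list of item sets,
    keeping rules whose pair support and confidence clear the thresholds."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    # Support of single items and of item pairs
    for k in (1, 2):
        for combo in combinations(items, k):
            count = sum(1 for t in transactions if set(combo) <= t)
            support[combo] = count / n
    rules = []
    for a, b in combinations(items, 2):
        pair_support = support[(a, b)]
        if pair_support < min_support:
            continue
        # Try the rule in both directions: a -> b and b -> a
        for ante, cons in ((a, b), (b, a)):
            confidence = pair_support / support[(ante,)]
            if confidence >= min_confidence:
                rules.append((ante, cons, pair_support, confidence))
    return rules

# Hypothetical survivor records: each set is one respondent's
# reported stressors/symptoms (invented labels).
transactions = [
    {"displacement", "anxiety"},
    {"displacement", "anxiety"},
    {"displacement"},
    {"bereavement"},
]
rules = mine_rules(transactions, min_support=0.5, min_confidence=0.6)
```

On this toy data the miner reports, for example, that "anxiety" co-occurs with "displacement" with confidence 1.0, which is exactly the kind of easy-to-interpret profile the abstract argues practitioners can act on.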
Publisher: Authorea, Inc.
Date: 15-06-2020
Publisher: SPIE
Date: 13-05-2019
DOI: 10.1117/12.2516014
Publisher: Elsevier BV
Date: 12-2011
Publisher: Elsevier BV
Date: 11-2023
Publisher: Informa UK Limited
Date: 28-05-2020
Publisher: Elsevier BV
Date: 03-2022
Publisher: Elsevier BV
Date: 05-2019
Publisher: Elsevier BV
Date: 04-2019
Publisher: MDPI AG
Date: 11-01-2023
DOI: 10.3390/BDCC7010011
Abstract: Integrating machine learning technologies into artificial intelligence (AI) is at the forefront of the scientific and technological tools employed to combat the COVID-19 pandemic. This study assesses different uses and deployments of modern technology for combating the COVID-19 pandemic at various levels, such as image processing, tracking of disease, prediction of outcomes, and computational medicine. The results prove that computerized tomography (CT) scans help to diagnose patients infected by COVID-19; findings include bilateral, multilobar ground glass opacification (GGO) with a posterior or peripheral distribution, primarily in the lower lobes, and fewer occurrences in the intermediate lobe. An extensive search of modern technology databases relating to COVID-19 was undertaken. Subsequently, a review of the extracted information from the database search looked at how technology can be employed to tackle the pandemic. We discussed the technological advancements deployed to alleviate the communicability and effect of the pandemic. Although there is much research on the use of technology in combating COVID-19, its application is not yet fully explored. In addition, we suggested some open research issues and challenges in deploying AI technology to combat the global pandemic.
Publisher: Elsevier BV
Date: 2017
Publisher: Hindawi Limited
Date: 23-10-2022
DOI: 10.1002/STC.2867
Publisher: Springer Science and Business Media LLC
Date: 05-02-2021
Publisher: Springer Science and Business Media LLC
Date: 03-12-2014
Publisher: Destech Publications, Inc.
Date: 15-03-2022
Abstract: In this paper, an unsupervised condition monitoring approach for civil infrastructures under environmental and operational variations is proposed. To this end, a couple of the lowest natural frequency signals of the structure, recorded over a long period of time, are studied for damage. Most existing techniques are supervised and require baseline information from the healthy state of the structure. In this paper, however, we explore the possibility of performing unsupervised condition monitoring using Isolation Forest (IF), a well-known unsupervised machine learning approach. We show that preprocessing the frequency signals using the Variational Mode Decomposition (VMD) algorithm plays a key role in the success of the IF algorithm in the unsupervised condition monitoring of structures. To investigate the performance of the proposed method, the benchmark problem of the Z24 bridge is studied. The results show that the proposed methodology remains successful in most cases, except during a period of very cold temperatures in which the relationship between the frequencies and temperature becomes nonlinear.
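The isolation principle behind the IF algorithm (anomalies need fewer random splits to be separated from the rest of the data) can be sketched on one-dimensional frequency-like values. This toy scorer only illustrates that principle; it omits the VMD preprocessing, subsampling, and path-length normalization of a full Isolation Forest, and the numbers are invented, not Z24 data:

```python
import random

def isolation_depth(values, x, max_depth=10, rng=None):
    """Path length of point x in one random isolation tree over 1-D data:
    repeatedly split at a random threshold and keep the side containing x."""
    rng = rng or random
    depth, vals = 0, list(values)
    while depth < max_depth and len(vals) > 1 and min(vals) < max(vals):
        split = rng.uniform(min(vals), max(vals))
        vals = [v for v in vals if (v < split) == (x < split)]
        depth += 1
    return depth

def anomaly_score(values, x, n_trees=100, seed=0):
    """Average isolation depth over many trees; a shorter average path
    means the point is easier to isolate, i.e. more anomalous."""
    rng = random.Random(seed)
    return sum(isolation_depth(values, x, rng=rng) for _ in range(n_trees)) / n_trees

# Invented "natural frequency" readings: a tight cluster plus one outlier.
values = [3.9, 3.95, 4.0, 4.05, 4.1, 9.0]
```

With this data, `anomaly_score(values, 9.0)` comes out smaller than `anomaly_score(values, 4.0)`: the outlier is isolated in a couple of splits while points inside the cluster take more.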
Publisher: Elsevier BV
Date: 06-2021
Publisher: Springer Science and Business Media LLC
Date: 03-11-2021
Publisher: Elsevier BV
Date: 11-2022
DOI: 10.1016/J.COMPBIOMED.2022.106028
Abstract: Blood is made up of leukocytes (WBCs), erythrocytes (RBCs), and thrombocytes. The incidence of blood cancers is increasing rapidly; among them, leukemia is a well-known cancer that may lead to death. Leukemia is initiated by the unnecessary growth of immature WBCs present in the sponge tissues of bone marrow. It is generally analyzed by etiologists by examining slides of blood smear images under a microscope. Morphological features and blood cell counts help etiologists to detect leukemia. Due to late detection and the expensive instruments used for leukemia analysis, the death rate has risen significantly. The fluorescence-based cell sorting technique and manual recounts using a hemocytometer are error-prone and imprecise. Leukemia detection methods consist of pre-processing, segmentation, feature extraction, and classification. In this article, recent deep learning methodologies and challenges for leukemia detection are discussed. These methods help examine microscopic blood smear images and detect leukemia more accurately.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-06-2021
Publisher: Elsevier BV
Date: 10-2021
Publisher: Elsevier BV
Date: 06-2023
Publisher: Elsevier BV
Date: 11-2023
Publisher: Springer Science and Business Media LLC
Date: 09-2022
DOI: 10.1007/S00158-022-03318-6
Abstract: This paper investigates the performance of four multi-objective optimization algorithms, namely non-dominated sorting genetic algorithm II (NSGA-II), multi-objective particle swarm optimization (MOPSO), strength Pareto evolutionary algorithm II (SPEA2), and multi-objective multi-verse optimization (MVO), in developing an optimal reinforced concrete cantilever (RCC) retaining wall. The retaining wall design was based on two major requirements: geotechnical stability and structural strength. Optimality criteria were defined as reducing the total cost, weight, CO2 emission, etc. In this study, two sets of bi-objective strategies were considered: (1) minimum cost and maximum factor of safety, and (2) minimum weight and maximum factor of safety. The proposed method's efficiency was examined using two numerical retaining wall design examples, one with a base shear key and one without a base shear key. A sensitivity analysis was conducted on the variation of significant parameters, including backfill slope, the base soil's friction angle, and surcharge load. Three well-known measures (coverage set, diversity, and hypervolume) were selected to compare the algorithms' results, which were further assessed using basic statistical measures (i.e., min, max, standard deviation) and the Friedman test with a 95% level of confidence. The results demonstrated that NSGA-II has a higher Friedman rank in terms of coverage set for both cost-based and weight-based designs. SPEA2 and MOPSO outperformed both cost-based and weight-based solutions in terms of diversity in examples without and with the effects of a base shear key, respectively. However, based on the hypervolume measure, NSGA-II and MVO have a higher Friedman rank for examples without and with the effects of a base shear key, respectively, for both the cost-based and weight-based designs.
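All four algorithms compared above share one core comparison rule: Pareto dominance between candidate designs. A minimal sketch follows, assuming both objectives are minimized (e.g. cost and the negated factor of safety); the tuples are invented, not the paper's wall designs:

```python
def non_dominated(points):
    """Return the Pareto-optimal subset of objective vectors,
    assuming every objective is to be minimized."""
    def dominates(a, b):
        # a dominates b: no worse on every objective, strictly better on one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Invented (cost, -factor_of_safety) pairs for four candidate designs.
front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1)])
```

Here `(3, 4)` is dropped because `(2, 3)` is cheaper and safer at once; the remaining three designs form the trade-off front that measures like hypervolume and diversity then score.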
Publisher: Elsevier BV
Date: 09-2021
Publisher: Springer International Publishing
Date: 2022
Publisher: Springer International Publishing
Date: 02-12-2021
Publisher: Elsevier BV
Date: 03-2020
Publisher: Elsevier
Date: 2013
Publisher: Wiley
Date: 22-06-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2022
Publisher: Springer Science and Business Media LLC
Date: 25-06-2022
DOI: 10.1007/S10462-022-10173-W
Abstract: This study proposes the Fire Hawk Optimizer (FHO) as a novel metaheuristic algorithm based on the foraging behavior of whistling kites, black kites and brown falcons. These birds are termed Fire Hawks considering the specific actions they perform to catch prey in nature, specifically by means of setting fire. Utilizing the proposed algorithm, a numerical investigation was conducted on 233 mathematical test functions with dimensions of 2–100, and 150,000 function evaluations were performed for optimization purposes. For comparison, a total of ten different classical and new metaheuristic algorithms were utilized as alternative approaches. The statistical measurements include the best, mean, median, and standard deviation of 100 independent optimization runs, while well-known statistical analyses, such as Kolmogorov–Smirnov, Wilcoxon, Mann–Whitney, Kruskal–Wallis, and post-hoc analysis, were also conducted. The obtained results prove that the FHO algorithm exhibits better performance than the compared algorithms from the literature. In addition, two of the latest Competitions on Evolutionary Computation (CEC), namely CEC 2020 on bound-constrained problems and CEC 2020 on real-world optimization problems including the well-known mechanical engineering design problems, were considered for performance evaluation of the FHO algorithm, which further demonstrated the superior capability of the optimizer over other metaheuristic algorithms in the literature. The capability of the FHO is also evaluated on two real-size structural frames with 15 and 24 stories, in which the new method outperforms previously developed metaheuristics.
Publisher: IEEE
Date: 12-2014
Publisher: Springer Science and Business Media LLC
Date: 25-01-2022
DOI: 10.1007/S00158-021-03166-W
Abstract: Problems like improper sampling (sampling on unnecessary variables) and undefined prior distributions (or taking random priors) often occur in model updating. Any such limitations on model parameters can lead to lower accuracy and higher experimental costs (due to more iterations) of structural optimisation. In this work, we explored the effective dimensionality of the model updating problem by leveraging causal information. In order to utilise the causal structure between the parameters, we used Causal Bayesian Optimisation (CBO), a recent variant of Bayesian Optimisation, to integrate observational and intervention data. We also employed generative models to generate synthetic observational data, which helps in creating a better prior for surrogate models. In a case study of a coupled slab structure in a recreational building, updated modal frequencies were extracted from the finite element model of the structure and compared to measured frequencies from ambient vibration tests found in the literature. Mode shapes between experimental and predicted values were also compared using modal assurance criterion (MAC) percentages. The updated frequencies and MAC values obtained using the proposed model were reached in fewer iterations (which affects the experimental budget) than previous approaches that optimise the same parameters using the same data, demonstrating the impact of causal information on the experimental budget.
Publisher: Elsevier BV
Date: 2023
Publisher: Springer Science and Business Media LLC
Date: 16-05-2022
Publisher: IEEE
Date: 03-10-2022
Publisher: Informa UK Limited
Date: 19-05-2023
Publisher: Elsevier BV
Date: 06-2021
Publisher: Elsevier BV
Date: 06-2021
Publisher: MDPI AG
Date: 06-01-2021
DOI: 10.3390/ELECTRONICS10020101
Abstract: This paper presents a comprehensive survey of meta-heuristic optimization algorithms for text clustering applications and highlights their main procedures. These Artificial Intelligence (AI) algorithms are recognized as promising swarm intelligence methods due to their successful ability to solve machine learning problems, especially text clustering problems. This paper reviews all of the relevant literature on meta-heuristic-based text clustering applications, including many variants, such as basic, modified, hybridized, and multi-objective methods. In addition, the main procedures of text clustering and critical discussions are given. Hence, this review reports their advantages and disadvantages and recommends potential future research paths. The main keywords considered in this paper are text, clustering, meta-heuristic, optimization, and algorithm.
Publisher: Springer Science and Business Media LLC
Date: 13-09-2011
Publisher: Springer Science and Business Media LLC
Date: 02-05-2009
Publisher: Wiley
Date: 04-09-2012
DOI: 10.1002/TAL.1043
Publisher: Elsevier BV
Date: 10-2020
Publisher: SPIE
Date: 13-05-2019
DOI: 10.1117/12.2517440
Publisher: Springer Singapore
Date: 2021
Publisher: MDPI AG
Date: 10-04-2020
DOI: 10.3390/INFO11040203
Abstract: In the past decade, low-power-consumption schemes have suffered degraded communication performance, failing to maintain the trade-off between resource and power consumption. In this paper, management of resources and power consumption in small cell orthogonal frequency-division multiple access (OFDMA) networks is enacted using the sleep mode selection method. The sleep mode selection method uses both power and resource management, where the former is responsible for a heterogeneous network, and the latter is managed using a deactivation algorithm. Further, to improve communication performance during sleep mode selection, a semi-Markov sleep mode selection decision-making process is developed. Spectrum reuse maximization is achieved using a small cell deactivation strategy that potentially identifies and eliminates the sleep mode cells. The performance of this hybrid technique is evaluated and compared against benchmark techniques. The results demonstrate that the proposed hybrid performance model shows effective power and resource management with reduced computational cost compared with benchmark techniques.
Publisher: IEEE
Date: 07-2020
Publisher: Springer Science and Business Media LLC
Date: 08-08-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 09-2021
Publisher: Springer Science and Business Media LLC
Date: 22-09-2020
Publisher: Elsevier BV
Date: 10-2022
DOI: 10.1016/J.COMPBIOMED.2022.105990
Abstract: Brain tumors are the most frequently occurring and severe type of cancer, with a life expectancy of only a few months in most advanced stages. As a result, planning the best course of therapy is critical to improve a patient's ability to fight cancer and their quality of life. Various imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound imaging, are commonly employed to assess a brain tumor. This research proposes a novel technique for extracting and classifying tumor features in 3D brain slice images. After input images are processed for noise removal, resizing, and smoothing, features of the brain tumor are extracted using a Volume of Interest (VOI). The extracted features are then classified using the Deformable Hierarchical Heuristic Model-Deep Deconvolutional Residual Network (DHHM-DDRN) based on surfaces, curves, and geometric patterns. Experimental results show that the proposed approach obtained an accuracy of 95%, a DSC of 83%, a precision of 80%, a recall of 85%, and an F1 score of 55% for classifying brain cancer features.
Publisher: Springer Science and Business Media LLC
Date: 12-2011
Publisher: Informa UK Limited
Date: 09-2011
Publisher: Springer Science and Business Media LLC
Date: 30-08-2011
Publisher: IEEE
Date: 18-07-2022
Publisher: Elsevier BV
Date: 2012
Publisher: Springer Science and Business Media LLC
Date: 15-07-2022
Publisher: Center for Open Science
Date: 16-10-2020
Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancements of sophisticated hybrid deep learning models.
Publisher: MDPI AG
Date: 18-12-2020
DOI: 10.3390/A13120345
Abstract: Text clustering is one of the efficient unsupervised learning techniques used to partition a huge number of text documents into a subset of clusters, such that each cluster contains similar documents while documents in different clusters are dissimilar. Nature-inspired optimization algorithms have been successfully used to solve various optimization problems, including text document clustering problems. In this paper, a comprehensive review is presented of the most relevant nature-inspired algorithms that have been used to solve the text clustering problem. Moreover, comprehensive experiments are conducted and analyzed to show the performance of the common well-known nature-inspired optimization algorithms in solving text document clustering problems, including the Harmony Search (HS) Algorithm, Genetic Algorithm (GA), Particle Swarm Optimization (PSO) Algorithm, Ant Colony Optimization (ACO), Krill Herd Algorithm (KHA), Cuckoo Search (CS) Algorithm, Gray Wolf Optimizer (GWO), and Bat-inspired Algorithm (BA). Seven text benchmark datasets are used to validate the performance of the tested algorithms. The results showed that the performance of the well-known nature-inspired optimization algorithms is almost the same, with slight differences. For improvement purposes, new modified versions of the tested algorithms can be proposed and tested to tackle text clustering problems.
Publisher: MDPI AG
Date: 12-03-2021
DOI: 10.3390/APP11062529
Abstract: In this research, different amounts of nano-MgO were added to normal concrete samples, and the effect of these particles on the durability of the samples under freeze and thaw conditions was investigated. The compressive and tensile strength as well as the permeability of concrete containing nanoparticles were measured and compared to those of plain samples (without nanoparticles). The age of concrete samples, percentage of nanoparticles, and water-to-binder ratio are the variables of the current research. Based on the results, the addition of 1% nano-MgO to the normal concrete with a water-to-binder ratio of 0.44 can reduce the permeability up to 63% and improve the compressive and tensile strengths by 9.12% and 10.6%, respectively. Gene Expression Programming (GEP) is applied, and three formulations are derived for the prediction of mechanical properties of concrete containing nano-MgO. In this method, 80% of the dataset is used randomly for the training process and 20% is utilized for testing the formulation. The results obtained by GEP showed acceptable accuracy.
Publisher: IEEE
Date: 04-12-2022
Publisher: Springer Berlin Heidelberg
Date: 2013
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: MDPI AG
Date: 04-01-2022
DOI: 10.3390/ELECTRONICS11010157
Abstract: The healthcare industry is being transformed by the Internet of Things (IoT), as it provides wide connectivity among physicians, medical devices, clinical and nursing staff, and patients to simplify the task of real-time monitoring. As the network is vast and heterogeneous, opportunities and challenges are presented in gathering and sharing information. Focusing on patient information such as health status, medical devices used by such patients must be protected to ensure safety and privacy. Healthcare information is confidentially shared among experts for analyzing healthcare and to provide treatment on time for patients. Cryptographic and biometric systems, including deep-learning (DL) techniques to authenticate users and detect anomalies, are widely used to provide security for medical systems. As sensors in the network are energy-restricted devices, security and efficiency must be balanced, which is the most important consideration when deploying a security system based on deep-learning approaches. Hence, in this work, an innovative framework, the deep Q-learning-based neural network with privacy preservation method (DQ-NNPP), was designed to protect data transmission from external threats with less encryption and decryption time. This method is used to process patient data, which reduces network traffic. This process also reduces the cost and error of communication. Comparatively, the proposed model outperformed some standard approaches, such as the secure and anonymous biometric-based user authentication scheme (SAB-UAS), MSCryptoNet, and privacy-preserving disease prediction (PPDP). Specifically, the proposed method achieved an accuracy of 93.74%, sensitivity of 92%, specificity of 92.1%, communication overhead of 67.08%, 58.72 ms encryption time, and 62.72 ms decryption time.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2022
Publisher: Elsevier BV
Date: 11-2021
Publisher: Elsevier BV
Date: 05-2021
Publisher: American Society of Civil Engineers (ASCE)
Date: 09-2012
Publisher: Elsevier BV
Date: 04-2020
Publisher: Springer Science and Business Media LLC
Date: 13-07-2016
Publisher: Springer Science and Business Media LLC
Date: 25-02-2021
Publisher: Elsevier BV
Date: 06-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2023
Publisher: IEEE
Date: 05-12-2021
Publisher: American Society of Civil Engineers (ASCE)
Date: 08-2023
Publisher: Elsevier
Date: 2020
Publisher: Inderscience Publishers
Date: 2013
Publisher: MDPI AG
Date: 06-12-2022
DOI: 10.3390/PR10122615
Abstract: Evolutionary Population Dynamics (EPD) refers to eliminating poor individuals in nature, which is the opposite of survival of the fittest. Although this method can improve the median of the whole population of meta-heuristic algorithms, it suffers from poor exploration capability when handling high-dimensional problems. This paper proposes a novel EPD operator to improve the search process. In other words, as the primary EPD mainly improves the fitness of the worst individuals in the population, and hence we name it the Fitness-Based EPD (FB-EPD), our proposed EPD mainly improves the diversity of the best individuals, and hence we name it the Diversity-Based EPD (DB-EPD). The proposed method is applied to the Grey Wolf Optimizer (GWO) and named DB-GWO-EPD. In this algorithm, the three most diversified individuals are first identified at each iteration, and then half of the best-fitted individuals are forced to be eliminated and repositioned around these diversified agents with equal probability. This process can free the merged best individuals located in a closely populated region and transfer them to the diversified and, thus, less-densely populated regions of the search space. This approach is frequently employed to make the search agents explore the whole search space. The proposed DB-GWO-EPD is tested on 13 high-dimensional and shifted classical benchmark functions as well as 29 test problems included in the CEC2017 test suite, and four constrained engineering problems. The results obtained by the proposed algorithm on the classical test problems are compared to GWO, FB-GWO-EPD, and four other popular and newly proposed optimization algorithms, including Aquila Optimizer (AO), Flow Direction Algorithm (FDA), Arithmetic Optimization Algorithm (AOA), and Gradient-based Optimizer (GBO).
The experiments demonstrate the significant superiority of the proposed algorithm on a majority of the test functions, recommending the application of the proposed EPD operator to any other meta-heuristic whose performance needs to be improved.
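The repositioning step the abstract describes can be sketched in a few lines. The 1-D encoding of individuals, the offset scale, and the example fitness/diversity measures below are assumptions for illustration, not the paper's actual DB-GWO-EPD operator:

```python
import random

def db_epd(population, fitness, diversity, rng=None):
    """Diversity-Based EPD sketch (per the abstract): the best-fitted half
    of the population is repositioned around the three most diversified
    individuals, each anchor chosen with equal probability."""
    rng = rng or random.Random(0)
    # Three most diversified individuals (highest diversity measure)
    anchors = sorted(population, key=diversity, reverse=True)[:3]
    # Best-fitted half (lower fitness = better, minimisation assumed)
    ranked = sorted(population, key=fitness)
    best_half, rest = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]
    repositioned = []
    for _ in best_half:
        anchor = rng.choice(anchors)     # equal probability among anchors
        step = rng.uniform(-1.0, 1.0)    # small random offset (assumed scale)
        repositioned.append(anchor + step)
    return repositioned + rest

# Toy 1-D population; fitness = |x| (minimise), diversity = distance to mean.
pop = [0.1, 0.2, 0.15, 5.0, -4.0, 10.0]
mean = sum(pop) / len(pop)
new = db_epd(pop, fitness=abs, diversity=lambda x: abs(x - mean))
```

On this toy population the three tightly clustered best individuals (0.1, 0.15, 0.2) are scattered near the outlying anchors, which is the exploration effect the operator is designed to produce.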
Publisher: Elsevier BV
Date: 11-2020
Publisher: Wiley
Date: 12-07-2012
Publisher: Elsevier BV
Date: 02-2016
Publisher: IEEE
Date: 12-2020
Publisher: MDPI AG
Date: 27-09-2022
DOI: 10.3390/ELECTRONICS11193075
Abstract: Chest and lung diseases are among the most serious chronic diseases in the world, occurring as a result of factors such as smoking, air pollution, or bacterial infection that expose the respiratory system and chest to serious disorders. Chest diseases lead to a natural weakness in the respiratory system, which requires the patient's care and attention to alleviate the problem. Countries are interested in encouraging medical research and monitoring the spread of communicable diseases, and have therefore urged researchers to perform studies to curb the spread of these diseases and to devise methods for swiftly and readily detecting and distinguishing lung diseases. In this paper, we propose a hybrid architecture of contrast-limited adaptive histogram equalization (CLAHE) and a deep convolutional network for the classification of lung diseases. We used X-ray images to create a convolutional neural network (CNN) for early identification and categorization of lung diseases. Initially, the proposed method implemented a support vector machine to classify the images with and without the CLAHE equalizer, and the obtained results were compared with CNN networks. Later, two different experiments were implemented with a hybrid architecture of deep CNN networks and CLAHE as a preprocessing step for image enhancement. The experimental results indicate that the suggested hybrid architecture outperforms traditional methods by roughly 20% in terms of accuracy.
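CLAHE builds on classical histogram equalization by adding tile-wise processing and a clip limit; the base equalization step alone (global and un-clipped, so not CLAHE itself) can be sketched as follows, on a tiny invented grayscale array rather than an actual X-ray:

```python
def equalize(image, levels=256):
    """Plain global histogram equalization on a 2-D grayscale image:
    remap each pixel through the cumulative intensity distribution.
    (Constant images are not handled; CLAHE would add tiles + clipping.)"""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution of pixel counts
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    def remap(p):
        # Stretch the CDF to cover the full intensity range
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A 2x2 toy "image" whose narrow intensity range gets stretched to 0..255.
out = equalize([[50, 50], [100, 150]])
```

The narrow input range 50–150 is stretched across the whole 0–255 scale, which is the contrast boost the pipeline above relies on before classification.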
Publisher: Elsevier BV
Date: 06-2014
Publisher: Springer Science and Business Media LLC
Date: 30-08-2011
Publisher: Inderscience Publishers
Date: 2012
Publisher: Hindawi Limited
Date: 13-10-2022
DOI: 10.1002/STC.3112
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2019
Publisher: Elsevier BV
Date: 04-2023
Publisher: American Society of Civil Engineers (ASCE)
Date: 09-2017
Publisher: Elsevier BV
Date: 03-2012
Publisher: Springer Science and Business Media LLC
Date: 04-02-2023
Publisher: Springer Science and Business Media LLC
Date: 14-07-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Hindawi Limited
Date: 2018
DOI: 10.1155/2018/2740817
Abstract: This paper aims at developing new theory-driven biomarkers by implementing and evaluating novel techniques from resting-state scans that can be used in relapse prediction for nicotine-dependent patients and future treatment efficacy. Two classes of patients were studied. One class took the drug N-acetylcysteine and the other class took a placebo. Then, the patients underwent a double-blind smoking cessation treatment and the resting-state fMRI scans of their brains before and after treatment were recorded. The scientific research goal of this study was to interpret the fMRI connectivity maps based on machine learning algorithms to predict the patient who will relapse and the one who will not. In this regard, the feature matrix was extracted from the image slices of brain employing voxel selection schemes and data reduction algorithms. Then, the feature matrix was fed into the machine learning classifiers including optimized CART decision tree and Naive-Bayes classifier with standard and optimized implementation employing 10-fold cross-validation. Out of all the data reduction techniques and the machine learning algorithms employed, the best accuracy was obtained using the singular value decomposition along with the optimized Naive-Bayes classifier. This gave an accuracy of 93% with sensitivity-specificity of 99% which suggests that the relapse in nicotine-dependent patients can be predicted based on the resting-state fMRI images. The use of these approaches may result in clinical applications in the future.
Publisher: Elsevier BV
Date: 11-2009
Publisher: Elsevier BV
Date: 09-2022
DOI: 10.1016/J.COMPBIOMED.2022.105894
Abstract: Acute Lymphoblastic Leukemia (ALL) is cancer in which bone marrow overproduces undeveloped lymphocytes. Over 6500 cases of ALL are diagnosed every year in the United States in both adults and children, accounting for around 25% of pediatric cancers, and the trend continues to rise. With the advancements of AI and big data analytics, early diagnosis of ALL can be used to aid the clinical decisions of physicians and radiologists. This research proposes a deep neural network-based (ALNett) model that employs depth-wise convolution with different dilation rates to classify microscopic white blood cell images. Specifically, the cluster layers encompass convolution and max-pooling followed by a normalization process that provides enriched structural and contextual details to extract robust local and global features from the microscopic images for the accurate prediction of ALL. The performance of the model was compared with various pre-trained models, including VGG16, ResNet-50, GoogleNet, and AlexNet, based on precision, recall, accuracy, F1 score, loss accuracy, and receiver operating characteristic (ROC) curves. Experimental results showed that the proposed ALNett model yielded the highest classification accuracy of 91.13% and an F1 score of 0.96 with less computational complexity. ALNett demonstrated promising ALL categorization and outperformed the other pre-trained models.
Publisher: Elsevier BV
Date: 04-2022
Publisher: Elsevier BV
Date: 09-2021
Publisher: arXiv
Date: 2022
Publisher: Elsevier BV
Date: 09-2011
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: MDPI AG
Date: 13-07-2021
DOI: 10.3390/ELECTRONICS10141670
Abstract: Cyberstalking is a growing anti-social problem being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has technically been investigated by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has empirically received less attention from the research community. This paper attempts to address this gap and develop a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. Data were collected from 757 respondents in Jordanian universities. The study uses a quantitative approach and structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the cyberstalking type. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influences cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization in societies, providing a valuable understanding of its prevalence. This can assist researchers and practitioners in future research on cyberstalking victimization.
Publisher: Springer Science and Business Media LLC
Date: 02-2018
Publisher: Hindawi Limited
Date: 2014
DOI: 10.1155/2014/474289
Abstract: This paper presents a new multigene genetic programming (MGGP) approach for estimation of elastic modulus of concrete. The MGGP technique models the elastic modulus behavior by integrating the capabilities of standard genetic programming and classical regression. The main aim is to derive precise relationships between the tangent elastic moduli of normal and high strength concrete and the corresponding compressive strength values. Another important contribution of this study is to develop a generalized prediction model for the elastic moduli of both normal and high strength concrete. Numerous concrete compressive strength test results are obtained from the literature to develop the models. A comprehensive comparative study is conducted to verify the performance of the models. The proposed models perform superior to the existing traditional models, as well as those derived using other powerful soft computing tools.
Publisher: Elsevier BV
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2010
Publisher: IEEE
Date: 06-2017
Publisher: Springer International Publishing
Date: 24-02-2012
Publisher: arXiv
Date: 2012
Publisher: Elsevier BV
Date: 07-2014
DOI: 10.1016/J.ISATRA.2014.03.018
Abstract: This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.
Publisher: Elsevier BV
Date: 2013
Publisher: Springer Science and Business Media LLC
Date: 02-07-2021
Publisher: Elsevier BV
Date: 04-2018
Publisher: Springer Science and Business Media LLC
Date: 05-2021
DOI: 10.1007/S42235-021-0050-Y
Abstract: This paper proposes a new stochastic optimizer called the Colony Predation Algorithm (CPA) based on the corporate predation of animals in nature. CPA utilizes a mathematical mapping following the strategies used by animal hunting groups, such as dispersing prey, encircling prey, supporting the most likely successful hunter, and seeking another target. Moreover, the proposed CPA introduces new features of a unique mathematical model that uses a success rate to adjust the strategy and simulate hunting animals’ selective abandonment behavior. This paper also presents a new way to deal with cross-border situations, whereby the optimal position value of a cross-border situation replaces the cross-border value to improve the algorithm’s exploitation ability. The proposed CPA was compared with state-of-the-art metaheuristics on a comprehensive set of benchmark functions for performance verification and on five classical engineering design problems to evaluate the algorithm’s efficacy in optimizing engineering problems. The results show that the proposed algorithm exhibits competitive, superior performance in different search landscapes over the other algorithms. Moreover, the source code of the CPA will be publicly available after publication.
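The boundary-handling rule the abstract describes (an out-of-bounds "cross-border" component is replaced by the best-so-far solution's value for that component, rather than being clipped) can be sketched as follows; the vectors and bounds are illustrative, not the paper's benchmark data.

```python
def repair(candidate, best, lower, upper):
    """Replace each out-of-bounds component with the best solution's value."""
    return [b if (c < lo or c > hi) else c
            for c, b, lo, hi in zip(candidate, best, lower, upper)]

# Illustrative 4-dimensional candidate with two components outside [-5, 5].
cand = [1.5, -7.0, 3.0, 12.0]
best = [0.5, -2.0, 1.0, 4.0]
repaired = repair(cand, best, lower=[-5] * 4, upper=[5] * 4)
```

Compared with plain clipping, this pulls violating components toward a known good region of the search space, which is the exploitation benefit the abstract claims.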
Publisher: Elsevier BV
Date: 11-2017
Publisher: Springer Science and Business Media LLC
Date: 08-05-2020
Publisher: Elsevier BV
Date: 2022
DOI: 10.1016/J.COMPBIOMED.2021.105100
Abstract: Drug recall is a critical issue for manufacturing companies, as a manufacturer might face criticism and severe business downfall due to a defective drug. A defective drug is a highly detrimental issue, as it can cost several lives. Therefore, recalling the drug becomes one of the most sensitive issues in the pharmaceutical industry. This paper presents a blockchain-enabled network that allows manufacturers to effectively monitor a drug while in the supply chain with improved security and transparency throughout the process. The study also tries to minimize the cost and time sustained by the manufacturing company to transfer the drug to the end-user by proposing forward and backward supply chain mathematical models. Specifically, the forward chain model supports drug delivery from the manufacturer to the end-user in less time with a reliable transport mode. The backward supply chain model explicitly focuses on reducing the extra time and cost incurred to the manufacturer in pursuit of recalling the defective drug. Moreover, a real-time implementation of the proposed blockchain-enabled supply chain management system using the Hyperledger Composer is done to demonstrate the transparency of the process.
Publisher: Springer Science and Business Media LLC
Date: 29-09-2022
DOI: 10.1057/S41599-022-01325-Y
Abstract: The complexity of business decision-making has increased over the years. It is essential for managers to gain a confident understanding of their business environments in order to make successful decisions. With the growth of opinion-rich web resources such as social media, discussion forums, review sites, news corpora, and blogs available on the internet, product and service reviews have become an essential source of information. In a data-driven world, they will improve services and operational insights to achieve real business benefits and help enterprises remain competitive. Despite the prevalence of textual data, few studies have demonstrated the effectiveness of real-time text mining and reporting tools in firms and organizations. To address this aspect of decision-making, we have developed and evaluated an unsupervised learning system to automatically extract and classify topics and their emotion score in text streams. Data were collected from commercial websites, open-access databases, and social networks to train the model. In the experiment, the polarity score was quantified at four different levels: word, sentence, paragraph, and the entire text using Latent Dirichlet Allocation (LDA). Using subjective data mining, we demonstrate how to extract, summarize, and track various aspects of information from the Web and help traditional information retrieval (IR) systems to capture more information. An opinion tracking system presented by our model extracts subjective information, classifies them, and tracks opinions by utilizing location, time, and reviewers’ positions. Using the online-offline data collection technique, we can update the library topic in real-time to provide users with a market opinion tracker. For marketing or economic research, this approach may be useful. In the experiment, the new model is applied to a case study to demonstrate how the business process improves.
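The four-level polarity quantification mentioned in the abstract (word, sentence, paragraph, entire text) can be sketched as simple aggregation over a sentiment lexicon. The tiny lexicon and the review text below are toy stand-ins; the paper combines this kind of scoring with LDA topic extraction on real web data.

```python
# Toy lexicon; a real system would use a learned or curated sentiment lexicon.
LEXICON = {"good": 1.0, "great": 2.0, "excellent": 2.0,
           "bad": -1.0, "poor": -1.5, "terrible": -2.0}

def word_scores(text):
    """Word level: one polarity value per token."""
    return [LEXICON.get(w.strip(".,!?").lower(), 0.0) for w in text.split()]

def sentence_scores(paragraph):
    """Sentence level: sum of word polarities per sentence."""
    sentences = [s for s in paragraph.replace("!", ".").split(".") if s.strip()]
    return [sum(word_scores(s)) for s in sentences]

def paragraph_score(paragraph):
    """Paragraph level: sum over its sentences."""
    return sum(sentence_scores(paragraph))

def text_score(text):
    """Document level: sum over blank-line-separated paragraphs."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return sum(paragraph_score(p) for p in paragraphs)

review = "The service was great. The food was terrible.\n\nOverall a good value."
```

Tracking these scores over time and location, as the abstract describes, would amount to grouping the document-level values by review metadata.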
Publisher: Springer Science and Business Media LLC
Date: 08-03-2013
Publisher: Elsevier BV
Date: 2020
Publisher: Springer International Publishing
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-01-2020
Publisher: American Society of Civil Engineers (ASCE)
Date: 2023
Publisher: Springer Science and Business Media LLC
Date: 04-07-2022
Publisher: Elsevier BV
Date: 07-2020
Publisher: ACM
Date: 23-04-2023
Publisher: Vilnius Gediminas Technical University
Date: 25-08-2015
DOI: 10.3846/13923730.2014.897986
Abstract: A new metaheuristic optimization algorithm, called Krill Herd (KH), has recently been proposed by Gandomi and Alavi (2012). In this study, KH is introduced for solving engineering optimization problems. For further verification, KH is applied to six design problems reported in the literature. Further, the performance of the KH algorithm is compared with that of various algorithms representative of the state of the art in the area. The comparisons show that the results obtained by KH are better than the best solutions obtained by the existing methods.
Publisher: Emerald
Date: 24-02-2012
DOI: 10.1108/02644401211206043
Abstract: The purpose of this paper is to develop new constitutive models to predict the soil deformation moduli using multi expression programming (MEP). The soil deformation parameters formulated are secant (Es) and reloading (Er) moduli. MEP is a new branch of classical genetic programming. The models obtained using this method are developed upon a series of plate load tests conducted on different soil types. The best models are selected after developing and controlling several models with different combinations of the influencing parameters. The validation of the models is verified using several statistical criteria. For further verification, sensitivity and parametric analyses are carried out. The results indicate that the proposed models give precise estimations of the soil deformation moduli. The Es prediction model provides considerably better results than the model developed for Er. The Es formulation outperforms several empirical models found in the literature. The validation phases confirm the efficiency of the models for their general application to the soil moduli estimation. In general, the derived models are suitable for fine‐grained soils. These equations may be used by designers to check the general validity of the laboratory and field test results or to control the solutions developed by more in‐depth deterministic analyses.
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2020
Publisher: Wiley
Date: 17-02-2010
DOI: 10.1002/HYP.7511
Publisher: Springer Singapore
Date: 2017
Publisher: MDPI AG
Date: 14-12-2022
DOI: 10.3390/BDCC6040157
Abstract: Big data applications and analytics are vital in proposing ultimate strategic decisions. The existing literature emphasizes that big data applications and analytics can empower those who apply Big Data Analytics during the COVID-19 pandemic. This paper reviews the existing literature on big data applications before and during COVID-19. A comparison of big data applications in the pre- and peri-pandemic periods is presented and expanded to four highly recognized industry fields: Healthcare, Education, Transportation, and Banking. The effectiveness of the four major types of data analytics across the mentioned industries is also discussed. Hence, this paper provides an illustrative description of the importance of big data applications in the era of COVID-19, as well as aligning the applications to their relevant big data analytics models. This review concludes that applying the ultimate big data applications and their associated data analytics models can help organizations overcome the significant limitations they faced during one of the most fateful pandemics worldwide. Future work will conduct a systematic literature review and a comparative analysis of existing Big Data systems and models, and will investigate the critical challenges of Big Data Analytics and applications during the COVID-19 pandemic.
Publisher: Springer Science and Business Media LLC
Date: 24-01-2022
Publisher: arXiv
Date: 2022
Publisher: Elsevier BV
Date: 04-2010
Publisher: Elsevier
Date: 2020
Publisher: Informa UK Limited
Date: 14-11-2022
Publisher: Elsevier BV
Date: 09-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Springer International Publishing
Date: 10-10-2017
Publisher: Mathematical Sciences Publishers
Date: 03-12-2010
Publisher: Elsevier BV
Date: 02-2023
Publisher: Springer Science and Business Media LLC
Date: 20-07-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2023
Publisher: IEEE
Date: 18-07-2022
Publisher: Springer Berlin Heidelberg
Date: 2011
Publisher: Research Square Platform LLC
Date: 09-08-2022
DOI: 10.21203/RS.3.RS-1929133/V1
Abstract: The next release problem (NRP) refers to selecting what to implement in the next release of a software product with respect to the expected revenues; specifically, constraints such as limited budgets mean that the total cost of the next release should be minimized. This paper investigates the comparative performance of nineteen state-of-the-art evolutionary multi-objective algorithms, including NSGA-II, rNSGA-II, NSGA-III, MOEAD, EFRRR, tDEA, KnEA, MOMBIII, SPEA2, RVEA, NNIA, HypE, ANSGA-III, BiGE, GrEA, IDBEA, SPEAR, SPEA2SDE, and MOPSO, on this problem. The problem was formulated to maximize customer satisfaction and minimize the total required cost. Three indicators, namely hyper-volume (HV), spread, and runtime, were examined to compare the algorithms. Two types of datasets, i.e., classic and realistic data, from small to large scale were also examined to verify the applicability of the results. Overall, NSGA-II exhibited the best CPU run time at all test scales. The results also show that, for the first- and second-best algorithms in HV and spread (NNIA and SPEAR), most HV values for NNIA are greater than 0.708 and smaller than 1, while the HV values for SPEAR vary between 0.706 and 0.708. Finally, conclusions and directions for future work are discussed.
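The hyper-volume (HV) indicator used in the comparison can be sketched for a two-objective minimisation front: HV is the area dominated by the front and bounded by a reference point. The front and reference point below are illustrative, not values from the study.

```python
def hypervolume_2d(front, ref):
    """HV for a 2-D minimisation front: area dominated by the points,
    bounded above by the reference point ref = (r1, r2)."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sorted by f1 ascending
        if f2 < prev_f2:          # skip points dominated by an earlier one
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))
```

Larger HV means a front that is both closer to the true Pareto front and better spread, which is why HV and spread together are used to rank the nineteen algorithms.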
Publisher: American Society of Civil Engineers (ASCE)
Date: 09-2018
Publisher: Elsevier
Date: 2013
Publisher: Elsevier BV
Date: 10-2015
Publisher: Elsevier BV
Date: 10-2020
Publisher: Springer Science and Business Media LLC
Date: 13-09-2012
Publisher: Elsevier BV
Date: 12-2012
Publisher: Elsevier BV
Date: 10-2023
Publisher: Springer Science and Business Media LLC
Date: 27-12-2023
DOI: 10.1007/S00158-022-03435-2
Abstract: To solve complex real-world problems, heuristics and concept-based approaches can be used to incorporate information into the problem. In this study, a concept-based approach called variable functioning (Fx) is introduced to reduce the number of optimization variables and narrow down the search space. In this method, the relationships among one or more subsets of variables are defined with functions using information available prior to optimization; thus, the function variables are optimized instead of the original variables in the search process. By using the problem structure analysis technique and engineering expert knowledge, the Fx method is used to enhance the steel frame design optimization process as a complex real-world problem. Herein, the proposed approach was coupled with particle swarm optimization and differential evolution algorithms and then applied to three case studies. The algorithms optimize the case studies by considering the relationships among column cross-section areas. The results show that Fx can significantly improve both the convergence rate and the final design of a frame structure, even if it is only used for seeding.
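The variable-functioning idea can be sketched as optimising the parameters of a generating function instead of the raw design variables. Here ten column areas are forced to follow a linear taper `a + b*storey`, so the search runs in 2 dimensions instead of 10. The linear taper, the target areas, and the random-search optimiser are all illustrative assumptions, not the paper's frame model or its PSO/DE setup.

```python
import random

N_STOREYS = 10
TARGET = [30 - 2 * i for i in range(N_STOREYS)]   # illustrative "ideal" areas

def areas_from_params(a, b):
    """Fx: a 2-parameter function generates all 10 design variables."""
    return [a + b * i for i in range(N_STOREYS)]

def objective(areas):
    """Stand-in objective: squared distance to the target design."""
    return sum((x - t) ** 2 for x, t in zip(areas, TARGET))

def random_search(iters=2000, seed=1):
    """Search the 2-D (a, b) parameter space instead of the 10-D area space."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(iters):
        a, b = rng.uniform(0, 50), rng.uniform(-5, 5)
        f = objective(areas_from_params(a, b))
        if f < best[0]:
            best = (f, (a, b))
    return best

best_f, (a, b) = random_search()
```

The payoff is exactly what the abstract claims: the same search budget covers a 2-D space far more densely than a 10-D one, improving convergence even when Fx is only used to seed a full optimizer.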
Publisher: SAGE Publications
Date: 2014
DOI: 10.1155/2014/843730
Publisher: Elsevier BV
Date: 2022
Publisher: Elsevier
Date: 2021
Publisher: Springer Science and Business Media LLC
Date: 25-04-2023
Publisher: Elsevier BV
Date: 07-2011
Publisher: Elsevier BV
Date: 06-2022
DOI: 10.1016/J.COMPBIOMED.2022.105458
Abstract: Machine learning (ML) methods have been used extensively in recent years to solve various complex challenges in application areas such as medical, financial, environmental, marketing, security, and industrial applications. ML methods are characterized by their ability to examine large amounts of data, discover interesting relationships, provide interpretation, and identify patterns. ML can help enhance the reliability, performance, predictability, and accuracy of diagnostic systems for many diseases. This survey provides a comprehensive review of the use of ML in the medical field, highlighting standard technologies and how they affect medical diagnosis. Five major medical applications are discussed in depth, focusing on adapting ML models to solve problems in cancer, medical chemistry, the brain, medical imaging, and wearable sensors. Finally, this survey provides valuable references and guidance for researchers, practitioners, and decision-makers framing future research and development directions.
Publisher: Elsevier BV
Date: 03-2011
Publisher: Wiley
Date: 22-01-2022
DOI: 10.1111/JCMM.17098
Abstract: There is an unmet need for models for early prediction of morbidity and mortality of Coronavirus disease‐19 (COVID‐19). We aimed to a) identify complement‐related genetic variants associated with the clinical outcomes of ICU hospitalization and death, b) develop an artificial neural network (ANN) predicting these outcomes and c) validate whether complement‐related variants are associated with an impaired complement phenotype. We prospectively recruited consecutive adult patients of Caucasian origin, hospitalized due to COVID‐19. Through targeted next‐generation sequencing, we identified variants in complement factor H/CFH, CFB, CFH‐related, CFD, CD55, C3, C5, CFI, CD46, thrombomodulin/THBD, and A Disintegrin and Metalloproteinase with Thrombospondin motifs (ADAMTS13). Among 381 variants in 133 patients, we identified 5 critical variants associated with severe COVID‐19: rs2547438 (C3), rs2250656 (C3), rs1042580 (THBD), rs800292 (CFH) and rs414628 (CFHR1). Using age, gender and presence or absence of each variant, we developed an ANN predicting morbidity and mortality in 89.47% of the examined population. Furthermore, THBD and C3a levels were significantly increased in severe COVID‐19 patients and those harbouring relevant variants. Thus, we reveal for the first time an ANN accurately predicting ICU hospitalization and death in COVID‐19 patients, based on genetic variants in complement genes, age and gender. Importantly, we confirm that genetic dysregulation is associated with impaired complement phenotype.
Publisher: Elsevier BV
Date: 02-2022
Publisher: Elsevier BV
Date: 11-2019
DOI: 10.1016/J.NEUNET.2019.07.021
Abstract: This paper proposes a new personalised prognostic/diagnostic system that supports classification, prediction and pattern recognition when both static and dynamic/spatiotemporal features are presented in a dataset. The system is based on a proposed clustering method (named d2WKNN) for optimal selection of neighbouring samples to an individual with respect to the integration of both static (vector-based) and temporal individual data. The most relevant samples to an individual are selected to train a Personalised Spiking Neural Network (PSNN) that learns from sets of streaming data to capture the space and time association patterns. The generated time-dependent patterns resulted in a higher accuracy of classification/prediction (80% to 93%) when compared with global modelling and conventional methods. In addition, the PSNN models can support interpretability by creating personalised profiling of an individual. This contributes to a better understanding of the interactions between features. Therefore, an end-user can comprehend what interactions in the model have led to a certain decision (outcome). The proposed PSNN model is an analytical tool, applicable to several real-life health applications, where different data domains describe a person's health condition. The system was applied to two case studies: (1) classification of spatiotemporal neuroimaging data for the investigation of individual response to treatment and (2) prediction of risk of stroke with respect to temporal environmental data. For both datasets, besides the temporal data, static health data were also available. The hyper-parameters of the proposed system, including the PSNN models and the d2WKNN clustering parameters, are optimised for each individual.
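The personalised-modelling scheme in the abstract (select the most relevant samples for an individual, then train a model on just that neighbourhood) can be sketched as below. The majority-vote local model is a stand-in for the paper's PSNN, and the plain Euclidean distance on static features simplifies the d2WKNN integration of static and temporal data; the data points are illustrative.

```python
import math
from collections import Counter

def nearest_neighbours(train_X, query, k):
    """Indices of the k training samples closest to the individual (query)."""
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(train_X[i], query))
    return order[:k]

def personalised_predict(train_X, train_y, query, k=3):
    """Train a tiny local model (here: majority vote) on the neighbourhood."""
    idx = nearest_neighbours(train_X, query, k)
    return Counter(train_y[i] for i in idx).most_common(1)[0][0]

# Illustrative static feature vectors and risk labels.
train_X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
train_y = ["low", "low", "low", "high", "high", "high"]
```

Because a fresh neighbourhood (and in the paper, a fresh PSNN with individually tuned hyper-parameters) is built per individual, the model can also be inspected per individual, which is the interpretability benefit the abstract highlights.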
Publisher: Elsevier BV
Date: 09-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2020
Publisher: Elsevier BV
Date: 12-2022
DOI: 10.1016/J.ENVRES.2022.114286
Abstract: Due to the implications of poly- and perfluoroalkyl substances (PFAS) for the environment and public health, great attention has recently been paid to finding innovative materials and methods for PFAS removal. In this work, PFAS is treated as a universal contaminant that can be found in many wastewater streams. Conventional materials and processes used to remove and degrade PFAS are not capable enough to address the issue, particularly when it comes to eliminating short-chain PFAS. This is mainly due to the large number of complex parameters involved in both material and process design. Here, we took advantage of artificial intelligence to introduce a model (XGBoost) in which material and process factors are considered simultaneously. This research applies a machine learning approach, using data collected from reported articles, to predict the PFAS removal factors. The XGBoost modeling provided accurate adsorption capacity, equilibrium, and removal estimates with the ability to predict the adsorption mechanisms. The performance comparison of adsorbents and the role of AI in one domain are studied and reviewed for the first time, even though many studies have been carried out to develop PFAS removal through various adsorption methods such as ion exchange, nanofiltration, and activated carbon (AC). The model showed that pH is the most effective parameter for predicting PFAS removal. The proposed model can be extended to other micropollutants and used as a basic framework for future adsorbent design and process optimization.
Publisher: Elsevier BV
Date: 08-2020
Publisher: MDPI AG
Date: 20-06-2022
DOI: 10.3390/ELECTRONICS11121919
Abstract: The Harris hawk optimizer is a recent population-based metaheuristic algorithm that simulates the hunting behavior of hawks. This swarm-based optimizer performs the optimization procedure using a novel way of exploration and exploitation and multiple phases of search. In this review, we focus on the applications and developments of the well-established, robust Harris hawk optimizer (HHO), one of the most popular swarm-based techniques of 2020. Moreover, several experiments were carried out to demonstrate the power and effectiveness of HHO compared with nine other state-of-the-art algorithms on the Congress on Evolutionary Computation benchmarks (CEC2005 and CEC2017). The review includes deep insight into possible future directions and ideas worth investigating regarding new variants of the HHO algorithm and its widespread applications.
Publisher: MDPI AG
Date: 02-11-2022
Abstract: Machine learning techniques play a significant role in agricultural applications for computerized grading and quality evaluation of fruits. In the agricultural domain, automation improves the quality, productivity, and economic growth of a country. The quality grading of fruits is an essential measure in the export market, especially defect detection of a fruit’s surface. This is especially pertinent for mangoes, which are highly popular in India. However, the manual grading of mango is a time-consuming, inconsistent, and subjective process. Therefore, a computer-assisted grading system has been developed for defect detection in mangoes. Recently, machine learning techniques, such as the deep learning method, have been used to achieve efficient classification results in digital image classification. Specifically, the convolution neural network (CNN) is a deep learning technique that is employed for automated defect detection in mangoes. This study proposes a computer-vision system, which employs CNN, for the classification of quality mangoes. After training and testing the system using a publicly available mango database, the experimental results show that the proposed method acquired an accuracy of 98%.
Publisher: Springer Science and Business Media LLC
Date: 09-2013
Publisher: Springer Science and Business Media LLC
Date: 27-05-2021
Publisher: Authorea, Inc.
Date: 06-05-2020
Publisher: American Society of Civil Engineers (ASCE)
Date: 03-2011
Publisher: Center for Open Science
Date: 16-10-2020
Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancements of sophisticated hybrid deep learning models.
Publisher: Elsevier BV
Date: 03-2016
Publisher: MDPI AG
Date: 13-10-2020
DOI: 10.20944/PREPRINTS202010.0263.V1
Abstract: This paper provides the state of the art of data science in economics. Through a novel taxonomy of applications and methods, advances in data science are investigated in three individual classes: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings revealed that the trend is toward the advancement of hybrid models, as more than 51% of the reviewed articles applied a hybrid model. It was also found that, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than other algorithms, while the trends are expected to move toward the advancement of deep learning models.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2019
Publisher: Emerald
Date: 13-07-2012
DOI: 10.1108/02644401211235834
Abstract: Nature‐inspired algorithms are among the most powerful algorithms for optimization. The purpose of this paper is to introduce a new nature‐inspired metaheuristic optimization algorithm, called the bat algorithm (BA), for solving engineering optimization tasks. The proposed BA is based on the echolocation behavior of bats. After a detailed formulation and explanation of its implementation, BA is verified using eight nonlinear engineering optimization problems reported in the specialized literature. BA was carefully implemented and used to optimize eight well‐known optimization tasks; a comparison was then made between the proposed algorithm and other existing algorithms. The optimal solutions obtained by the proposed algorithm are better than the best solutions obtained by the existing methods. The unique search features used in BA are analyzed, and their implications for future research are also discussed in detail.
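The echolocation mechanics of BA can be sketched as a frequency-tuned velocity update toward the current best solution plus a loudness-scaled local random walk. This is a simplified illustration, not the paper's implementation: the sphere objective and parameter values are assumptions, and the loudness/pulse-rate schedules of the full algorithm are held fixed here.

```python
import random

def sphere(x):
    """Illustrative objective: minimise sum of squares."""
    return sum(xi * xi for xi in x)

def bat_algorithm(obj, dim=5, n_bats=20, iters=200, seed=7,
                  f_min=0.0, f_max=2.0, loudness=0.9, pulse=0.5):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    fit = [obj(x) for x in X]
    best_i = min(range(n_bats), key=lambda i: fit[i])
    best_x, best_f = X[best_i][:], fit[best_i]
    for _ in range(iters):
        for i in range(n_bats):
            # Frequency-tuned velocity update toward the best bat.
            freq = f_min + (f_max - f_min) * rng.random()
            V[i] = [v + (x - bx) * freq for v, x, bx in zip(V[i], X[i], best_x)]
            cand = [x + v for x, v in zip(X[i], V[i])]
            if rng.random() > pulse:   # local random walk around the best
                cand = [bx + 0.01 * loudness * rng.gauss(0, 1) for bx in best_x]
            f_cand = obj(cand)
            # Accept improvements, gated by loudness (simplified: fixed A, r).
            if f_cand < fit[i] and rng.random() < loudness:
                X[i], fit[i] = cand, f_cand
            if f_cand < best_f:
                best_x, best_f = cand[:], f_cand
    return best_x, best_f

best_x, best_f = bat_algorithm(sphere)
```

In the full BA, loudness decreases and pulse rate increases as a bat homes in on prey, shifting the balance from exploration to exploitation over the run.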
Publisher: Elsevier BV
Date: 07-2019
Publisher: American Society of Civil Engineers (ASCE)
Date: 07-2021
Publisher: Elsevier BV
Date: 09-2019
Publisher: Elsevier BV
Date: 02-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-08-2021
Publisher: MDPI AG
Date: 06-01-2023
DOI: 10.3390/APP13020804
Abstract: Traditional watermarking methods can extract a watermark from an image, making it possible to see the copyright information about the image owner or to estimate similarities using techniques such as bit error rate and normalized correlation. Deep learning is another examination field in AI, and is utilized to develop a deep network to extract objective elements and afterwards distinguish the general environment. To assure the robustness and security of computerized image watermarking, we propose a novel algorithm using convolutional generative adversarial neural networks. This research proposes a novel technique in digital watermarking, with data hiding based on segmentation and classification, using deep learning techniques. The input images are medical images, including Magnetic Resonance Images (MRI) and Computed Tomography (CT) images, which have been processed for noise removal, smoothing and normalization. The processed image is watermarked using the Singular Value Decomposition-based discrete wavelet transform quantization model, then segmented and classified using convolutional generative adversarial neural networks. The experimental analysis was carried out in terms of bit error rate, Structural Similarity Index Measure (SSIM), Normalized Cross-Correlation (NCC), training accuracy, and validation accuracy, attaining a bit error rate of 71%, an SSIM of 56%, a Normalized Cross-Correlation of 71%, a training accuracy of 98%, and a validation accuracy of 95%.
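The SVD stage of the watermarking scheme can be sketched as embedding the watermark into the host image's singular values and inverting the embedding at extraction time. This sketch omits the DWT quantization stage and the GAN-based segmentation/classification; the 8x8 host "image", the watermark vector, and the embedding strength `alpha` are illustrative assumptions.

```python
import numpy as np

def embed(host, watermark, alpha=0.05):
    """Add alpha * watermark to the host's singular values."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    marked = U @ np.diag(S + alpha * watermark) @ Vt
    return marked, (U, S, Vt)          # the SVD factors act as the secret key

def extract(marked, key, alpha=0.05):
    """Recover the watermark by removing the original singular values."""
    U, S, Vt = key
    S_marked = np.diag(U.T @ marked @ Vt.T)   # diagonalise back to S + alpha*W
    return (S_marked - S) / alpha

rng = np.random.default_rng(0)
host = rng.random((8, 8))              # stand-in for a preprocessed MRI/CT slice
watermark = rng.random(8) * 0.5
marked, key = embed(host, watermark)
recovered = extract(marked, key)
```

Embedding in singular values is popular because they are relatively stable under common image distortions, which underpins the robustness metrics (BER, SSIM, NCC) the abstract reports.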
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 06-2023
Publisher: Research Square Platform LLC
Date: 07-11-2022
DOI: 10.21203/RS.3.RS-1927871/V1
Abstract: This manuscript introduces a new socio-inspired metaheuristic technique referred to as the Leader-Advocate-Believer based optimization algorithm (LAB) for engineering and global optimization problems. The proposed algorithm is inspired by the AI-based competitive behaviour exhibited by the individuals in a group while simultaneously improving themselves and establishing a role (Leader, Advocate, Believer). LAB's performance in computational time and function evaluations is benchmarked against other metaheuristic algorithms. The LAB algorithm was applied to engineering problems, including abrasive water jet machining, electric discharge machining, micromachining processes, and turning of titanium alloy in a minimum quantity lubrication environment. The results were compared with other algorithms such as the Firefly Algorithm, Cohort Intelligence, Genetic Algorithm, Simulated Annealing, Particle Swarm Optimisation, and Multi-Cohort Intelligence. The results from this study highlighted that LAB outperforms the other algorithms in terms of function evaluations and computational time. The prominent features and limitations of the LAB algorithm are also discussed.
Publisher: World Scientific Pub Co Pte Lt
Date: 04-2016
DOI: 10.1142/S021821301550030X
Abstract: A multi-stage krill herd (MSKH) algorithm is presented to fully exploit the global and local search abilities of the standard krill herd (KH) optimization method. The proposed method involves exploration and exploitation stages. The exploration stage uses the basic KH algorithm to select a good candidate solution set. This phase is followed by fine-tuning a good candidate solution in the exploitation stage with a focused local mutation and crossover (LMC) operator in order to enhance the reliability of the method for solving global numerical optimization problems. Moreover, the elitism scheme is introduced into the MSKH method to guarantee the best solution. The performance of MSKH is verified using twenty-five standard and rotated and shifted benchmark problems. The results show the superiority of the proposed algorithm to the standard KH and other well-known optimization methods.
Publisher: MDPI AG
Date: 23-08-2022
DOI: 10.3390/ELECTRONICS11172634
Abstract: Tuberculosis (TB) is an infectious disease that has been a major menace to human health globally, causing millions of deaths yearly. Well-timed diagnosis and treatment are key to the full recovery of the patient. Computer-aided diagnosis (CAD) has been a hopeful choice for TB diagnosis. Many CAD approaches using machine learning have been applied for TB diagnosis, specific to the artificial intelligence (AI) domain, which has led to the resurgence of AI in the medical field. Deep learning (DL), a major branch of AI, provides bigger room for diagnosing deadly TB disease. This review is focused on the limitations of conventional TB diagnostics and a broad description of various machine learning algorithms and their applications in TB diagnosis. Furthermore, various deep learning methods integrated with other systems such as neuro-fuzzy logic, genetic algorithm, and artificial immune systems are discussed. Finally, multiple state-of-the-art tools such as CAD4TB, Lunit INSIGHT, qXR, and InferRead DR Chest are summarized to view AI-assisted future aspects in TB diagnosis.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2023
Publisher: IEEE
Date: 04-12-2022
Publisher: Elsevier BV
Date: 2016
Publisher: Springer International Publishing
Date: 2021
Publisher: Elsevier
Date: 2022
Publisher: IEEE
Date: 12-2020
Publisher: Informa UK Limited
Date: 12-05-2023
Publisher: IEEE
Date: 26-11-2021
Publisher: MDPI AG
Date: 04-03-2021
DOI: 10.3390/ELECTRONICS10050593
Abstract: The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences of poor uses of limited valuable resources in medical diagnosis, financial decision-making, and in other high-stake domains. Therefore, the issue of ML explanation has experienced a surge in interest from the research community to application domains. While numerous explanation methods have been explored, there is a need for evaluations to quantify the quality of explanation methods to determine whether and to what extent the offered explainability achieves the defined objective, and compare available explanation methods and suggest the best explanation from the comparison for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from the review of definitions of explainability. The identified properties of explainability are used as objectives that evaluation metrics should achieve. The survey found that the quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while the quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness of fidelity of explainability. The survey also demonstrated that subjective measures, such as trust and confidence, have been embraced as the focal point for the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic. It is also not possible to define an implementation of evaluation metrics that can be applied to all explanation methods.
Publisher: MDPI AG
Date: 08-03-2022
DOI: 10.3390/BDCC6010029
Abstract: Plenty of disease types exist in world communities that can be explained by humans’ lifestyles or the economic, social, genetic, and other factors of the country of residence. Recently, most research has focused on studying common diseases in the population to reduce death risks, take the best procedure for treatment, and enhance the healthcare level of the communities. Kidney Disease is one of the common diseases that have affected our societies. Particularly, Kidney Tumors (KT) are the 10th most prevalent tumor for men and women worldwide. Overall, the lifetime likelihood of developing a kidney tumor for males is about 1 in 466 (2.02 percent) and it is around 1 in 80 (1.03 percent) for females. Still, more research is needed on new diagnostic, early, and innovative methods for finding an appropriate treatment method for KT. Compared to the tedious and time-consuming traditional diagnosis, automatic detection algorithms of machine learning can save diagnosis time, improve test accuracy, and reduce costs. Previous studies have shown that deep learning can play a role in dealing with complex tasks, diagnosis and segmentation, and classification of Kidney Tumors, one of the most malignant tumors. The goals of this review article on deep learning in radiology imaging are to summarize what has already been accomplished, determine the techniques used by the researchers in previous years in diagnosing Kidney Tumors through medical imaging, and identify some promising future avenues, whether in terms of applications or technological developments, as well as identifying common problems, describing ways to expand the data set, summarizing the knowledge and best practices, and determining remaining challenges and future directions.
Publisher: Springer Science and Business Media LLC
Date: 11-05-2021
Publisher: ACM
Date: 07-07-2021
Publisher: Springer Science and Business Media LLC
Date: 14-06-2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2021
Publisher: American Society of Civil Engineers (ASCE)
Date: 09-2022
Publisher: Elsevier BV
Date: 10-2008
Publisher: Elsevier BV
Date: 06-2011
Publisher: MDPI AG
Date: 19-12-2021
DOI: 10.3390/ELECTRONICS10243167
Abstract: Real medical datasets usually consist of missing data with different patterns which decrease the performance of classifiers used in intelligent healthcare and disease diagnosis systems. Many methods have been proposed to impute missing data, however, they do not fulfill the need for data quality especially in real datasets with different missing data patterns. In this paper, a four-layer model is introduced, and then a hybrid imputation (HIMP) method using this model is proposed to impute multi-pattern missing data including non-random, random, and completely random patterns. In HIMP, first, non-random missing data patterns are imputed, and then the obtained dataset is decomposed into two datasets containing random and completely random missing data patterns. Then, concerning the missing data patterns in each dataset, different single or multiple imputation methods are used. Finally, the best-imputed datasets gained from random and completely random patterns are merged to form the final dataset. The experimental evaluation was conducted by a real dataset named IRDia including all three missing data patterns. The proposed method and comparative methods were compared using different classifiers in terms of accuracy, precision, recall, and F1-score. The classifiers’ performances show that the HIMP can impute multi-pattern missing values more effectively than other comparative methods.
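This is not the authors' HIMP code, but the layered idea, impute structurally (non-randomly) missing values first, then apply per-pattern strategies to the rest, can be sketched on a hypothetical clinical table:

```python
import numpy as np
import pandas as pd

def pattern_split_impute(df, nonrandom_cols, strategies):
    """Sketch of pattern-aware imputation in the spirit of HIMP: fill
    non-randomly missing columns with a dedicated rule first, then apply
    per-column strategies to the remaining (random / completely random)
    missingness."""
    out = df.copy()
    # Step 1: non-random pattern, e.g. 'not measured' encoded as a constant.
    for col, fill in nonrandom_cols.items():
        out[col] = out[col].fillna(fill)
    # Step 2: remaining missing values, imputed per column.
    for col, how in strategies.items():
        if how == "mean":
            out[col] = out[col].fillna(out[col].mean())
        elif how == "mode":
            out[col] = out[col].fillna(out[col].mode().iloc[0])
    return out

# Hypothetical clinical table (column names and values are made up).
df = pd.DataFrame({
    "glucose": [5.1, np.nan, 6.0, 5.5],
    "smoker":  ["no", "yes", np.nan, "no"],
    "hba1c":   [np.nan, np.nan, 7.1, 6.9],  # only measured for some patients
})
clean = pattern_split_impute(
    df,
    nonrandom_cols={"hba1c": 0.0},           # structurally missing -> constant
    strategies={"glucose": "mean", "smoker": "mode"},
)
```

The real method additionally compares several candidate imputers per pattern and keeps the best-performing imputed dataset; this sketch fixes one strategy per column for brevity.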
Publisher: SPIE
Date: 09-05-2018
DOI: 10.1117/12.2306596
Publisher: Elsevier BV
Date: 07-2017
Publisher: MDPI AG
Date: 02-12-2021
DOI: 10.3390/APP112311423
Abstract: The COVID-19 pandemic has claimed the lives of millions of people and put a significant strain on healthcare facilities. To combat this disease, it is necessary to monitor affected patients in a timely and cost-effective manner. In this work, CXR images were used to identify COVID-19 patients. We compiled a CXR dataset with an equal number of 2313 COVID-positive, pneumonia, and normal CXR images and utilized various transfer learning models as base classifiers, including VGG16, GoogleNet, and Xception. The proposed methodology combines fuzzy ensemble techniques, such as Majority Voting, Sugeno Integral, and Choquet Fuzzy, and adaptively combines the decision scores of the transfer learning models to identify coronavirus infection from CXR images. The proposed fuzzy ensemble methods outperformed each individual transfer learning technique and several state-of-the-art ensemble techniques in terms of accuracy and prediction. Specifically, VGG16 + Choquet Fuzzy, GoogleNet + Choquet Fuzzy, and Xception + Choquet Fuzzy achieved accuracies of 97.04%, 98.48%, and 99.57%, respectively. The results of this work are intended to help medical practitioners achieve an earlier detection of coronavirus compared to other detection strategies, which can further save millions of lives and advantageously influence society.
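The abstract names the Choquet fuzzy integral as one of the fusion operators. A minimal sketch of the discrete Choquet integral over per-class confidence scores follows; the classifier names, scores, and fuzzy-measure values are illustrative, not taken from the paper:

```python
def choquet(scores, measure):
    """Discrete Choquet integral of scores w.r.t. a fuzzy measure given as
    a mapping from frozensets of classifier names to [0, 1] (with the full
    set mapped to 1)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending values
    total, prev, remaining = 0.0, 0.0, set(scores)
    for name, val in items:
        total += (val - prev) * measure[frozenset(remaining)]
        prev = val
        remaining.remove(name)
    return total

# Confidence scores of three backbones for one class (illustrative numbers).
scores = {"vgg16": 0.6, "googlenet": 0.8, "xception": 0.9}
# Measure values for the subsets actually visited; these happen to be
# additive, in which case the Choquet integral reduces to a weighted mean.
measure = {
    frozenset({"vgg16", "googlenet", "xception"}): 1.0,
    frozenset({"googlenet", "xception"}): 0.8,
    frozenset({"xception"}): 0.5,
}
fused = choquet(scores, measure)
```

With a non-additive measure the integral also rewards or penalizes coalitions of classifiers agreeing, which is what distinguishes it from plain weighted averaging.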
Publisher: MDPI AG
Date: 11-05-2022
DOI: 10.3390/ELECTRONICS11101541
Abstract: Wireless sensor networks (WSNs) comprise several cooperating sensor nodes capable of sensing, computing, and transmitting sensed signals to a central server. This research proposes a sensor system-based network with low power communication using swarm intelligence integrated with multi-hop communication (SIMHC). This routing protocol selects the optimal route based on link distance, transmission power, and residual energy to optimize the network lifetime and node energy efficiency. Moreover, adaptive clustering-based locative data transmission (ACLDT) is applied for optimizing data transmission. The proposed approach combines clustering with data transfer via location-based routing and low-power communication in two phases to calculate the ideal cluster heads (CHs). First, a CH seeks the next hop from the nearest CH. Then, a path to the base station is formed by developing CH chains. The results reveal that the proposed sensor system based on data transmission and low-power consumption achieved a network lifetime of 96%, an average delay of 53 ms, a coverage rate (CR) of 83%, a throughput of 97%, and energy efficiency of 95%.
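The abstract does not spell out SIMHC's exact cost function, but the stated criteria (link distance, transmission power, residual energy) suggest a next-hop rule of the following shape; coordinates, energies, and the cost formula here are made-up illustrations:

```python
import math

def next_hop(ch, heads, base):
    """Toy next-hop selection in the spirit of SIMHC multi-hop routing:
    among cluster heads strictly nearer to the base station, pick the one
    minimising squared link distance (a radio-cost proxy) divided by
    residual energy. Falls back to direct transmission to the base."""
    d = lambda a, b: math.dist(a[:2], b[:2])
    closer = [h for h in heads if d(h, base) < d(ch, base)]
    if not closer:
        return base
    return min(closer, key=lambda h: d(ch, h) ** 2 / h[2])

# Nodes as (x, y, residual_energy) tuples; values illustrative.
base = (0.0, 0.0, float("inf"))
ch = (10.0, 0.0, 1.0)
heads = [(6.0, 0.0, 0.5), (5.0, 3.0, 2.0), (12.0, 1.0, 2.0)]
hop = next_hop(ch, heads, base)
```

Repeating this choice from head to head builds the CH chain toward the base station that the abstract describes, while steering traffic away from energy-depleted heads.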
Publisher: Elsevier BV
Date: 11-2021
Publisher: Springer Science and Business Media LLC
Date: 30-10-2009
Publisher: Informa UK Limited
Date: 07-04-2011
Publisher: arXiv
Date: 2022
Publisher: Springer Science and Business Media LLC
Date: 09-2009
Publisher: Springer Science and Business Media LLC
Date: 23-03-2022
DOI: 10.1007/S11356-022-19620-1
Abstract: Groundwater management is essential in water and environmental engineering from both quantity and quality aspects due to the growing urban population. Groundwater vulnerability evaluation models play a prominent role in groundwater resource management, such as the DRASTIC model that has been used successfully in numerous areas. Several studies have focused on improving this model by changing the initial parameters or the rates and weights. The presented study investigated results produced by the DRASTIC model by simultaneously exerting both modifications. For this purpose, two land use-based DRASTIC-derived models, DRASTICA and susceptibility index (SI), were implemented in the Shiraz plain, Iran, a semi-arid region and the primary resource of groundwater currently struggling with groundwater pollution. To develop the novel proposed framework for the progressive improvement of the mentioned rating-based techniques, three main calculation steps for rates and weights are presented: (1) original rates and weights; (2) modified rates by Wilcoxon tests and original weights; and (3) adjusted rates and optimized weights using the genetic algorithm (GA) and particle swarm optimization (PSO) algorithms. To validate the results of this framework applied to the case study, the concentrations of three contamination pollutants, NO
Publisher: MDPI AG
Date: 28-12-2021
DOI: 10.3390/BDCC6010002
Abstract: Neuroimaging refers to the techniques that provide efficient information about the neural structure of the human brain, which is utilized for diagnosis, treatment, and scientific research. The problem of classifying neuroimages is one of the most important steps that are needed by medical staff to diagnose their patients early by investigating the indicators of different neuroimaging types. Early diagnosis of Alzheimer’s disease is of great importance in preventing the deterioration of the patient’s situation. In this research, a novel approach was devised based on a digital subtracted angiogram scan that provides sufficient features of a new biomarker, cerebral blood flow. The dataset used was acquired from the database of K.A.U.H hospital and contains digital subtracted angiograms of participants who were diagnosed with Alzheimer’s disease, besides samples of normal controls. Since each scan included multiple frames for the left and right ICAs, pre-processing steps were applied to make the dataset prepared for the next stages of feature extraction and classification. The multiple frames of scans were transformed from real space into DCT space and averaged to remove noises. Then, the averaged image was transformed back to the real space, and both sides were filtered with the Meijering filter and concatenated in a single image. The proposed model extracts the features using different pre-trained models: InceptionV3 and DenseNet201. Then, the PCA method was utilized to select the features with a 0.99 explained variance ratio, where the combination of selected features from both pre-trained models is fed into machine learning classifiers. Overall, the obtained experimental results are at least as good as other state-of-the-art approaches in the literature and more efficient according to the recent medical standards with a 99.14% level of accuracy, considering the difference in dataset samples and the used cerebral blood flow biomarker.
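The abstract's PCA step, keeping the components that explain 99% of the variance, can be sketched directly with an SVD; the feature matrix below is a random stand-in for the concatenated InceptionV3/DenseNet201 features, with far fewer dimensions than the real ones:

```python
import numpy as np

def pca_keep_variance(x, ratio=0.99):
    """Project onto the smallest number of principal components whose
    cumulative explained-variance ratio reaches `ratio`."""
    xc = x - x.mean(axis=0)                      # center the features
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()                # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(var), ratio) + 1)
    return xc @ vt[:k].T, var[:k].sum()

rng = np.random.default_rng(0)
# 100 samples: 5 informative dimensions plus 45 redundant linear mixtures,
# mimicking correlated deep features.
base = rng.normal(size=(100, 5))
feats = np.hstack([base, 0.01 * (base @ rng.normal(size=(5, 45)))])

reduced, kept = pca_keep_variance(feats)
```

Because the redundant columns are linear mixtures of the informative ones, the 99% criterion collapses the 50 raw features back to 5 components, which is exactly the dimensionality-reduction effect the paper relies on before classification.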
Publisher: Elsevier BV
Date: 11-2018
Publisher: American Society of Civil Engineers (ASCE)
Date: 12-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2023
Publisher: Elsevier BV
Date: 10-2022
Publisher: Elsevier
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-08-2023
Publisher: IOP Publishing
Date: 25-03-2013
Publisher: Springer Science and Business Media LLC
Date: 09-08-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Elsevier BV
Date: 2021
Publisher: Wiley
Date: 29-09-2020
DOI: 10.1002/AIL2.9
Publisher: Civil-Comp Press
Date: 2015
DOI: 10.4203/CCP.109.16
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 05-12-2021
Publisher: Elsevier BV
Date: 07-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2021
Publisher: Elsevier BV
Date: 08-2017
Publisher: Elsevier
Date: 2020
Publisher: IEEE
Date: 09-2016
Publisher: IEEE
Date: 12-2020
Publisher: Center for Open Science
Date: 15-10-2020
Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes of deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancements of sophisticated hybrid deep learning models.
Publisher: IEEE
Date: 12-2020
Publisher: MDPI AG
Date: 18-01-2023
DOI: 10.3390/INFO14020062
Abstract: Transportation agencies are primarily responsible for building new roads and maintaining current roads. The main focuses of these agencies are to prioritize maintenance and make significant rehabilitation decisions to handle serious problems facing road authorities. Considerable efforts and an abundance of studies have been performed to determine the nature, mechanisms, test methods, and measurement of pavements for preservation and improvements of roadways. The presented study reports a state-of-the-art review on recent advances in the application of artificial intelligence in various steps of flexible pavement, including pavement construction, performance, cost, and maintenance. Herein, the challenges of gathering large amounts of data, parameter optimization, portability, and low-cost data annotating are discussed. According to the findings, it is suggested that greater attention should be paid to integrating multidisciplinary roadway engineering techniques to address existing challenges and opportunities in the future.
Publisher: Elsevier BV
Date: 08-2009
Publisher: Springer Singapore
Date: 2017
Publisher: Elsevier BV
Date: 11-2020
Publisher: Elsevier BV
Date: 2016
Publisher: Elsevier BV
Date: 06-2022
Publisher: Research Square Platform LLC
Date: 29-09-2020
DOI: 10.21203/RS.3.RS-83965/V1
Abstract: The novel coronavirus (COVID-19) has spread to more than 200 territories worldwide, leading to about 24 million confirmed cases as of August 25, 2020. Several models have been released that forecast the outbreak globally. This work presents a review of the most important forecasting models against COVID-19 and shows a short analysis of each one. The work presented in this study comprises two parts. A detailed scientometric analysis was done in the first part, providing an influential tool for describing bibliometric analyses. The analysis was performed on data corresponding to COVID-19 using the Scopus and Web of Science databases. For this analysis, keywords and subject areas were addressed, while classification of forecasting models, criteria evaluation, and comparison of solution approaches were done in the second part of the work. Conclusions and discussion are provided in the final sections of this study.
Publisher: Elsevier BV
Date: 12-2013
Publisher: Elsevier BV
Date: 11-2021
Publisher: MDPI AG
Date: 26-02-2021
DOI: 10.3390/ELECTRONICS10050554
Abstract: Data are presently being produced at an increased speed in different formats, which complicates the design, processing, and evaluation of the data. The MapReduce algorithm is a distributed file system that is used for big data parallel processing. Current implementations of MapReduce assist in data locality along with robustness. In this study, linear weighted regression and energy-aware greedy scheduling were combined into the LWR-EGS method to handle big data. The LWR-EGS method initially selects tasks for an assignment and then selects the best available machine to identify an optimal solution. With this objective, first, the problem was modeled as an integer linear weighted regression program to choose tasks for the assignment. Then, the best available machines were selected to find the optimal solution. In this manner, resources are optimized. Then, an energy efficiency-aware greedy scheduling algorithm was presented to select a position for each task to minimize the total energy consumption of the MapReduce job for big data applications in heterogeneous environments without a significant performance loss. To evaluate the performance, the LWR-EGS method was compared with two related approaches via MapReduce. The experimental results showed that the LWR-EGS method effectively reduced the total energy consumption without producing large scheduling overheads. Moreover, the method also reduced the execution time when compared to state-of-the-art methods. The LWR-EGS method reduced the energy consumption, average processing time, and scheduling overhead by 16%, 20%, and 22%, respectively, compared to existing methods.
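The greedy, energy-aware placement step can be sketched as follows; this is a toy model (machine specs, task sizes, and the energy formula are invented for illustration), not the paper's LWR-EGS implementation:

```python
def greedy_energy_schedule(tasks, machines):
    """Greedy step of an energy-aware scheduler: place each task on the
    machine that adds the least energy, breaking ties by earlier finish
    time, in a heterogeneous cluster."""
    load = {m: 0.0 for m in machines}            # accumulated busy time
    plan = {}
    for name, size in sorted(tasks.items(), key=lambda kv: -kv[1]):  # big first
        def cost(m):
            spec = machines[m]
            runtime = size / spec["speed"]
            # (energy for this task, finish time on this machine)
            return (spec["power"] * runtime, load[m] + runtime)
        chosen = min(machines, key=cost)
        plan[name] = chosen
        load[chosen] += size / machines[chosen]["speed"]
    return plan, load

# Two machines with equal energy per unit of work, so the finish-time
# tie-break also comes into play (all numbers illustrative).
machines = {"fast": {"speed": 4.0, "power": 4.0},
            "slow": {"speed": 1.0, "power": 1.0}}
tasks = {"t1": 4.0, "t2": 2.0, "t3": 1.0}
plan, load = greedy_energy_schedule(tasks, machines)
```

In this instance the first two tasks land on the fast machine (equal energy, earlier finish), while the last goes to the idle slow machine, the kind of load-spreading-without-extra-energy behavior the method targets.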
Publisher: Elsevier BV
Date: 07-2021
Publisher: Elsevier
Date: 2023
Publisher: Elsevier
Date: 2013
Publisher: Springer Science and Business Media LLC
Date: 05-2011
Publisher: Elsevier BV
Date: 09-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Elsevier BV
Date: 08-2021
DOI: 10.1016/J.JENVMAN.2021.112807
Abstract: Groundwater level drawdown changes the hydrological cycle and poses challenges such as land subsidence and reduction of the groundwater quality. In this study, a new approach using a simulation-optimization framework was developed for shared groundwater management under water bankruptcy conditions (where water demand is greater than the allowable discharge capacity of water resources). The novelty of this study lies in using bankruptcy rules and a game model to manage a bankrupted shared groundwater resource considering aquifer sustainability. Accordingly, groundwater flow in the aquifer was numerically simulated by a finite-differences model (MODFLOW). Then, the repeated performance code of the finite-differences model was run for different discharge scenarios, and the results were applied to develop an MLP-ANN meta-model. By coupling the meta-model with a non-dominated sorting genetic algorithm II (NSGA-II)-based multi-objective optimization model, an optimized cultivation pattern under water bankruptcy conditions was achieved. Then, six different bankruptcy methods were utilized to specify groundwater allocation between three stakeholders. To manage the water bankruptcy conditions, different scenarios considering various groundwater extraction rates and cultivation areas were defined, and the optimization model was recoded for each scenario to find the corresponding optimized cultivation pattern. To consider the competition between stakeholders for groundwater extraction, a non-cooperative 3-player game was applied to achieve a compromise for different combinations of management strategies, which maximizes the profit and yields the best cultivation scenario. Applicability of the proposed methodology was investigated in an aquifer system located in Golestan Province, Iran, including three regions (Minudasht, Azadshahr, and Gonbade-kavus). 
Results show that the proposed method is capable of managing the bankruptcy conditions by generating greater agricultural profit and reducing groundwater drawdowns.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2023
Publisher: Informa UK Limited
Date: 09-10-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Informa UK Limited
Date: 07-07-2022
Publisher: Elsevier BV
Date: 10-2022
Publisher: Wiley
Date: 06-2010
Publisher: MDPI AG
Date: 03-10-2020
DOI: 10.3390/ELECTRONICS9101630
Abstract: In today’s sensor network research, numerous technologies are used for the enhancement of earlier studies that focused on cost-effectiveness in addition to time-saving and novel approaches. This survey presents complete details about those earlier models and their research gaps. In general, clustering is focused on managing the energy factors in wireless sensor networks (WSNs). In this study, we primarily concentrated on multihop routing in a clustering environment. Our study was classified according to cluster-related parameters and properties and is subdivided into three approach categories: (1) parameter-based, (2) optimization-based, and (3) methodology-based. Across these categories, several techniques were identified, and their concepts, parameters, advantages, and disadvantages are elaborated. Based on this attempt, we provide useful information to the audience to be used while they investigate their research ideas and to develop a novel model in order to overcome the drawbacks that are present in the WSN-based clustering models.
Publisher: Elsevier BV
Date: 10-2022
DOI: 10.1016/J.COMPBIOMED.2022.105943
Abstract: The task of classification and localization with detecting abnormalities in medical images is considered very challenging. Computer-aided systems have been widely employed to address this issue, and the proliferation of deep learning network architectures is proof of the outstanding performance reported in the literature. However, localizing abnormalities in regions of images that can support the confidence of classification continues to attract research interest. The difficulty of using digital histopathology images for this task is another drawback, which needs high-level deep learning models to address the situation. Successful pathology localization automation will support automatic acquisition planning and post-imaging analysis. In this paper, we address issues related to the combination of classification with image localization and detection through a dual branch deep learning framework that uses two different configurations of convolutional neural networks (CNN) architectures. Whole-image based CNN (WCNN) and region-based CNN (RCNN) architectures are systematically combined to classify and localize abnormalities in samples. A multi-class classification and localization of abnormalities are achieved using the method with no annotation-dependent images. In addition, a seamless confidence and explanation mechanism is provided so that outcomes from WCNN and RCNN are mapped together for further analysis. Using images from both BACH and BreakHis databases, an exhaustive set of experiments was carried out to validate the performance of the proposed method in achieving classification and localization simultaneously. Obtained results showed that the system achieved a classification accuracy of 97.08%, a localization accuracy of 94%, and an area under the curve (AUC) of 0.10 for classification. 
Further findings from this study revealed that a multi-neural network approach could provide a suitable method for addressing the combinatorial problem of classification and localization anomalies in digital medical images. Lastly, the study's outcome offers means for automating the annotation of histopathology images and the support for human pathologists in locating abnormalities.
Publisher: MDPI AG
Date: 14-01-2022
DOI: 10.3390/EN15020578
Abstract: Nowadays, learning-based modeling methods are utilized to build a precise forecast model for renewable power sources. Computational Intelligence (CI) techniques have been recognized as effective methods in generating and optimizing renewable tools. The complexity of this variety of energy depends on its coverage of large sizes of data and parameters, which have to be investigated thoroughly. This paper covers the most recent and important research in the domain of renewable-energy problems using learning-based methods. Various types of Deep Learning (DL) and Machine Learning (ML) algorithms employed in Solar and Wind energy supplies are given. The performance of the given methods in the literature is assessed by a new taxonomy. This paper focuses on a comprehensive state-of-the-art survey heading toward performance evaluation of the given techniques and discusses vital difficulties and possibilities for extensive research. Based on the results, variations in efficiency, robustness, accuracy values, and generalization capability are the most obvious difficulties for using the learning techniques. In the case of the big dataset, the effectiveness of the learning techniques is significantly better than the other computational methods. However, combining learning techniques with other optimization methods into hybrids, to develop and optimize the construction of the techniques, is also indicated. In all cases, hybrid learning methods have better achievement than a single method due to the fact that hybrid methods gain the benefit of two or more techniques for providing an accurate forecast. Therefore, it is suggested to utilize hybrid learning techniques in the future to deal with energy generation problems.
Publisher: IEEE
Date: 04-12-2022
Publisher: Springer Science and Business Media LLC
Date: 12-01-2022
Publisher: IEEE
Date: 28-06-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-03-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2018
Publisher: Springer Science and Business Media LLC
Date: 23-11-2020
Publisher: Elsevier BV
Date: 07-2022
Publisher: MDPI AG
Date: 28-11-2022
DOI: 10.3390/ELECTRONICS11233930
Abstract: Design research topics attract exponentially more attention and consideration among researchers. This study is the first research article that endeavors to analyze selected design research publications using an advanced approach called “text mining”. This approach bases its results on the existence of research terms (i.e., keywords), which can be more robust than other methods/approaches that rely on contextual data or authors’ perspectives. The main aim of this research paper is to expand knowledge and familiarity with design research and explore future research directions by addressing the gaps in the literature. Relying on the literature review, it can be stated that the design research area has still not built up a theory that can unify the field. In general, text mining with these features allows increased validity and generalization as compared to other approaches in the literature. We used a text mining technique to collect data and analyzed 3553 articles collected in 10 journals using 17,487 keywords. New topics were investigated in the domain of design concepts, which included attracting researchers, practitioners, and journal editorial boards. Such issues as co-innovation, ethical design, social practice design, conceptual thinking, collaborative design, creativity, and generative methods and tools were subject to additional research. On the other hand, researchers pursued topics such as collaborative design, human-centered design, interdisciplinary design, design education, participatory design, design practice, design development, collaboration, design theories, design administration, and service/product design areas. The key categories investigated and reported in this paper helped in determining what fields are flourishing and what fields are eroding.
Publisher: Elsevier BV
Date: 07-2023
Publisher: Elsevier
Date: 2017
Publisher: Institution of Engineering and Technology (IET)
Date: 07-11-2022
DOI: 10.1049/ITR2.12132
Publisher: Elsevier BV
Date: 09-2021
Publisher: Wiley
Date: 17-04-2020
Publisher: MDPI AG
Date: 15-02-2023
DOI: 10.3390/ELECTRONICS12040957
Abstract: Data analytics using artificial intelligence is the process of leveraging advanced AI techniques to extract insights and knowledge from large and complex datasets [...]
Publisher: Elsevier BV
Date: 2012
Publisher: ASME International
Date: 19-12-2022
DOI: 10.1115/1.4056396
Publisher: MDPI AG
Date: 02-07-2021
DOI: 10.3390/PR9071155
Abstract: One of the most crucial aspects of image segmentation is multilevel thresholding. However, multilevel thresholding becomes increasingly more computationally complex as the number of thresholds grows. In order to address this defect, this paper proposes a new multilevel thresholding approach based on the Evolutionary Arithmetic Optimization Algorithm (AOA). The arithmetic operators in science were the inspiration for AOA. The proposed approach, DAOA, employs the Differential Evolution technique to enhance the local search of AOA. The proposed algorithm is applied to the multilevel thresholding problem, using Kapur’s entropy and between-class variance functions. The suggested DAOA is used to evaluate images, using eight standard test images from two different groups: nature and CT COVID-19 images. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are standard evaluation measures used to determine the accuracy of segmented images. The proposed DAOA method’s efficiency is evaluated and compared to other multilevel thresholding methods. The findings are presented with a number of different threshold values (i.e., 2, 3, 4, 5, and 6). According to the experimental results, the proposed DAOA process is better and produces higher-quality solutions than the other comparative approaches. Moreover, it achieved better segmented images and higher PSNR and SSIM values. In addition, the proposed DAOA ranked first in all test cases.
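Kapur's entropy objective for thresholding can be sketched in a few lines. This is an illustrative sketch only: the exhaustive argmax below stands in for the DAOA metaheuristic, and a real multilevel implementation would search several thresholds jointly:

```python
import math

def kapur_entropy(hist, t):
    """Kapur's entropy for a single threshold t on a grayscale histogram:
    the sum of the entropies of the two classes split at t."""
    total = sum(hist)
    p = [h / total for h in hist]
    def class_entropy(lo, hi):
        w = sum(p[lo:hi])          # class probability mass
        if w == 0:
            return 0.0
        return -sum(pi / w * math.log(pi / w) for pi in p[lo:hi] if pi > 0)
    return class_entropy(0, t + 1) + class_entropy(t + 1, len(p))

def best_threshold(hist):
    # exhaustive search stands in for the metaheuristic optimizer here
    return max(range(len(hist) - 1), key=lambda t: kapur_entropy(hist, t))
```

Maximizing this objective places the threshold where the two classes are each maximally informative, which is the quantity metaheuristics like DAOA optimize when the threshold count makes exhaustive search impractical.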
Publisher: Wiley
Date: 17-04-2020
Publisher: Vilnius Gediminas Technical University
Date: 07-2016
DOI: 10.3846/13923730.2015.1068844
Abstract: Build-Operate-Transfer (BOT) contracts have been widely implemented in developing countries facing budget constraints. Analysing the expected variability in project viability requires extensive risk analysis. An objective analysis of various risk variables and their influence on a BOT project evaluation requires the study and integration of many scenarios into the concession terms, which is complicated and time-consuming. If the process of negotiating the financial parameters and uncertainties of a BOT project could be automated, this would be a milestone in objective decision-making from various stakeholders’ points of view. A soft computing model would let the user incorporate as many scenarios as could be provided. Extensive risk analysis could then be easily performed, leading to more accurate and dependable results. In this research, an artificial neural network model with a correlation coefficient of 0.9064 has been used to model the relationship between important project parameters and risk variables. This information was extracted from sensitivity analysis and Monte Carlo simulation results obtained from conventional spreadsheet data. The resulting consensus would yield fair contractual agreements for both the government and the concession company.
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: Elsevier BV
Date: 03-2021
Publisher: Elsevier BV
Date: 02-2022
Publisher: Springer International Publishing
Date: 2022
Publisher: Elsevier BV
Date: 12-2015
Publisher: Springer Science and Business Media LLC
Date: 08-07-2023
DOI: 10.1038/S41598-023-37540-Z
Abstract: The considerable improvement of technology produced for various applications has resulted in a growth in data sizes, such as healthcare data, which is renowned for having a large number of variables and data samples. Artificial neural networks (ANN) have demonstrated adaptability and effectiveness in classification, regression, and function approximation tasks, and ANN is used extensively in function approximation, prediction, and classification. Irrespective of the task, ANN learns from the data by adjusting the edge weights to minimize the error between the actual and predicted values. Back propagation is the most frequent learning technique used to learn the weights of an ANN. However, this approach is prone to sluggish convergence, which is especially problematic in the case of big data. In this paper, we propose a Distributed Genetic Algorithm based ANN Learning Algorithm for addressing the challenges associated with ANN learning for big data. The genetic algorithm is one of the well-utilized bio-inspired combinatorial optimization methods. Also, it can be parallelized at multiple stages, and this may be done in an extremely effective manner for the distributed learning process. The proposed model is tested with various datasets to evaluate its realizability and efficiency. The results obtained from the experiments show that after a specific volume of data, the proposed learning method outperformed the traditional methods in terms of convergence time and accuracy. The proposed model outperformed the traditional model with almost 80% improvement in computational time.
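The core idea of GA-based weight learning, evolving a weight vector instead of back-propagating gradients, can be sketched minimally. This is an illustrative toy, not the paper's distributed implementation; the single-neuron model, dataset, and GA settings are all assumptions:

```python
import random

def mse(w, data):
    # single linear neuron: y_hat = w[0]*x + w[1]
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

def evolve_weights(data, pop_size=20, gens=50, seed=0):
    """Evolve the neuron's weights with selection, crossover, and mutation,
    using MSE as the (negated) fitness, in place of back propagation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: mse(w, data))
        elite = pop[:pop_size // 2]                 # selection: keep best half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)             # crossover: average parents
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
            child = [c + rng.gauss(0, 0.1) for c in child]  # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda w: mse(w, data))

data = [(x, 2 * x + 1) for x in range(5)]   # toy target: w ≈ [2, 1]
best = evolve_weights(data)
```

The distributed variant in the paper parallelizes stages of exactly this loop (fitness evaluation, selection, reproduction) across workers.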
Publisher: Springer Science and Business Media LLC
Date: 26-10-2022
Publisher: Springer Science and Business Media LLC
Date: 16-01-2022
Publisher: IEEE
Date: 12-2020
Publisher: Springer International Publishing
Date: 2022
Publisher: Springer Science and Business Media LLC
Date: 09-2021
Publisher: Wiley
Date: 22-06-2021
DOI: 10.1002/ENG2.12405
Abstract: The interior search algorithm (ISA) is an optimization algorithm inspired by esthetic techniques used for interior design and decoration. The algorithm has only one parameter, controlled by θ, and uses an evolutionary boundary constraint handling (BCH) strategy to keep itself within an admissible solution space while approaching the optimum. We apply the ISA to find optimal weight design of truss structures with frequency constraints. Sensitivity of the ISA's performance to the θ parameter and the BCH strategy is investigated by considering different values of θ and BCH techniques. This is the first study in the literature on the sensitivity of truss optimization problems to various BCH approaches. Moreover, we also study the impact of different BCH methods on diversification and intensification. Through extensive numerical simulations, we identified the best BCH methods that provide consistently better results for all truss problems studied, and obtained a range of θ that maximizes the ISA's performance for all problem classes studied. However, results also recommend further fine‐tuning of parameter settings for specific case studies to obtain the best results.
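The evolutionary boundary constraint handling strategy mentioned above can be illustrated with a minimal component-wise repair that blends the violated bound with the best-so-far solution. This is a generic sketch of that family of BCH operators; the exact blend used in the paper may differ:

```python
import random

def evolutionary_bch(x, lb, ub, x_best, rng=random):
    """Evolutionary BCH sketch: a component that violates a bound is
    replaced by a random convex blend of the violated bound and the
    corresponding component of the best solution found so far."""
    repaired = []
    for xi, l, u, bi in zip(x, lb, ub, x_best):
        if xi < l:
            a = rng.random()
            xi = a * l + (1 - a) * bi   # pulled between lower bound and best
        elif xi > u:
            a = rng.random()
            xi = a * u + (1 - a) * bi   # pulled between upper bound and best
        repaired.append(xi)
    return repaired
```

Because the repair reuses information from the best individual rather than clamping to the bound, it tends to preserve diversification near the boundary, which is exactly the trade-off the paper's sensitivity study examines.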
Publisher: Springer Science and Business Media LLC
Date: 27-05-2019
Publisher: Elsevier BV
Date: 11-2015
Publisher: Emerald
Date: 07-07-2022
Abstract: Engineering design and operational decisions depend largely on deep understanding of applications, which requires simplifying assumptions in order to find proper solutions. Cutting-edge machine learning algorithms can be used as one of the emerging tools to simplify this process. In this paper, we propose a novel scalable and interpretable machine learning framework to automate this process and fill the current gap. The essential principles of the proposed pipeline are (1) scalability, (2) interpretability and (3) robust probabilistic performance across engineering problems. The lack of interpretability of complex machine learning models prevents their use in various problems, including engineering computation assessments. Many consumers of machine learning models would not trust the results if they cannot understand the method. Thus, the SHapley Additive exPlanations (SHAP) approach is employed to interpret the developed machine learning models. The proposed framework can be applied to a variety of engineering problems, including seismic damage assessment of structures. The performance of the proposed framework is investigated using two case studies of failure identification in reinforced concrete (RC) columns and shear walls. In addition, the reproducibility, reliability and generalizability of the results were validated, and the results of the framework were compared to benchmark studies. The results of the proposed framework outperformed the benchmark results with high statistical significance. Although the current study reveals that the geometric input features and reinforcement indices are the most important variables in failure-mode detection, a better model could be achieved by employing more robust strategies to establish a proper database and decrease the errors in identifying some of the failure modes.
Publisher: Elsevier
Date: 2023
Publisher: Elsevier BV
Date: 10-2023
Publisher: Elsevier BV
Date: 09-2023
Publisher: IEEE
Date: 05-12-2021
Publisher: Elsevier BV
Date: 12-2013
Publisher: Springer Science and Business Media LLC
Date: 12-12-2012
Publisher: CRC Press
Date: 19-07-2022
Publisher: CRC Press
Date: 19-07-2022
Publisher: Elsevier BV
Date: 08-2014
Publisher: Informa UK Limited
Date: 06-03-2015
Publisher: Elsevier BV
Date: 04-2022
Publisher: Elsevier
Date: 2022
Publisher: CRC Press
Date: 19-07-2022
Publisher: Elsevier BV
Date: 10-2020
Publisher: IEEE
Date: 05-12-2021
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 12-2019
Publisher: MDPI AG
Date: 30-01-2022
DOI: 10.3390/ELECTRONICS11030421
Abstract: Big data analytics is one high focus of data science and there is no doubt that big data is now quickly growing in all science and engineering fields [...]
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2021
Publisher: Springer International Publishing
Date: 2015
Publisher: MDPI AG
Date: 16-10-2020
DOI: 10.3390/MATH8101799
Abstract: This paper provides a comprehensive state-of-the-art investigation of the recent advances in data science in emerging economic applications. The analysis is performed on the novel data science methods in four individual classes of deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a broad and diverse range of economics research from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which outperform other learning algorithms. It is further expected that the trends will converge toward the evolution of sophisticated hybrid deep learning models.
Publisher: Elsevier BV
Date: 11-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-11-2021
Publisher: Elsevier BV
Date: 09-2023
Publisher: Elsevier
Date: 2021
Publisher: Elsevier
Date: 2013
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 10-2022
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 10-2016
Publisher: Springer Science and Business Media LLC
Date: 10-2013
Publisher: Elsevier BV
Date: 04-2021
Publisher: IEEE
Date: 28-06-2021
Publisher: Springer Singapore
Date: 2021
Publisher: arXiv
Date: 2022
Publisher: American Society of Civil Engineers (ASCE)
Date: 11-2020
Publisher: Emerald
Date: 30-09-2014
Abstract: Meta-heuristic algorithms are efficient in achieving the optimal solution for engineering problems, and hybridization of different algorithms may enhance the quality of the solutions and improve the efficiency of the algorithms. The purpose of this paper is to propose a novel, robust hybrid meta-heuristic optimization approach by adding the differential evolution (DE) mutation operator to the accelerated particle swarm optimization (APSO) algorithm to solve numerical optimization problems. The improvement consists of adding the DE mutation operator to the APSO updating equations so as to speed up convergence; the resulting hybrid algorithm is called differential evolution accelerated particle swarm optimization (DPSO). The difference between DPSO and APSO is that the mutation operator is employed to fine-tune the newly generated solution for each particle, rather than the random walks used in APSO. The proposed hybrid method is used to optimize 51 functions and is compared with other methods to show its effectiveness. The effect of the DPSO parameters on convergence and performance is also studied and analyzed through detailed parameter sensitivity studies.
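The DE mutation that DPSO grafts onto APSO can be sketched as the standard DE/rand/1 rule, a generic illustration rather than the paper's exact update equations:

```python
import random

def de_mutation(pop, i, F=0.5, rng=random):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    distinct individuals other than i. In DPSO this kind of mutant
    fine-tunes the newly generated particle instead of a random walk."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```

The difference vector `x_r2 - x_r3` scales moves to the population's current spread, which is why DE-type mutation tends to accelerate convergence compared with an unscaled random walk.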
Publisher: Elsevier BV
Date: 09-2010
Publisher: Elsevier BV
Date: 08-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2023
Publisher: Elsevier BV
Date: 11-2011
Publisher: Elsevier BV
Date: 07-2022
Publisher: ACM
Date: 09-07-2017
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.CMPB.2021.106432
Abstract: Breast cancer is the most commonly occurring cancer among women, which contributes to the global death rate. The key to increasing the survival rate of affected patients is early diagnosis along with appropriate treatments. Manual methods for breast cancer diagnosis fail due to human errors and inaccurate diagnoses, and are time-consuming when demands are high. Intelligent systems based on Artificial Neural Network (ANN) for automated breast cancer diagnosis are powerful due to their strong decision-making capabilities in complicated cases. Artificial Bee Colony, Artificial Immune System, and Bacterial Foraging Optimization are swarm intelligence algorithms that solve combinatorial optimization problems. This paper proposes two novel hybrid Artificial Bee Colony (ABC) optimization algorithms that overcome the demerits of standard ABC algorithms. First, this paper proposes a hybrid ABC approach called HABC, in which the standard ABC optimization is hybridized with a modified clonal selection algorithm of the Artificial Immune System that eliminates the poor exploration capabilities of standard ABC optimization. Further, this paper proposes a novel hybrid Artificial Bee Colony (Hybrid ABC) optimization where the strong explorative capabilities of the chemotaxis phase of the bacterial foraging optimization are integrated with a spiral model-based exploitative phase of the ABC, by which the proposed Hybrid ABC overcomes the demerits of poor exploration and exploitation of the standard ABC algorithm. In this work, the two proposed hybrid approaches were used in concurrent feature selection and parameter optimization of an ANN model. The proposed algorithm is implemented using various back-propagation algorithms, including resilient back-propagation (HABC-RP and Hybrid ABC-RP), Levenberg-Marquardt (HABC-LM and Hybrid ABC-LM), and momentum-based gradient descent (HABC-MGD and Hybrid ABC-GD) for parameter tuning of ANN.
The Wisconsin breast cancer dataset was used to evaluate the performance of the proposed algorithms in terms of accuracy, complexity, and computational time. The mean accuracy of the proposed HABC-RP was 99.14%, and that of Hybrid ABC was 99.54%, which is better than the results found in the existing literature. HABC-RP attained a sensitivity of 98.32%, a specificity of 99.63%, and a precision of 99.38%, whereas Hybrid ABC attained a sensitivity of 99.08% and a specificity of 99.81%. HABC-RP and Hybrid ABC-RP yielded high accuracy with a low-complexity ANN structure compared to other variants. After evaluation, it was found, interestingly, that Hybrid ABC-RP achieved the highest mean accuracy of 99.54% with a low complexity of 10.25 mean connections when compared to other variants proposed in this paper. It can be concluded that the concurrent selection of input features and tuning of the parameters of an ANN plays a vital role in increasing the accuracy of a breast cancer diagnosis. The proposed HABC-RP and Hybrid ABC-RP showed better results when compared to the existing breast cancer diagnosis systems taken for comparison. In the future, the proposed two hybrid approaches can be used to generate optimal thresholds for the segmentation of tumors in abnormal images. HABC and Hybrid ABC can be used for tuning the parameters of various classifiers.
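The employed-bee neighbourhood search of the standard ABC algorithm, which both proposed hybrids build on and then augment, can be sketched in its generic textbook form (this is not the HABC/Hybrid ABC variants themselves):

```python
import random

def abc_neighbor(pop, i, rng=random):
    """Standard ABC employed-bee move: perturb one random dimension j of
    food source i toward/away from another random source k:
        v_ij = x_ij + phi * (x_ij - x_kj),  phi ~ U(-1, 1)."""
    k = rng.choice([j for j in range(len(pop)) if j != i])
    j = rng.randrange(len(pop[i]))
    phi = rng.uniform(-1, 1)
    v = list(pop[i])
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    return v
```

Because only one dimension moves per step and the step shrinks as sources converge, plain ABC explores weakly, which is the demerit the clonal-selection and chemotaxis hybridizations in the paper target.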
Publisher: SPIE
Date: 22-04-2014
DOI: 10.1117/12.2557080
Publisher: SPIE
Date: 22-03-2021
DOI: 10.1117/12.2582649
Publisher: Elsevier BV
Date: 12-2022
Publisher: MDPI AG
Date: 13-05-2021
DOI: 10.3390/PR9050859
Abstract: A new algorithm, the Material Generation Algorithm (MGA), was developed and applied for the optimum design of engineering problems. Some advanced and basic aspects of material chemistry, specifically the configuration of chemical compounds and chemical reactions in producing new materials, serve as the inspirational concepts of the MGA. For numerical investigation purposes, 10 constrained optimization problems in dimensions of 10, 30, 50, and 100, which have been benchmarked by the Competitions on Evolutionary Computation (CEC), are selected as test examples, while 15 well-known engineering design problems are also used to evaluate the overall performance of the proposed method. The best results of different classical and new metaheuristic optimization algorithms in dealing with the selected problems were taken from the recent literature for comparison with MGA. Additionally, the statistical values of the MGA algorithm, consisting of the mean, worst, and standard deviation, were calculated and compared to the results of other metaheuristic algorithms. Overall, this work demonstrates that the proposed MGA is able to provide very competitive, and even outstanding, results and mostly outperforms other metaheuristics.
Publisher: Informa UK Limited
Date: 10-04-2020
Publisher: Springer Science and Business Media LLC
Date: 13-08-2022
Publisher: Springer Science and Business Media LLC
Date: 29-05-2018
Publisher: ACM
Date: 22-07-2018
Publisher: Elsevier
Date: 2022
Publisher: Center for Open Science
Date: 16-10-2020
Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes of deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. Prisma method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancements of sophisticated hybrid deep learning models.
Publisher: Springer Science and Business Media LLC
Date: 25-09-2012
Publisher: Springer International Publishing
Date: 2020
Publisher: Elsevier BV
Date: 02-2023
Publisher: Hindawi Limited
Date: 2013
DOI: 10.1155/2013/213853
Abstract: Recently, Gandomi and Alavi proposed a novel swarm intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper, a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill’s position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator involves a greedy strategy and the acceptance of a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of this improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
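The SA-style acceptance inside the KS operator can be illustrated with the classic Metropolis criterion, a minimal sketch of the mechanism the abstract describes (the paper's exact cooling schedule is not reproduced here):

```python
import math
import random

def sa_accept(old_cost, new_cost, temperature, rng=random):
    """Metropolis criterion: always keep an improvement (greedy strategy);
    accept a worse move with probability exp(-delta / T), which shrinks
    as the temperature drops."""
    if new_cost <= old_cost:
        return True
    return rng.random() < math.exp(-(new_cost - old_cost) / temperature)
```

Allowing occasional uphill moves is what lets the KS operator escape the local optima a purely greedy krill update would get stuck in.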
Publisher: Springer Science and Business Media LLC
Date: 16-01-2023
Publisher: Wiley
Date: 15-11-2022
Publisher: Association for Computing Machinery (ACM)
Date: 15-02-2022
DOI: 10.1145/3488247
Abstract: Osmotic computing, in association with related computing paradigms (cloud, fog, and edge), emerges as a promising solution for handling the bulk of security-critical as well as latency-sensitive data generated by digital devices. It is a growing research domain that studies the deployment, migration, and optimization of applications in the form of microservices across cloud/edge infrastructure. It presents dynamically tailored microservices in technology-centric environments by exploiting edge and cloud platforms. Osmotic computing promotes digital transformation and furnishes benefits to transportation, smart cities, education, and healthcare. In this article, we present a comprehensive analysis of osmotic computing through a systematic literature review approach. To ensure a high-quality review, we conduct an advanced search on numerous digital libraries to extract related studies. The advanced search strategy identifies 99 studies, from which 29 relevant studies are selected for a thorough review. We present a summary of applications in osmotic computing based on their key features. On the basis of the observations, we outline the research challenges for the applications in this research field. Finally, we discuss the security issues resolved and unresolved in osmotic computing.
Publisher: CRC Press
Date: 20-12-2020
Publisher: Emerald
Date: 24-06-2013
Abstract: To improve the performance of the krill herd (KH) algorithm, this paper proposes a series of chaotic particle-swarm krill herd (CPKH) algorithms for solving optimization tasks within limited time requirements. In CPKH, a chaos sequence is introduced into the KH algorithm so as to further enhance its global search ability. This new method can accelerate the global convergence speed while preserving the strong robustness of the basic KH. Here, 32 different benchmarks and a gear train design problem are applied to tune the three main movements of the krill in the CPKH method. It has been demonstrated that, in most cases, CPKH with an appropriate chaotic map performs superiorly to, or at least highly competitively with, the standard KH and other population-based optimization methods.
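The chaos injection in CPKH can be illustrated with the logistic map, one of the most common chaotic maps used in such chaos-enhanced metaheuristics (whether this specific map is among the paper's 32-benchmark study choices is an assumption):

```python
def logistic_map(x0=0.7, r=4.0):
    """Generate a chaotic logistic-map sequence x_{n+1} = r * x_n * (1 - x_n).
    With r = 4 the orbit is chaotic and fills (0, 1), so the sequence can
    stand in for the uniform random numbers driving the krill movements."""
    x = x0
    while True:
        x = r * x * (1 - x)
        yield x
```

Substituting such a sequence for a pseudo-random generator gives deterministic but non-repeating perturbations, which is the mechanism by which chaos sequences improve the global search ability described in the abstract.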
Publisher: Springer Science and Business Media LLC
Date: 06-07-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2020
Publisher: Elsevier BV
Date: 2017
Publisher: Elsevier BV
Date: 12-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2022
Publisher: Springer Science and Business Media LLC
Date: 12-07-2012
Publisher: Elsevier BV
Date: 09-2020
Publisher: Hindawi Limited
Date: 2013
DOI: 10.1155/2013/346285
Abstract: In the current study, the performances of some decision tree (DT) techniques are evaluated for postearthquake soil liquefaction assessment. A database containing 620 records of seismic parameters and soil properties is used in this study. Three decision tree techniques are used here in two different ways, considering statistical and engineering points of view, to develop decision rules. The DT results are compared to the logistic regression (LR) model. The results of this study indicate that the DTs not only successfully predict liquefaction but they can also outperform the LR model. The best DT models are interpreted and evaluated based on an engineering point of view.
Publisher: Elsevier BV
Date: 03-2012
Publisher: Elsevier BV
Date: 2016
Publisher: Inderscience Publishers
Date: 2016
Publisher: Springer Science and Business Media LLC
Date: 25-01-2019
Publisher: Elsevier BV
Date: 09-2022
Publisher: Springer International Publishing
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2023
Publisher: Elsevier BV
Date: 09-2023
Publisher: Springer Science and Business Media LLC
Date: 06-2010
Publisher: Elsevier
Date: 2013
Publisher: MDPI AG
Date: 06-09-2022
DOI: 10.3390/ELECTRONICS11182801
Abstract: This work offers an overview of the effective communication techniques for space exploration of ground, aerial, and underwater vehicles. We not only comprehensively summarize the trajectory planning, space exploration, optimization, and other challenges encountered but also present the possible directions for future work. Because a detailed study like this is uncommon in the literature, an attempt has been made to fill the gap for readers interested in path planning. This paper also includes optimization strategies that can be used to implement terrestrial, underwater, and airborne applications. This study addresses numerical, bio-inspired, and hybrid methodologies for each dimension described. Throughout this study, we endeavored to establish a centralized platform in which a wealth of research on autonomous vehicles (on the land and their trajectory optimizations), airborne vehicles, and underwater vehicles, is published.
Publisher: Elsevier BV
Date: 05-2023
Publisher: Elsevier BV
Date: 07-2012
Publisher: Elsevier BV
Date: 02-2018
Publisher: Elsevier
Date: 2021
Publisher: AIP Publishing
Date: 2022
DOI: 10.1063/5.0079791
Abstract: Dam-break wave propagation in a debris flood event is strongly influenced by accumulated reservoir-bound sediment and downstream obstacles. For instance, the Brumadinho dam disaster in Brazil in 2019 released 12 × 106 m3 of mud and iron tailings and inflicted 270 casualties. The present work was motivated by the apparent lack of experimental or numerical studies on silted-up reservoir dam-breaks with downstream semi-circular obstacles. Accordingly, 24 dam-break scenarios with different reservoir sediment depths and with or without obstacles were observed experimentally and verified numerically. Multiphase flood waves were filmed, and sediment depths, water levels, and values of front wave celerity were measured to improve our scientific understanding of shock wave propagation over an abruptly changing topography. Original data generated in this study is available online in the public repository and may be used for practical purposes. The strength of OpenFOAM software in estimating such a complex phenomenon was assessed using two approaches: volume of fluid (VOF) and Eulerian. An acceptable agreement was attained between numerical and experimental records (errors ranged from 1 to 13.6%), with the Eulerian outperforming the VOF method in estimating both sediment depth and water level profiles. This difference was most notable when more than half of the reservoir depth was initially filled by sediment (≥0.15 m), particularly in bumpy bed scenarios.
Publisher: Center for Open Science
Date: 15-10-2020
Publisher: Research Square Platform LLC
Date: 13-10-2020
DOI: 10.21203/RS.3.RS-85513/V1
Abstract: The novel Coronavirus disease, known as COVID-19, is an outbreak that started in Wuhan, one of the central Chinese cities. In this report, a short analysis focusing on Australia, Italy, and the United Kingdom has been conducted. The analysis includes confirmed and recovered cases and deaths, the growth rate in Australia as compared with Italy and the United Kingdom, and the outbreak in different Australian cities. Mathematical approaches based on the susceptible, infected, and recovered (SIR) and susceptible, exposed, infected, and recovered (SEIR) models were proposed to predict the epidemiology in these countries. Since the performance of the classic forms of SIR and SEIR depends on parameter settings, several optimization algorithms, namely the Broyden–Fletcher–Goldfarb–Shanno (BFGS), conjugate gradients (CG), L-BFGS-B, and Nelder-Mead methods, are proposed to optimize the parameters of the SIR and SEIR models and improve their predictive capabilities. The results of the optimized SIR and SEIR models are compared with the Prophet algorithm and the logistic function, two well-known ML approaches. The results show that different algorithms display different behaviours in different countries; however, the improved versions of the SIR and SEIR models perform better than the other algorithms described in this study. Moreover, the Prophet algorithm works better for the Italy and United Kingdom cases than for the Australian cases, where the logistic function performs better than the Prophet algorithm. It seems that the Prophet algorithm is suitable for data with an increasing trend in pandemic situations. Optimizing the parameters of the SIR and SEIR models yielded a significant improvement in the prediction accuracy of the models. Although there are several algorithms for predicting this pandemic, no single algorithm is best for all cases.
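The SIR dynamics whose parameters the preprint tunes can be sketched with a simple forward-Euler integration. This is an illustrative discretization under assumed parameter values; the preprint's solver and fitting setup are not reproduced:

```python
def sir_simulate(beta, gamma, s0, i0, r0, days):
    """Discrete-time SIR model: beta is the transmission rate and gamma the
    recovery rate; these are the parameters that BFGS/CG/L-BFGS-B/Nelder-Mead
    would fit against observed case counts."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i / n   # S -> I flow
        new_rec = gamma * i          # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# assumed toy scenario: population 1000, one seed infection, R0 = beta/gamma = 3
hist = sir_simulate(0.3, 0.1, 999, 1, 0, 100)
```

Fitting then amounts to minimizing the discrepancy between `history` and the reported curves over `(beta, gamma)`, which is exactly where the quasi-Newton and derivative-free optimizers in the preprint come in.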
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Springer Science and Business Media LLC
Date: 30-11-2020
Publisher: Elsevier BV
Date: 11-2022
Publisher: Informa UK Limited
Date: 25-08-2021
Publisher: Elsevier BV
Date: 07-2023
Publisher: Elsevier
Date: 2023
Publisher: Springer Science and Business Media LLC
Date: 25-09-2020
Publisher: arXiv
Date: 2022
Publisher: Elsevier BV
Date: 10-2022
Publisher: Elsevier BV
Date: 06-2023
Publisher: Elsevier BV
Date: 11-2021
Publisher: Springer Science and Business Media LLC
Date: 30-01-2022
Publisher: Research Square Platform LLC
Date: 28-07-2022
DOI: 10.21203/RS.3.RS-1884556/V1
Abstract: Nowadays, human activity recognition (HAR) is a key component of many ubiquitous innovative solutions, where both accelerometer and gyroscope data provide information about an observed person's physical activity. HAR offers a diverse variety of important applications, including healthcare, burglary detection, workplace monitoring, and emergency detection. In this paper, we propose a custom 1D-CNN deep learning approach named WISNet to recognize complex human activity. The proposed model is compared with other time-series deep learning models, including Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and a recurrent neural network (SimpleRNN), to classify and detect the six-class WISDM dataset. This dataset includes basic ambulation-related activities, such as walking, jogging, and climbing stairs, retrieved from sensors in smartphones and smartwatches. The proposed WISNet architecture outperforms the other time-series models in detecting the six classes: GRU, LSTM, and SimpleRNN attain accuracies of 95.27%, 95.15%, and 91.8%, respectively, while the trained WISNet model achieves an enhanced accuracy and F1-score of 96.41% and 0.95, respectively, surpassing the other models. The proposed WISNet architecture consists of a smaller number of convolutions, generating fewer than 10% of the features compared to GRU, LSTM, and SimpleRNN, while also presenting better results.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: SPIE
Date: 19-04-2022
DOI: 10.1117/12.2630797
Publisher: Elsevier
Date: 2013
Publisher: Center for Open Science
Date: 16-10-2020
Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature-review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.
Publisher: Elsevier BV
Date: 03-2011
Publisher: Periodica Polytechnica Budapest University of Technology and Economics
Date: 30-05-2017
DOI: 10.3311/PPCI.10548
Abstract: Building collapses in earthquakes have caused huge losses, both in human and economic terms. To assess the risk posed by composite members, this paper investigates the seismic failure probability and vulnerability of steel-concrete composite structures constituted by rectangular concrete-filled steel tube (RCFT) columns and steel beams. To enable numerical simulation of RCFT structures, the details of component modeling are developed using the OpenSEES finite element analysis package, and the proposed procedure is validated through comparisons with available experimental results. The seismic fragility and vulnerability curves of RCFT structures are created through nonlinear dynamic analysis using an appropriate suite of ground motions for seismic loss assessment. These curves are developed for three-, six-, and nine-story prototypes of the RCFT structure. Fragility curves are an appropriate tool for representing seismic failure probabilities, while vulnerability curves relate the probability of exceeding a given loss to a measure of ground motion intensity.
Publisher: American Society of Civil Engineers (ASCE)
Date: 06-2021
Publisher: Elsevier BV
Date: 10-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2022
Publisher: Elsevier BV
Date: 09-2023
Publisher: Springer Science and Business Media LLC
Date: 27-06-2021
Publisher: American Society of Civil Engineers
Date: 02-04-2014
Publisher: Elsevier BV
Date: 08-2014
Publisher: IEEE
Date: 09-2014
DOI: 10.1109/ISCMI.2014.8
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Emerald
Date: 05-04-2011
DOI: 10.1108/02644401111118132
Abstract: The complexity of analyzing geotechnical behavior is due to the multivariable dependencies of soil and rock responses. In order to cope with this complex behavior, traditional forms of engineering design solutions are reasonably simplified. Incorporating simplifying assumptions into the development of the traditional models may lead to very large errors. The purpose of this paper is to illustrate the capabilities of promising variants of genetic programming (GP), namely linear genetic programming (LGP), gene expression programming (GEP), and multi‐expression programming (MEP), by applying them to the formulation of several complex geotechnical engineering problems. LGP, GEP, and MEP are new variants of GP that make a clear distinction between the genotype and the phenotype of an individual. Compared with traditional GP, the LGP, GEP, and MEP techniques are more compatible with computer architectures, which results in a significant speedup in their execution. These methods have a great ability to directly capture the knowledge contained in experimental data without making assumptions about the underlying rules governing the system, which is one of their major advantages over most traditional constitutive modeling methods. In order to demonstrate the simulation capabilities of LGP, GEP, and MEP, they were applied to the prediction of the relative crest settlement of concrete‐faced rockfill dams, slope stability, settlement around tunnels, and soil liquefaction. The results are compared with those obtained by other models presented in the literature and found to be more accurate. LGP has the best overall behavior for the analysis of the considered problems in comparison with GEP and MEP. The simple and straightforward constitutive models developed using LGP, GEP, and MEP provide valuable analysis tools accessible to practicing engineers.
The LGP, GEP, and MEP approaches overcome the shortcomings of different methods previously presented in the literature for the analysis of geotechnical engineering systems. Contrary to artificial neural networks and many other soft computing tools, LGP, GEP, and MEP provide prediction equations that can readily be used for routine design practice. The constitutive models derived using these methods can efficiently be incorporated into the finite element or finite difference analyses as material models. They may also be used as a quick check on solutions developed by more time consuming and in‐depth deterministic analyses.
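The genotype/phenotype distinction the abstract highlights can be illustrated with a toy linear program: each gene is either an input terminal or an operator over earlier genes, so evaluation is a single linear pass, much as in LGP/MEP. The chromosome and encoding below are a made-up minimal sketch, not any published model.

```python
# Minimal sketch of a linear/multi-expression-style program: the
# genotype is a flat list of genes; the phenotype is the prediction
# equation obtained by evaluating the genes in order.
def evaluate(chromosome, inputs):
    values = []
    for gene in chromosome:
        if gene[0] == 'in':                    # terminal: read an input variable
            values.append(inputs[gene[1]])
        else:                                  # operator over earlier gene results
            op, i, j = gene
            a, b = values[i], values[j]
            values.append({'add': a + b,
                           'sub': a - b,
                           'mul': a * b}[op])
    return values[-1]                          # last gene is the model output

# Genotype encoding the prediction equation f(x, y) = (x + y) * x
program = [('in', 0), ('in', 1), ('add', 0, 1), ('mul', 2, 0)]
print(evaluate(program, [3.0, 4.0]))  # 21.0
```

Because the result is an explicit equation rather than a black-box network, such models can be handed directly to practicing engineers, which is the practical advantage the abstract emphasizes.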
Publisher: MDPI AG
Date: 19-10-2022
DOI: 10.3390/EN15207734
Abstract: Ocean energy is one potential renewable energy alternative to fossil fuels that offers more significant power generation due to its better predictability and availability. In order to harness this source, wave energy converters (WECs) have been devised and used over the past several years to generate as much energy and power as is feasible. While it is possible to install these devices in both nearshore and offshore areas, nearshore sites are more appropriate since more severe weather occurs offshore. Determining the optimal location can be challenging when dealing with sites along the coast, since they often have varying capacities for energy production. Constructing wave farms requires determining the appropriate locations for WECs, which in turn drives their correct and optimal design. The WEC size, shape, and layout are factors that must be considered when installing these devices. Therefore, this review aims to explain the methodologies, advancements, and effective hydrodynamic parameters that may be used to discover the optimal configuration of WECs in nearshore locations using evolutionary algorithms (EAs).
Publisher: Elsevier BV
Date: 11-2019
Publisher: Elsevier
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-08-2023
Publisher: Elsevier
Date: 2013
Publisher: Wiley
Date: 05-06-2014
DOI: 10.1002/NAG.2308
Publisher: Elsevier BV
Date: 02-2020
Publisher: Springer International Publishing
Date: 2021
Publisher: Wiley
Date: 17-02-2017
DOI: 10.1002/NAG.2678
Publisher: Wiley
Date: 13-07-2016
DOI: 10.1002/NAG.2554
Publisher: Elsevier BV
Date: 03-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 15-08-2021
Publisher: Center for Open Science
Date: 16-10-2020
Publisher: MDPI AG
Date: 27-12-2021
DOI: 10.3390/S22010153
Abstract: This study presents a comprehensive review of the history of research and development of different damage-detection methods in the realm of composite structures. Different fields of engineering, such as mechanical, architectural, civil, and aerospace engineering, benefit from the excellent mechanical properties of composite materials. Due to their heterogeneous nature, composite materials can suffer from several complex nonlinear damage modes, including impact damage, delamination, matrix cracks, fiber breakage, and voids. Therefore, early damage detection in composite structures can help avoid catastrophic events and tragic consequences, such as airplane crashes, further demanding the development of robust structural health monitoring (SHM) algorithms. This study first reviews different non-destructive damage testing techniques, then investigates vibration-based damage-detection methods along with their respective pros and cons, and concludes with a thorough discussion of a nonlinear hybrid method termed the Vibro-Acoustic Modulation technique. Advanced signal processing, machine learning, and deep learning have been widely employed for solving damage-detection problems in composite structures, so all of these methods are fully studied. Considering the wide use of a new generation of smart composites in different applications, a section is dedicated to these materials. At the end of this paper, some final remarks and suggestions for future work are presented.
Publisher: American Society of Civil Engineers
Date: 13-07-2009
DOI: 10.1061/41043(350)20
Publisher: Center for Open Science
Date: 20-10-2020
Publisher: Elsevier BV
Date: 04-2017
Publisher: Wiley
Date: 12-2010
Publisher: Springer Science and Business Media LLC
Date: 25-09-2019
Publisher: Elsevier
Date: 2023
Publisher: Vilnius Gediminas Technical University
Date: 09-06-2015
DOI: 10.3846/13923730.2014.893910
Abstract: An innovative multi-expression programming (MEP) approach is used to derive new predictive equations for the tangent elastic modulus of normal strength concrete (NSC) and high strength concrete (HSC). Similar to several building codes, the modulus of elasticity of NSC and HSC is formulated in terms of concrete compressive strength. Furthermore, a generic model is developed for the estimation of the elastic modulus of both NSC and HSC. Comprehensive databases are gathered from the literature to develop the models. For further verification, a parametric analysis is carried out and discussed. The proposed formulas are found to be accurate for the prediction of the elastic modulus of NSC and HSC. The predictions made by the MEP-based models are more accurate than those obtained by the existing models.
Publisher: Elsevier BV
Date: 07-2023
Publisher: Elsevier BV
Date: 06-2021
Publisher: Wiley
Date: 21-11-2017
DOI: 10.1002/TAL.1449
Publisher: Springer Science and Business Media LLC
Date: 29-10-2019
Publisher: Elsevier BV
Date: 11-2022
Publisher: Springer Science and Business Media LLC
Date: 13-02-2013
Publisher: Elsevier BV
Date: 06-2014
Publisher: Elsevier BV
Date: 05-2023
Publisher: Center for Open Science
Date: 15-10-2020
Publisher: Springer International Publishing
Date: 2018
Publisher: Elsevier BV
Date: 06-2012
Start Date: 2014
End Date: 2014
Funder: IEEE Foundation
Start Date: 2021
End Date: 2024
Funder: Australian Research Council
Start Date: 2014
End Date: 2014
Funder: University of Akron
Start Date: 2015
End Date: 2017
Funder: National Science Foundation - BEACON Center
Start Date: 05-2021
End Date: 08-2024
Amount: $395,775.00
Funder: Australian Research Council