ORCID Profile
0000-0002-2600-3379
Current Organisation
Deakin University
Publisher: Springer Singapore
Date: 2018
Publisher: MDPI AG
Date: 13-05-2019
DOI: 10.3390/E21050489
Abstract: Over recent decades, the rapid growth of data has made ever more urgent the quest for highly scalable Bayesian networks with better classification performance and expressivity (that is, the capacity to describe dependence relationships between attributes in different situations). To reduce the search space of possible attribute orders, the k-dependence Bayesian classifier (KDB) simply applies mutual information to sort attributes. This sorting strategy is very efficient, but it neglects the conditional dependencies between attributes and is sub-optimal. In this paper, we propose a novel sorting strategy and extend KDB from a single restricted network to unrestricted ensemble networks, i.e., the unrestricted Bayesian classifier (UKDB), in terms of Markov blanket analysis and target learning. Target learning is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian network classifier (BNC) BNC_P to complement the BNC_T learned from training data T. UKDB introduces UKDB_P and UKDB_T, respectively, to flexibly describe the change in dependence relationships for different testing instances and the robust dependence relationships implicated in the training data. Both use UKDB as the base classifier by applying the same learning strategy while modeling different parts of the data space; thus they are complementary in nature. Extensive experimental results on the Wisconsin breast cancer database as a case study and on 10 other datasets, involving classifiers of different structural complexity such as naive Bayes (0-dependence), tree-augmented naive Bayes (1-dependence) and KDB (arbitrary k-dependence), demonstrate the effectiveness and robustness of the proposed approach.
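The abstract notes that KDB's sorting step ranks attributes by their mutual information with the class. A minimal sketch of that heuristic, assuming categorical data (function names and the counting-based estimator are illustrative, not taken from the paper):

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) from paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts c, px[x], py[y]
        mi += (c / n) * log(c * n / (px[x] * py[y]))
    return mi

def kdb_attribute_order(data, labels):
    """Sort attribute indices by descending I(attribute; class) --
    the ordering heuristic KDB applies before adding dependencies."""
    n_attrs = len(data[0])
    scores = [(mutual_information([row[i] for row in data], labels), i)
              for i in range(n_attrs)]
    return [i for _, i in sorted(scores, reverse=True)]
```

An attribute that perfectly predicts the class sorts first, while a constant attribute (zero mutual information) sorts last.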
Publisher: Springer Science and Business Media LLC
Date: 20-06-2013
Publisher: Springer Science and Business Media LLC
Date: 24-12-2011
Publisher: Society for Industrial & Applied Mathematics (SIAM)
Date: 2014
DOI: 10.1137/130926808
Publisher: Elsevier BV
Date: 05-2009
Publisher: Springer Science and Business Media LLC
Date: 11-02-2012
Publisher: Elsevier BV
Date: 06-2011
Publisher: Informa UK Limited
Date: 04-01-2016
Publisher: World Scientific Pub Co Pte Ltd
Date: 03-2009
DOI: 10.1142/S1793557109000042
Abstract: Optimization of multiple classifiers is an important problem in data mining. We introduce additional structure on the class sets of the classifiers using string rewriting systems with a convenient matrix representation. The aim of the present paper is to develop an efficient algorithm for optimizing the number of errors of individual classifiers that can be corrected by these multiple classifiers.
Publisher: Springer Science and Business Media LLC
Date: 05-01-2014
Publisher: Springer Science and Business Media LLC
Date: 06-07-2011
Publisher: Public Library of Science (PLoS)
Date: 23-07-2018
Publisher: IEEE
Date: 02-2009
Publisher: Informa UK Limited
Date: 07-12-2015
Publisher: Hindawi Limited
Date: 17-06-2009
DOI: 10.1155/2009/125308
Abstract: The aim of this paper is to present modified neural network algorithms to predict whether it is best to buy, hold, or sell shares (trading signals) of stock market indices. The most commonly used classification techniques are not successful in predicting trading signals when the distribution of the actual trading signals among these three classes is imbalanced. The modified network algorithms are based on the structure of feedforward neural networks and a modified Ordinary Least Squares (OLS) error function. When modifying the OLS function, an adjustment relating to the contribution from the historical data used for training the networks and a penalisation of incorrectly classified trading signals were accounted for. A global optimization algorithm was employed to train these networks. These algorithms were used to predict the trading signals of the Australian All Ordinaries Index. The algorithms with the modified error functions introduced by this study produced better predictions.
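The abstract describes two modifications to the OLS error: weighting the contribution of historical training data and penalising misclassified trading signals. The paper's exact error function is not given here; the sketch below is one plausible form of such a loss, with the decay weighting and sign-based penalty chosen purely for illustration:

```python
def weighted_penalized_sse(y_true, y_pred, decay=0.95, penalty=2.0):
    """Illustrative squared-error loss in the spirit of the paper's
    modified OLS: older observations are down-weighted geometrically
    by `decay`, and errors on directionally wrong predictions (sign
    disagreement between target and prediction) are scaled by `penalty`.
    y_true, y_pred: sequences of signal scores, newest sample last."""
    n = len(y_true)
    total = 0.0
    for t, (yt, yp) in enumerate(zip(y_true, y_pred)):
        w = decay ** (n - 1 - t)      # newest sample gets weight 1
        err = (yt - yp) ** 2
        if yt * yp < 0:               # predicted the wrong direction
            err *= penalty
        total += w * err
    return total
```

Under an imbalanced signal distribution, raising `penalty` pushes the trained network to avoid directionally wrong predictions even when that costs some squared error on the majority class.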
Publisher: IEEE
Date: 08-2010
Publisher: Springer Science and Business Media LLC
Date: 05-06-2015
Publisher: Elsevier BV
Date: 04-2011
Publisher: Walter de Gruyter GmbH
Date: 12-2013
Abstract: Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with the known structure, class probabilities and conditional probabilities are calculated using training data, and then values of these probabilities are used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier where both class probabilities and conditional probabilities are considered as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real world binary classification data sets, where continuous features are discretized by applying three different methods. The performances of these models are compared with the naive Bayes classifier, tree augmented naive Bayes, the SVM, C4.5 and the nearest neighbor classifier. The obtained results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier, yet at the same time maintain its simple structure.
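The abstract describes the standard learning step that the proposed optimization models replace: estimating class probabilities and conditional probabilities from training counts. A minimal sketch of that baseline for categorical data (the add-one smoothing is our choice, not from the paper):

```python
from collections import Counter
from math import log

class NaiveBayes:
    """Categorical naive Bayes: class priors P(c) and conditionals
    P(x_i | c) are estimated from training counts, here with add-one
    (Laplace) smoothing; prediction maximizes the log-posterior."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.n = len(y)
        self.priors = Counter(y)              # class counts
        self.counts = Counter()               # (attr, value, class) counts
        self.values = [set() for _ in range(len(X[0]))]
        for row, cls in zip(X, y):
            for i, v in enumerate(row):
                self.counts[(i, v, cls)] += 1
                self.values[i].add(v)
        return self

    def predict(self, row):
        def log_posterior(c):
            score = log(self.priors[c] / self.n)
            for i, v in enumerate(row):
                num = self.counts[(i, v, c)] + 1          # Laplace smoothing
                den = self.priors[c] + len(self.values[i])
                score += log(num / den)
            return score
        return max(self.classes, key=log_posterior)
```

The paper's contribution is to treat these probabilities as decision variables of an optimization problem rather than fixing them at the count-based estimates shown above.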
Publisher: IEEE
Date: 06-2010
Publisher: MDPI AG
Date: 06-12-2019
DOI: 10.3390/APP9245334
Abstract: Financial market prediction attracts immense interest among researchers nowadays due to the rapid increase in investment in financial markets over the last few decades. The stock market is one of the leading financial markets due to its importance to, and the interest of, many stakeholders. With the development of machine learning techniques, the financial industry has thrived through enhanced forecasting ability. The probabilistic neural network (PNN) is a promising machine learning technique that can be used to forecast financial markets with higher accuracy. A major limitation of the PNN is its assumption of a Gaussian distribution for the input variables, which is violated by financial data. The main objective of this study is to improve the standard PNN by incorporating a proper multivariate distribution as the joint distribution of the input variables and by addressing the multi-class imbalance problem that persists in directional prediction of the stock market. The model building process is illustrated and tested with daily close prices of three stock market indices, AORD, GSPC and ASPI, and related financial market indices. The results showed that the scaled t distribution with location, scale and shape parameters is a more suitable distribution for financial return series. Global optimization methods are more appropriate for estimating the parameters of multivariate distributions; the global optimization technique used in this study is capable of estimating parameters of considerably high-dimensional multivariate distributions. The proposed PNN model, which uses a multivariate scaled t distribution as the joint distribution of the input variables, exhibits better performance than the standard PNN model. The ensemble technique of multi-class undersampling-based bagging (MCUB), introduced to handle the class imbalance problem in PNNs, is capable of resolving the multi-class imbalance problem persisting in both the standard and the proposed PNNs. The final model, combining the proposed PNN and the proposed MCUB technique, is competent in forecasting the direction of a given stock market index with higher accuracy, helping stakeholders of stock markets make accurate decisions.
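The standard PNN that the paper improves upon classifies by comparing class-conditional density estimates built from Gaussian (Parzen) kernels over the training points. A minimal sketch of that baseline only; the paper's multivariate scaled t variant and the MCUB ensemble are not reproduced here:

```python
from math import exp

def pnn_predict(train_X, train_y, x, sigma=1.0):
    """Standard probabilistic neural network with a Gaussian kernel:
    each class's density at x is the average kernel value over that
    class's training points, and the predicted class is the one with
    the highest estimated density. `sigma` is the kernel bandwidth."""
    sums, counts = {}, {}
    for xi, yi in zip(train_X, train_y):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        k = exp(-d2 / (2.0 * sigma ** 2))
        sums[yi] = sums.get(yi, 0.0) + k
        counts[yi] = counts.get(yi, 0) + 1
    return max(sums, key=lambda c: sums[c] / counts[c])
```

The Gaussian kernel here is exactly the distributional assumption the abstract says is violated by financial return series, which motivates swapping it for a scaled t kernel.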
Publisher: Springer New York
Date: 2009
Publisher: Springer Science and Business Media LLC
Date: 06-08-2009
Publisher: Elsevier BV
Date: 04-2009
Publisher: MDPI AG
Date: 26-05-2019
DOI: 10.3390/E21050537
Abstract: Machine learning techniques have shown superior predictive power, among which Bayesian network classifiers (BNCs) have remained of great interest due to their capacity to represent complex dependence relationships. Most traditional BNCs tend to build only one model to fit the training instances by analyzing independence between attributes using conditional mutual information. However, for different class labels, the conditional dependence relationships may differ rather than remain invariant when attributes take different values, which may result in classification bias. To address this issue, we propose a novel framework, called discriminatory target learning, which can be regarded as a tradeoff between a probabilistic model learned from the unlabeled instance at the uncertain end and one learned from labeled training data at the certain end. The final model can discriminately represent the dependence relationships hidden in the unlabeled instance with respect to different possible class labels. Taking the k-dependence Bayesian classifier as an example, experimental comparison on 42 publicly available datasets indicated that the final model achieved competitive classification performance compared to state-of-the-art learners such as random forest and averaged one-dependence estimators.
Publisher: Springer Nature Singapore
Date: 2022
Publisher: Elsevier BV
Date: 09-2014
Publisher: Elsevier BV
Date: 06-2012
Publisher: IEEE
Date: 06-2010
Publisher: Springer Science and Business Media LLC
Date: 04-10-2015
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Springer Science and Business Media LLC
Date: 29-01-2010
Publisher: Springer Science and Business Media LLC
Date: 15-06-2012
Publisher: Springer International Publishing
Date: 2018
Publisher: The Scientific and Technological Research Council of Turkey (TUBITAK-ULAKBIM) - DIGITAL COMMONS JOURNALS
Date: 2017
DOI: 10.3906/MAT-1505-38
Publisher: Society for Industrial & Applied Mathematics (SIAM)
Date: 2009
DOI: 10.1137/080738106
Publisher: Informa UK Limited
Date: 02-2013
No related grants have been discovered for Musa Mammadov.