ORCID Profile
0000-0003-4745-9663
Current Organisations
Xinjiang University
University of Zurich
Bangor University (Bangor Business School)
Bond University
Publisher: Wiley
Date: 25-11-2009
DOI: 10.1002/FOR.1153
Publisher: Elsevier BV
Date: 06-2023
Publisher: IGI Global
Date: 2020
DOI: 10.4018/978-1-7998-1662-1.CH005
Abstract: Blended learning is a buzzword these days. Millions of dollars are spent by schools, colleges, and universities to encourage their academic staff members to use blended learning for improving teaching performance and student satisfaction. There is no clear-cut definition of blended learning, and the authors feel it is just a set of tools or pedagogy that can be used in face-to-face teaching as well as online teaching. In this chapter, the authors have discussed some of the blended learning tools used and developed by the authors to improve teaching in the area of statistics and data analysis.
Publisher: SAGE Publications
Date: 02-02-2021
Abstract: This article analyses how the financial literacy of elderly people affects their decisions on the adoption of various financial strategies. Multiple mediator models with bootstrap techniques are used to identify the mediating mechanisms of financial concerns that transmit the effects of financial literacy onto specific financial strategies. We find that (1) financial concerns mediate the majority of financial literacy-strategy nexuses; specifically, financially illiterate people are more likely to have financial concerns and, as a result, are more likely to cut back on spending, seek job opportunities, increase debts and downsize or sell their residence; and (2) financially literate people are more likely to seek professional financial advice, purchase a life annuity, contribute more to superannuation and invest more conservatively, regardless of their concerns. Our findings suggest that professional advisors and robo-advisor developers should take financial concerns into account when recommending advice. JEL Classification: D14, J14, J26, I31, G11
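The mediator models above rest on bootstrapping an indirect effect. As a rough, hypothetical sketch (not the authors' code or data), a percentile-bootstrap confidence interval for a single indirect effect a*b can be computed as follows, with `x` standing in for financial literacy, `m` for financial concerns and `y` for a financial strategy:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients for y = X @ beta (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """Indirect effect a*b: a from m ~ x, b from y ~ x + m."""
    ones = np.ones(len(x))
    a = ols(np.column_stack([ones, x]), m)[1]      # effect of x on the mediator
    b = ols(np.column_stack([ones, x, m]), y)[2]   # effect of the mediator on y, controlling for x
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                # resample cases with replacement
        stats[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy data with a genuine indirect path x -> m -> y (true indirect effect = 0.6 * 0.8)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
m = 0.6 * x + rng.normal(scale=0.5, size=500)
y = 0.8 * m + 0.2 * x + rng.normal(scale=0.5, size=500)
lo, hi = bootstrap_ci(x, m, y)
print(f"95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A multiple mediator model estimates several such a*b paths jointly; the resampling logic is the same.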
Publisher: Wiley
Date: 22-10-2020
DOI: 10.1111/ACFI.12708
Abstract: This study models the term structure of the European Union Emissions Trading Scheme. The one‐factor geometric Brownian motion model of Abadie and Chamorro is replicated using the data now available and then compared with a two‐factor short‐term/long‐term (STLT) stochastic model. The STLT model has the better statistical fit to the term structure of European Union Allowances (EUAs). A real options analysis of the value of the option to retrofit carbon capture and storage shows that forecasting phase four EUAs with the STLT model almost triples the estimated project net present value and lowers investment trigger prices by approximately 24 percent.
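The one-factor model referenced above is a geometric Brownian motion. As an illustrative sketch (the parameter values below are invented, not the paper's estimates), simulating GBM paths for an allowance price takes only a few lines using the exact log-Euler scheme:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, horizon_years, n_steps, n_paths, seed=0):
    """Simulate GBM paths: S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    rng = np.random.default_rng(seed)
    dt = horizon_years / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(log_increments, axis=1)
    # Prepend S_0 at t = 0, then exponentiate the cumulative log-returns
    return s0 * np.exp(np.column_stack([np.zeros(n_paths), log_paths]))

paths = simulate_gbm(s0=25.0, mu=0.05, sigma=0.4,
                     horizon_years=1.0, n_steps=252, n_paths=10000)
# Theoretical expectation: E[S_T] = S_0 * exp(mu * T) = 25 * e^0.05 ≈ 26.28
print(f"mean terminal price: {paths[:, -1].mean():.2f}")
```

The two-factor short-term/long-term model adds a mean-reverting deviation factor on top of a long-term GBM-like trend; the simulation structure is analogous.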
Publisher: Oxford University Press (OUP)
Date: 05-07-2021
DOI: 10.1111/RSSA.12727
Publisher: Springer International Publishing
Date: 2018
Publisher: Elsevier BV
Date: 08-2023
Publisher: Elsevier BV
Date: 05-2022
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer Science and Business Media LLC
Date: 24-02-2009
Publisher: IGI Global
Date: 13-05-2022
DOI: 10.4018/978-1-6684-6291-1.CH039
Abstract: Credit ratings are an important metric for business managers and a contributor to economic growth. Forecasting such ratings might be a suitable application of big data analytics. As machine learning is one of the foundations of intelligent big data analytics, this chapter presents a comparative analysis of traditional statistical models and popular machine learning models for the prediction of Moody's long-term corporate debt ratings. Machine learning techniques such as artificial neural networks, support vector machines, and random forests generally outperformed their traditional counterparts in terms of both overall accuracy and the Kappa statistic. The parametric models may be hindered by missing variables and restrictive assumptions about the underlying distributions in the data. This chapter reveals the relative effectiveness of non-parametric big data analytics to model a complex process that frequently arises in business, specifically determining credit ratings.
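The Kappa statistic used to compare the models above is Cohen's kappa, which corrects raw accuracy for agreement expected by chance. A minimal implementation, with made-up rating labels for illustration:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between predicted and true labels beyond chance."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Expected agreement if predictions were drawn independently with the same marginals
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n**2
    return (observed - expected) / (1 - expected)

y_true = ["Aaa", "Baa", "Baa", "Caa", "Aaa", "Baa"]
y_pred = ["Aaa", "Baa", "Caa", "Caa", "Aaa", "Baa"]
print(round(cohens_kappa(y_true, y_pred), 3))  # 0.75
```

A kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement, which is why it is preferred over raw accuracy when rating classes are imbalanced.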
Publisher: Wiley
Date: 14-04-2019
DOI: 10.1111/ACFI.12362
Publisher: Emerald
Date: 27-04-2018
Abstract: Financial distress is a socially and economically important problem that affects companies the world over. A better understanding of financial distress can help prevent businesses from failing and, in turn, protect economies from sustained downturn. Although Islamic banks constitute a fraction of total banking assets, their importance has been increasing substantially, as their asset growth rate has surpassed that of conventional banks in recent years. The paper aims to discuss these issues. This paper uses a data set comprising 101 international publicly listed Islamic banks to advance financial distress prediction (FDP) by utilising cutting-edge stochastic models, namely decision trees, stochastic gradient boosting and random forests. The most important variables for forecasting corporate failure are determined from an initial set of 18 variables. The results indicate that the “Working Capital/Total Assets” ratio is the most crucial variable for forecasting financial distress using both the traditional “Altman Z-Score” and the “Altman Z-Score for Service Firms” methods. However, using the “Standardised Profits” method, the “Return on Revenue” ratio was found to be the most important variable. This provides empirical evidence to support the recommendations made by the Basel Accords for assessing a bank’s capital risks, specifically in their application to Islamic banking. These findings provide a valuable addition to the limited literature on Islamic banking in general, and on FDP for Islamic banking in particular, by showcasing the most pertinent variables in forecasting financial distress so that appropriate proactive actions can be taken.
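The two Altman scores mentioned above are weighted sums of accounting ratios. A sketch using the widely cited coefficients of the original 1968 model and the Z''-score variant for non-manufacturing/service firms (the ratio values below are invented for illustration, and the coefficients are quoted from the general literature rather than this paper):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman (1968) Z-Score for public manufacturing firms."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def altman_z_service(wc_ta, re_ta, ebit_ta, bve_tl):
    """Altman Z''-Score variant, commonly used for non-manufacturing/service firms."""
    return 6.56 * wc_ta + 3.26 * re_ta + 6.72 * ebit_ta + 1.05 * bve_tl

# Hypothetical healthy firm: on the original scale, distress is signalled below ~1.81
# and safety above ~2.99
z = altman_z(wc_ta=0.25, re_ta=0.30, ebit_ta=0.15, mve_tl=1.2, sales_ta=1.5)
print(f"{z:.3f}")  # 3.435
```

The paper's "Working Capital/Total Assets" finding corresponds to the first ratio in both scores, which carries the largest weight in the Z'' variant.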
Publisher: IGI Global
Date: 2019
DOI: 10.4018/978-1-5225-7277-0.CH010
Publisher: Emerald
Date: 20-10-2021
DOI: 10.1108/JMLC-10-2021-0108
Abstract: This paper aims to develop a global ranking system determining a country’s appeal as a destination for money laundering. It uses principal component analysis (PCA), with a mix of standardised and unstandardised components relating to attractiveness, economic freedom and money laundering risk, to construct an index of money laundering appeal. Four components relating to economic feasibility, financial liberty, government spending and tax regime are critical in influencing a country’s money laundering appeal. This paper attempts to use a standardised and replicable methodology to condense into a single measure the complex and multifaceted phenomenon of a country’s appeal as a destination for money laundering, thus avoiding the difficulty associated with precisely calculating illicit financial flows. The ranking system could be used to identify destinations attractive for laundering money. Such information can be used to devise more effective preventative strategies to combat phenomena responsible for the stagnation of economic growth through tax evasion, corruption and the creation of non-competitive markets. It is the first attempt to use a statistical technique to understand the underlying components of a country’s money laundering appeal.
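A first-principal-component index of the kind described can be sketched as follows (toy data, not the paper's indicators or weighting): standardise the indicators, project onto the leading eigenvector of their correlation matrix, and rescale the scores to [0, 1] for ranking.

```python
import numpy as np

def pca_index(X):
    """Index from the first principal component of standardised indicators."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each indicator column
    corr = Z.T @ Z / len(Z)                    # sample correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)    # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                       # loadings of the largest eigenvalue
    if pc1.sum() < 0:                          # fix the eigenvector's arbitrary sign
        pc1 = -pc1
    scores = Z @ pc1
    return (scores - scores.min()) / (scores.max() - scores.min())

# Toy data: 5 "countries" by 3 indicators (e.g. financial liberty, tax regime, spending)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
index = pca_index(X)
print(index.argsort()[::-1])  # countries ordered from highest to lowest index
```

The sign fix is a convention only; a PCA eigenvector is determined up to sign, so an index must pin the direction so that a higher score means higher appeal.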
Publisher: Wiley
Date: 26-12-2020
DOI: 10.1111/ACFI.12742
Abstract: This study enables practitioners and researchers to make an informed choice for a financial statement fraud detection model, rather than defaulting to popular, yet dated, models. Using a specifically devised performance criterion, our newly configured ensemble outperforms 31 others in the most comprehensive comparison to date spanning parametric, non‐parametric, big data and ensemble techniques. We use a large set of input variables and holdout data relative to prior studies. We find empirical support for financial and non‐financial variables covering the three Fraud Triangle factors. New findings include fraud risk being reduced with more debt, likely from increased monitoring by creditors.
Publisher: Wiley
Date: 27-09-2020
DOI: 10.1111/ACFI.12543
Publisher: Wiley
Date: 30-09-2020
DOI: 10.1111/ACFI.12545
Publisher: Elsevier BV
Date: 09-2023
Publisher: Springer Science and Business Media LLC
Date: 23-11-2019
Publisher: Oxford University Press (OUP)
Date: 06-08-2017
DOI: 10.1111/RSSA.12276
Abstract: Decisions in statistical data analysis are often justified, criticized or avoided by using concepts of objectivity and subjectivity. We argue that the words ‘objective’ and ‘subjective’ in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context dependence. Together with stability, these make up a collection of virtues that we think is helpful in discussions of statistical foundations and practice. The advantage of these reformulations is that the replacement terms do not oppose each other and that they give more specific guidance about what statistical science strives to achieve. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgement of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling and socio-economic stratification. The aim of the paper is to push users and developers of statistical methods towards more effective use of diverse sources of information and more open acknowledgement of assumptions and goals.
Publisher: Springer International Publishing
Date: 2018
Publisher: MDPI AG
Date: 16-05-2018
DOI: 10.3390/RISKS6020055
Publisher: Informa UK Limited
Date: 27-02-2022
Publisher: Elsevier BV
Date: 09-2019
DOI: 10.1016/J.IJMEDINF.2019.07.002
Abstract: Assessment of the performance of Intensive Care Units (ICU) is of vital importance for an effective healthcare system. Such assessment ensures that the limited resources of the healthcare system are allocated where they are most needed. Severity scoring systems are employed for this purpose and improving these systems is a continuing area of research which has focused on the use of more complex techniques and new variables. This paper investigates whether scoring systems could be improved through use of metrics which better summarise the high frequency data collected by automated systems for patients in the ICU. 3128 admissions to the Gold Coast University Hospital ICU are used to construct three logistic regressions based on the most widely used scoring system (APACHE III) to compare performance with and without predictors leveraging available high frequency information. Performance is assessed based on model accuracy, calibration, and discrimination. High frequency information was considered for existing pulse and mean arterial pressure physiology fields and resulting models compared against a baseline logistic regression using only APACHE III physiology variables. Model discrimination and accuracy were better for models which included high frequency predictors, with calibration remaining good in all cases. The most influential high frequency summaries were the number of turning points in a patient's mean arterial pressure or pulse in the first 24 h of ICU admission. The findings indicate that scoring systems can be improved by better accounting for high frequency data.
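The most influential high-frequency summary reported above, the number of turning points in a vital-signs series, is straightforward to compute: count the sign changes between consecutive differences. A minimal sketch with invented pulse readings:

```python
def turning_points(series):
    """Count local maxima/minima: points where consecutive differences change sign."""
    count = 0
    prev_diff = 0
    for a, b in zip(series, series[1:]):
        diff = b - a
        if diff != 0:                          # flat stretches carry the last direction forward
            if prev_diff != 0 and (diff > 0) != (prev_diff > 0):
                count += 1                     # direction reversed: one turning point
            prev_diff = diff
    return count

# Hypothetical pulse readings: rises, falls, then rises again -> 2 turning points
pulse = [72, 75, 80, 78, 74, 76, 79]
print(turning_points(pulse))  # 2
```

Applied to a patient's first 24 hours of automated observations, a summary like this condenses thousands of readings into a single candidate predictor for a severity model.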
Publisher: Elsevier BV
Date: 2015
Publisher: Springer Science and Business Media LLC
Date: 27-09-2022
DOI: 10.1007/S00521-022-07805-1
Abstract: This paper extends a series of deep learning models developed on US equity data to the Australian market. The model architectures are retrained, without structural modification, and tested on Australian data comparable with the original US data. Relative to the original US-based results, the retrained models are statistically less accurate at predicting next day returns. The models were also modified in the standard train/validate manner on the Australian data, and these models yielded significantly better predictive results on the holdout data. It was determined that the best-performing models were a CNN and LSTM, attaining highly significant Z-scores of 6.154 and 8.789, respectively. Due to the relative structural similarity across all models, the improvement is ascribed to regional influences within the respective training data sets. Such unique regional differences are consistent with views in the literature stating that deep learning models in computational finance that are developed and trained on a single market will always contain market-specific bias. Given this finding, future research into the development of deep learning models trained on global markets is recommended.
Publisher: Wiley
Date: 27-10-2023
DOI: 10.1111/ACFI.13192
Publisher: Emerald
Date: 09-12-2020
Abstract: The pitching research template (PRT) is designed to help pitchers identify the core elements that form the framework of any research project. This paper aims to provide a brief commentary on an application of the PRT to pitch an environmental finance research topic with a personal reflection on the pitch exercise discussed. This paper applies the PRT developed by Faff (2015, 2019) to a research project on estimating the strength of carbon pricing signals under the European Union Emissions Trading Scheme. The PRT is found to be a valuable tool to refine broad ideas into impactful and novel research contributions. The PRT is recommended for use by all academics regardless of field, and particularly by PhD students, to structure and communicate their research ideas. The PRT is found to be particularly well suited to pitch replication studies, as it effectively summarizes both the “idea” and proposed “twist” of a replication study. This letter is a reflection on a research team’s experience with applying the PRT to pitch a replication study at the 2020 Accounting and Finance Association of Australia and New Zealand event. This event focused on replicable research and was a unique opportunity for research teams to pitch their replication research ideas.
Publisher: Elsevier BV
Date: 11-2019
Publisher: Wiley
Date: 12-05-2018
DOI: 10.1111/ACFI.12373
Publisher: Emerald
Date: 02-2018
DOI: 10.1016/J.ACCLIT.2017.05.003
Abstract: This paper analyses the use of big data techniques in auditing, and finds that the practice is not as widespread as it is in other related fields. We first introduce contemporary big data techniques to promote understanding of their potential application. Next, we review existing research on big data in accounting and finance. In addition to auditing, our analysis shows that existing research extends across three other genealogies: financial distress modelling, financial fraud modelling, and stock market prediction and quantitative modelling. Auditing is lagging behind the other research streams in the use of valuable big data techniques. A possible explanation is that auditors are reluctant to use techniques that are far ahead of those adopted by their clients, but we refute this argument. We call for more research and a greater alignment to practice. We also outline future opportunities for auditing in the context of real-time information and in collaborative platforms and peer-to-peer marketplaces.
Publisher: Springer Science and Business Media LLC
Date: 10-2022
DOI: 10.1007/S00521-022-07792-3
Abstract: The purpose of this work is to compare predictive performance of neural networks trained using the relatively novel technique of training single hidden layer feedforward neural networks (SFNN), called Extreme Learning Machine (ELM), with commonly used backpropagation-trained recurrent neural networks (RNN) as applied to the task of financial market prediction. Evaluated on a set of large capitalisation stocks on the Australian market, specifically the components of the ASX20, ELM-trained SFNNs showed superior performance over RNNs for individual stock price prediction. While this conclusion of efficacy holds generally, long short-term memory (LSTM) RNNs were found to outperform for a small subset of stocks. Subsequent analysis identified several areas of performance deviations which we highlight as potentially fruitful areas for further research and performance improvement.
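The ELM technique is simple enough to sketch in a few lines: hidden-layer weights are drawn at random and left untrained, and only the linear output layer is fitted by least squares. A toy regression example (not the paper's data or architecture):

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the output layer is fitted
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=400)
W, b, beta = train_elm(X, y)
pred = predict_elm(X, W, b, beta)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"in-sample RMSE: {rmse:.3f}")
```

Because only a linear system is solved, ELM training is orders of magnitude faster than backpropagation, which is part of its appeal in the comparison above.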
Publisher: Elsevier BV
Date: 02-2022
Publisher: Springer Science and Business Media LLC
Date: 09-04-2019
Publisher: Emerald
Date: 21-09-2023
Publisher: Georg Thieme Verlag KG
Date: 2018
Abstract: Background: Various tasks within health care processes are repetitive and time-consuming, requiring personnel who could be better utilized elsewhere. The task of assigning clinical urgency categories to internal patient referrals is one such case of a time-consuming process, which may be amenable to automation through the application of text mining and natural language processing (NLP) techniques. Objective: This article aims to trial and evaluate a pilot study for the first component of the task: determining reasons for referrals. Methods: Text is extracted from scanned patient referrals before being processed to remove nonsensical symbols and identify key information. The processed data are compared against a list of conditions that represent possible reasons for referral. Similarity scores are used as a measure of overlap in terms used in the processed data and the condition list. Results: This pilot study was successful, and results indicate that it would be valuable for future research to develop a more sophisticated classification model for determining reasons for referrals. Issues encountered in the pilot study and methods of addressing them were outlined and should be of use to researchers working on similar problems. Conclusion: This pilot study successfully demonstrated that there is potential for automating the assignment of reasons for referrals and provides a foundation for further work to build on. This study also outlined a potential application of text mining and NLP to automating a manual task in hospitals to save time of human resources.
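The similarity scoring described in the methods measures the overlap between tokens in the processed referral text and each condition's terms. A minimal sketch (the referral text and condition list below are invented, not the study's data):

```python
import re

def token_overlap(text, condition):
    """Jaccard-style overlap between referral-text tokens and a condition's terms."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))   # also strips nonsensical symbols
    terms = set(re.findall(r"[a-z]+", condition.lower()))
    if not terms:
        return 0.0
    return len(tokens & terms) / len(tokens | terms)

def best_match(referral_text, conditions):
    """Pick the condition whose terms overlap most with the referral text."""
    return max(conditions, key=lambda c: token_overlap(referral_text, c))

referral = "Pt reports severe knee pain and swelling after fall ### scanned copy"
conditions = ["knee osteoarthritis", "hip fracture", "knee pain"]
print(best_match(referral, conditions))  # prints "knee pain"
```

A production classifier would add spelling normalisation and medical synonym handling, which is exactly the "more sophisticated model" the pilot study recommends for future work.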
Publisher: Emerald
Date: 20-03-2020
Abstract: The purpose of this study is to review the literature on money laundering and its related areas. The main objective is to identify any gaps in the literature and direct attention towards addressing them. A systematic review of the money laundering literature was conducted with an emphasis on the ProQuest, Scopus and ScienceDirect databases. Broad research themes were identified after investigating the literature. The theme about the detection of money laundering was then further investigated. The major approaches of such detection are identified, as well as research gaps that could be addressed in future studies. The literature on money laundering can be classified into the following six broad areas: anti-money laundering framework and its effectiveness, the effect of money laundering on other fields and the economy, the role of actors and their relative importance, the magnitude of money laundering, new opportunities available for money laundering and detection of money laundering. Most studies about the detection of money laundering have focused on the use of innovative technologies, banking transactions or real estate- and trade-based money laundering. However, the literature on the detection of shell companies being explicitly used to launder funds is relatively scarce. This paper provides insights into an area related to money laundering where research is relatively scant. Shell companies incorporated in the UK alone were identified to be associated with laundering £80bn of stolen money between 2010 and 2014. The use of these entities to launder billions of dollars as witnessed through the laundromat schemes and several data leaks clearly indicates the need to focus on illicit financial flows through such entities.
Publisher: Springer Science and Business Media LLC
Date: 27-02-2019
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Hang Chen.