ORCID Profile
0000-0003-1597-0950
Current Organisation
Bond University
Publisher: Wiley
Date: 25-11-2009
DOI: 10.1002/FOR.1153
Publisher: Oxford University Press (OUP)
Date: 2011
Publisher: Taru Publications
Date: 03-03-2016
Publisher: SAGE Publications
Date: 03-2016
Abstract: India has a long history of the Health Management Information and Evaluation System (HMIES). Though it has served its purpose of administrative reporting well, it has failed to provide relevant and sufficient information to users of health services, planners and policy makers, as the available information is fragmented, incomplete and sometimes inconsistent. The National Health Policies of 1983 and 2002 and the National Statistical Commission of India 2005 have laid down clear benchmarks for the HMIES. In spite of several efforts in the past, the national HMIES does not fully conform to ‘International Data Quality Frameworks, Systems and Standard Practices’. In this article, efforts are made to compare the information collection and governance systems, their standardization and the extent of their utilization for decision-making in Australia and India, and to give recommendations to transform our national HMIES to be compatible with international standards, frameworks and practices.
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: IEEE Comput. Soc
Date: 1999
Publisher: SAGE Publications
Date: 08-02-2015
Publisher: Wiley
Date: 03-1986
Publisher: IEEE
Date: 09-12-2021
Publisher: Taru Publications
Date: 04-05-2014
Publisher: IEEE
Date: 2006
Publisher: Virtus Interpress
Date: 2015
DOI: 10.22495/COCV12I3P7
Abstract: There have been scores of corporate failures all over the world due to poor corporate governance or lapses in managing corporations well at the board level; as a result, transparency, accountability, fiduciary duty, the interests of shareholders, etc. are impinged upon. Erosion of values, wisdom, righteousness, fairness, equanimity in judgment, etc. appear to be attributes responsible for accelerating corporate turpitude. Hence, this paper attempts to draw the attention of board members to Indian scriptures and to harmonize them to achieve sustainable and effective good governance, and accentuates their potential in helping to fulfill the board’s responsibilities effectively. It also discusses principles and guidelines from Indian scriptures for good governance that can be adopted in today’s times.
Publisher: Elsevier BV
Date: 10-1992
Publisher: IGI Global
Date: 13-05-2022
DOI: 10.4018/978-1-6684-6291-1.CH039
Abstract: Credit ratings are an important metric for business managers and a contributor to economic growth. Forecasting such ratings might be a suitable application of big data analytics. As machine learning is one of the foundations of intelligent big data analytics, this chapter presents a comparative analysis of traditional statistical models and popular machine learning models for the prediction of Moody's long-term corporate debt ratings. Machine learning techniques such as artificial neural networks, support vector machines, and random forests generally outperformed their traditional counterparts in terms of both overall accuracy and the Kappa statistic. The parametric models may be hindered by missing variables and restrictive assumptions about the underlying distributions in the data. This chapter reveals the relative effectiveness of non-parametric big data analytics to model a complex process that frequently arises in business, specifically determining credit ratings.
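The two headline metrics in this comparison, overall accuracy and the Kappa statistic, can be computed directly from a pair of label sequences. A minimal stdlib sketch (the rating labels below are hypothetical, not the chapter's data):

```python
from collections import Counter

def accuracy_and_kappa(actual, predicted):
    """Overall accuracy and Cohen's kappa for two equal-length label sequences."""
    n = len(actual)
    observed = sum(a == p for a, p in zip(actual, predicted)) / n
    # Chance agreement from the marginal label frequencies of each sequence
    freq_a, freq_p = Counter(actual), Counter(predicted)
    expected = sum(freq_a[c] * freq_p.get(c, 0) for c in freq_a) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# Hypothetical Moody's-style labels, for illustration only
actual = ["Aaa", "Baa", "Baa", "Caa", "Aaa", "Baa"]
predicted = ["Aaa", "Baa", "Caa", "Caa", "Baa", "Baa"]
acc, kappa = accuracy_and_kappa(actual, predicted)
```

Kappa discounts the agreement that two raters would reach by chance alone, which is why it is reported alongside raw accuracy when rating classes are imbalanced.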
Publisher: Oxford University Press (OUP)
Date: 05-1998
Publisher: IGI Global
Date: 2002
DOI: 10.4018/978-1-930708-21-1.CH015
Abstract: Data mining has emerged as one of the hottest topics in recent years. It is an extraordinarily broad area and is growing in several directions. With the advancement of the Internet and the cheap availability of powerful computers, data is flooding the market at a tremendous pace. However, the technology for navigating, exploring, visualizing and summarizing large databases is still in its infancy. The quantity and diversity of data available to make decisions has increased dramatically during the past decade. Large databases are being built to hold and deliver these data. Data mining is defined as the process of seeking interesting or valuable information within large data sets. Some examples of data mining applications in the area of management science are analysis of direct-mailing strategies, sales data analysis for customer segmentation, credit card fraud detection, mass customization, etc. With the advancement of the Internet and World Wide Web, both management scientists and interested end-users can get large data sets for their research from this source. The Web not only contains a vast amount of useful information, but also provides a powerful infrastructure for communication and information sharing. For example, Ma, Liu and Wong (2000) have developed a system called DS-Web that uses the Web to help data mining. A recent survey on Web mining research can be seen in the paper by Kosala and Blockeel (2000).
Publisher: Elsevier BV
Date: 02-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2008
Publisher: Emerald
Date: 03-04-2017
Abstract: Rising trends in alcohol consumption and early drinking initiation pose serious health risks especially for adolescents. Learners’ prior knowledge about alcohol gained from the social surroundings and the media are important sources that can impact the learning outcomes in health education. The purpose of this paper is to map adolescents’ perceptions of alcohol in Punjab, India and how these perceptions are related to their attitudes towards their social surroundings and the media. The questionnaire was created after informal discussions with local people who consume alcohol and discussions with alcohol-related experts. Students from five schools (n = 379, average age = 13.6 years) in the urban region of Punjab, India, filled in a questionnaire. Quantitative tests were performed on the questionnaire data. Summative content analysis was performed for the textbook content about alcohol from classes 1 to 10. Data suggest that students gain knowledge about alcohol from multiple sources, including society, the media and education. While society and the media can give misinformation, education did not provide them with factual scientific information about alcohol. Students from financially marginalized social surroundings experience the presence and use of alcohol more frequently; they trust the media and celebrities somewhat unquestioningly and, hence, are more at risk. All participants in informal discussions as well as all participating schools in the study were from urban regions. Data about individuals’ socio-economic conditions was not collected. This research investigates perceptions of alcohol that are derived from adolescents’ social surroundings, perceptions of the media and perceptions gained through educational guidance in a developing country. Such multi-dimensional investigations have not been conducted earlier.
Publisher: Wiley
Date: 1993
Publisher: Oxford University Press (OUP)
Date: 03-04-2023
Publisher: Eleyon Publishers
Date: 20-02-2020
DOI: 10.26524/JMS.2020.1
Abstract: The paper deals with the Random Forest, a popular classification machine learning algorithm, to predict bankruptcy (distress) for Indian firms. Random Forest orders firms according to their propensity to default or their likelihood to become distressed. This is also useful to explain the association between the tendency of firm failure and its features. The results are analyzed vis-à-vis TreeNet. Both in-sample and out-of-sample estimations have been performed to compare Random Forest with TreeNet, which is a cutting-edge data mining tool known to provide satisfactory estimation results. An exhaustive data set comprising companies from varied sectors has been included in the analysis. It is found that the TreeNet procedure consistently provides improved classification and predictive performance vis-à-vis the Random Forest methodology, which may be utilized further by industry analysts and researchers alike for predictive purposes.
Publisher: Elsevier BV
Date: 09-2000
Publisher: Oxford University Press (OUP)
Date: 21-02-2023
Publisher: Oxford University Press (OUP)
Date: 09-02-2018
DOI: 10.1111/RSSA.12315
Abstract: Administrative data are becoming increasingly important. They are typically the side effect of some operational exercise and are often seen as having significant advantages over alternative sources of data. Although it is true that such data have merits, statisticians should approach the analysis of such data with the same cautious and critical eye as they approach the analysis of data from any other source. The paper identifies some statistical challenges, with the aim of stimulating debate about and improving the analysis of administrative data, and encouraging methodology researchers to explore some of the important statistical problems which arise with such data.
Publisher: Taru Publications
Date: 02-2013
Publisher: Elsevier BV
Date: 05-2005
Publisher: Springer Science and Business Media LLC
Date: 23-11-2019
Publisher: InTech
Date: 02-03-2012
DOI: 10.5772/35987
Publisher: IEEE
Date: 11-2014
Publisher: Taru Publications
Date: 06-2011
Publisher: Springer Science and Business Media LLC
Date: 22-06-2023
Publisher: Oxford University Press (OUP)
Date: 06-08-2017
DOI: 10.1111/RSSA.12276
Abstract: Decisions in statistical data analysis are often justified, criticized or avoided by using concepts of objectivity and subjectivity. We argue that the words ‘objective’ and ‘subjective’ in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context dependence. Together with stability, these make up a collection of virtues that we think is helpful in discussions of statistical foundations and practice. The advantage of these reformulations is that the replacement terms do not oppose each other and that they give more specific guidance about what statistical science strives to achieve. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgement of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling and socio-economic stratification. The aim of the paper is to push users and developers of statistical methods towards more effective use of diverse sources of information and more open acknowledgement of assumptions and goals.
Publisher: MDPI AG
Date: 16-05-2018
DOI: 10.3390/RISKS6020055
Publisher: Wiley
Date: 1980
Publisher: Elsevier BV
Date: 04-2016
Publisher: Taru Publications
Date: 05-1998
Publisher: Emerald
Date: 11-1998
DOI: 10.1108/13620439810234464
Abstract: In recent years there has been a revolutionary approach in business, government and industry due to advances in computer technology and efficient statistical methods. This paper examines and offers practical suggestions for those in human resources on improving output quality and the workplace atmosphere in light of Deming’s principles of human resource management.
Publisher: Oxford University Press (OUP)
Date: 29-12-2019
DOI: 10.1111/RSSA.12544
Publisher: Informa UK Limited
Date: 03-04-2017
Publisher: Virtus Interpress
Date: 2014
Abstract: The aim of the current study is to improve corporate performance through the application of the System Dynamics (SD) methodology. The paper discusses the importance of system dynamics modelling in enhancing corporate performance and how it shows the dynamic behaviour of the system. For this purpose a system dynamics model for an Indian steel company has been prepared. The paper also covers a brief introduction to system dynamics modelling, a brief narration of the steel sector and the process adopted in modelling. Some of the important corporate performance parameters, such as market share, revenue, employee strength, number of shareholders and installed capacity, have been taken to reflect corporate behaviour. The behaviour of these performance parameters over time is used both for validation of the model and for reflecting their future pattern. The paper concludes that the SD modelling approach has high potential in understanding corporate performance behaviour and thereby gaining insight into the corporate functioning and taking appropriate remedial steps for improving its performance.
Publisher: Oxford University Press (OUP)
Date: 04-2021
DOI: 10.1111/RSSA.12661
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/BFB0044275
Publisher: Wiley
Date: 27-03-2014
DOI: 10.1002/HPM.2244
Abstract: In view of high out-of-pocket costs and low spending even for basic healthcare for the poor employed in the unorganized sector, policy makers in India have turned their attention to developing a financing mechanism for social health insurance with the desire to provide quality care to the poor and economically disadvantaged. This study aims to assess and determine the disease profile, treatment expenditure and willingness to pay for health insurance among rickshaw pullers in Delhi. The study was conducted among 500 rickshaw pullers from five zones of the Municipal Corporation of Delhi, taking a sample of 100 from each zone. The average cost of treatment was Rs. 505 for outpatient and Rs. 3200 for inpatient care. To finance the treatment expenditure, 27.5% of the respondents spent from their household savings, and 43% had to borrow funds. Any "spell of sickness" and "total expenditure on acute illness" were significantly (p < 0.01) associated with the willingness to pay for health insurance. Overall, the majority (83%) of participants were willing to pay for health insurance. The study provides the evidence for the need for urgent policy development by introducing a social health insurance package including wage losses for vulnerable groups such as rickshaw pullers in the unorganized sector in India, who contribute significantly to the pollution-free and cheap transportation of the community, tourists and commercial goods as well.
Publisher: Oxford University Press (OUP)
Date: 23-09-2017
DOI: 10.1111/RSSB.12233
Abstract: Statistical network modelling has focused on representing the graph as a discrete structure, namely the adjacency matrix. When assuming exchangeability of this array—which can aid in modelling, computations and theoretical analysis—the Aldous–Hoover theorem informs us that the graph is necessarily either dense or empty. We instead consider representing the graph as an exchangeable random measure and appeal to the Kallenberg representation theorem for this object. We explore using completely random measures (CRMs) to define the exchangeable random measure, and we show how our CRM construction enables us to achieve sparse graphs while maintaining the attractive properties of exchangeability. We relate the sparsity of the graph to the Lévy measure defining the CRM. For a specific choice of CRM, our graphs can be tuned from dense to sparse on the basis of a single parameter. We present a scalable Hamiltonian Monte Carlo algorithm for posterior inference, which we use to analyse network properties in a range of real data sets, including networks with hundreds of thousands of nodes and millions of edges.
Publisher: ACM
Date: 04-05-2018
Publisher: JSTOR
Date: 03-1985
DOI: 10.2307/3616444
Abstract: In order to obtain the area of a curve by the method of the integral calculus the following two conditions should hold: (a) the equation of the curve must be known, (b) the function representing the equation of the curve must be integrable.
Publisher: Taru Publications
Date: 02-2013
Publisher: Wiley
Date: 26-12-2020
DOI: 10.1111/ACFI.12742
Abstract: This study enables practitioners and researchers to make an informed choice for a financial statement fraud detection model, rather than defaulting to popular, yet dated, models. Using a specifically devised performance criterion, our newly configured ensemble outperforms 31 others in the most comprehensive comparison to date spanning parametric, non‐parametric, big data and ensemble techniques. We use a large set of input variables and holdout data relative to prior studies. We find empirical support for financial and non‐financial variables covering the three Fraud Triangle factors. New findings include fraud risk being reduced with more debt, likely from increased monitoring by creditors.
Publisher: Emerald
Date: 04-2001
DOI: 10.1108/03074350110767132
Abstract: Describes the ability of modern computer‐driven multivariate statistical analysis to deal with complex data and the development of statistical models for predicting financial distress. Applies multivariate techniques to 1986‐1991 financial ratio data for Australian failed (29) and nonfailed (42) companies and explains the techniques used (principal components analysis, factor analysis, discriminant analysis and cluster analysis) and the different types of information they can provide to help identify the distress levels of companies. Predicts that multivariate methods will change the way researchers think about problems and design their research. An unusually clear exposition of the application of multivariate methods.
Publisher: Elsevier BV
Date: 07-1992
Publisher: Oxford University Press (OUP)
Date: 10-2022
DOI: 10.1111/RSSA.12931
Publisher: IEEE
Date: 2003
Publisher: Oxford University Press (OUP)
Date: 07-2022
DOI: 10.1111/RSSB.12526
Publisher: IEEE
Date: 2004
Publisher: Cambridge University Press (CUP)
Date: 07-1998
DOI: 10.2307/3620427
Publisher: Oxford University Press (OUP)
Date: 07-12-2019
DOI: 10.1111/RSSA.12539
Publisher: Oxford University Press (OUP)
Date: 10-2022
DOI: 10.1111/RSSA.12932
Publisher: Taru Publications
Date: 02-2012
Publisher: Oxford University Press (OUP)
Date: 07-12-2020
DOI: 10.1111/RSSA.12536
Publisher: Oxford University Press (OUP)
Date: 24-05-2018
DOI: 10.1111/RSSA.12373
Publisher: IGI Global
Date: 2014
DOI: 10.4018/978-1-4666-4745-9.CH004
Abstract: In this chapter, the authors consider some of the issues regarding the rational choice decision framework in neoclassical economics and how it can particularly be found wanting in the absence of due consideration for some of the underlying critical neurobiological factors which govern decision making. They develop a critical decision problem and explore the scenario where the solution predicted by formal economic theory may be in conflict with the decision that actually occurs. Such conflict is especially relevant in the context of economic decision making in emerging markets where there can be a lack of trust in the system by the agents operating within it. Based on logically consistent arguments derived from the extant literature, the authors argue that non-consideration of underlying neurobiological factors is a direct cause of this conflict.
Publisher: Elsevier BV
Date: 07-1996
Publisher: IGI Global
Date: 2020
DOI: 10.4018/978-1-7998-1662-1.CH005
Abstract: Blended learning is a buzzword these days. Millions of dollars are spent by schools, colleges, and universities to encourage their academic staff members to use blended learning for improving teaching performance and student satisfaction. There is no clear-cut definition of blended learning, and the authors feel it is just a set of tools or pedagogy that can be used in face-to-face teaching as well as online teaching. In this chapter, the authors have discussed some of the blended learning tools used and developed by the authors to improve teaching in the area of statistics and data analysis.
Publisher: Oxford University Press (OUP)
Date: 21-09-2018
DOI: 10.1111/RSSA.12400
Publisher: IEEE
Date: 06-12-2022
Publisher: Oxford University Press (OUP)
Date: 06-10-2021
DOI: 10.1111/RSSA.12760
Publisher: Elsevier BV
Date: 07-1996
Publisher: Oxford University Press (OUP)
Date: 21-09-2018
DOI: 10.1111/RSSA.12364
Abstract: Small area estimation is a research area in official and survey statistics of great practical relevance for national statistical institutes and related organizations. Despite rapid developments in methodology and software, researchers and users would benefit from having practical guidelines for the process of small area estimation. We propose a general framework for the production of small area statistics that is governed by the principle of parsimony and is based on three broadly defined stages, namely specification, analysis and adaptation, and evaluation. Emphasis is given to the interaction between a user of small area statistics and the statistician in specifying the target geography and parameters in the light of the available data. Model-free and model-dependent methods are described with a focus on model selection and testing, model diagnostics and adaptations such as use of data transformations. Uncertainty measures and the use of model and design-based simulations for method evaluation are also at the centre of the paper. We illustrate the application of the proposed framework by using real data for the estimation of non-linear deprivation indicators. Linear statistics, e.g. averages, are included as special cases of the general framework.
Publisher: Elsevier
Date: 2003
Publisher: Emerald
Date: 07-2006
DOI: 10.1108/14757700610686426
Abstract: The purpose of this paper is to perform a comparative study of the prediction performance of an artificial neural network (ANN) model against a linear prediction model, linear discriminant analysis (LDA), with regard to forecasting corporate credit ratings from financial statement data. The ANN model used in the study is a fully connected back‐propagation model with three layers of neurons. The paper uses a comparative approach whereby two prediction models – one based on ANN and the other based on LDA – are developed using an identically partitioned data set. The study found that the ANN model comprehensively outperformed the LDA model in both the training and test partitions of the data set. While the LDA model may have been hindered by omitted variables, this actually lends further credence to the ANN model, showing that the latter is more robust in dealing with missing data. A possible drawback in the model implementation probably lies in the selection of the various accounting ratios. Perhaps future replications of this study should look more carefully at choosing the ratios after duly addressing the problems of collinearity and duplication more rigorously. The findings of this study imply that since ANN models can better deal with complex data sets and do not require restraining assumptions like linearity and normality, they may be overall a better approach for corporate credit rating forecasts that use large financial data sets. This study brings out the effectiveness of non‐linear pattern-learning models as compared to linear ones in forecasts of financial solvency. This further highlights the practical importance of the new breed of computational tools available to techno‐savvy financial analysts and also to the providers of corporate credit.
Publisher: Hindawi Limited
Date: 11-02-2007
DOI: 10.1155/2007/39460
Abstract: It has often been argued that there exists an underlying biological basis of utility functions. Taking this line of argument a step further in this paper, we have aimed to computationally demonstrate the biological basis of the Black-Scholes functional form as applied to classical option pricing and hedging theory. The evolutionary optimality of the classical Black-Scholes function has been computationally established by means of a haploid genetic algorithm model. The objective was to minimize the dynamic hedging error for a portfolio of assets that is built to replicate the payoff from a European multi-asset option. The functional form that is seen to evolve over successive generations, and which best attains this optimization objective, is the classical Black-Scholes function extended to a multi-asset scenario.
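The classical Black-Scholes function that the genetic algorithm is shown to recover has a well-known closed form in the single-asset case. As a reference point, a minimal stdlib sketch of the European call price (the parameter values below are illustrative only):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Classical Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative parameters: at-the-money call, one year to expiry
price = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

The paper's hedging-error objective rewards portfolios whose replication cost tracks prices of this form, which is why the evolved functional form converges toward it.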
Publisher: IEEE
Date: 02-2015
Publisher: IGI Global
Date: 2019
DOI: 10.4018/978-1-5225-7277-0.CH010
Abstract: Credit ratings are an important metric for business managers and a contributor to economic growth. Forecasting such ratings might be a suitable application of big data analytics. As machine learning is one of the foundations of intelligent big data analytics, this chapter presents a comparative analysis of traditional statistical models and popular machine learning models for the prediction of Moody's long-term corporate debt ratings. Machine learning techniques such as artificial neural networks, support vector machines, and random forests generally outperformed their traditional counterparts in terms of both overall accuracy and the Kappa statistic. The parametric models may be hindered by missing variables and restrictive assumptions about the underlying distributions in the data. This chapter reveals the relative effectiveness of non-parametric big data analytics to model a complex process that frequently arises in business, specifically determining credit ratings.
Publisher: Emerald
Date: 20-10-2021
DOI: 10.1108/JMLC-10-2021-0108
Abstract: The paper aims at developing a global ranking system determining a country's appeal as a destination for money laundering. This paper uses principal component analysis (PCA), with a mix of standardised and unstandardised components relating to attractiveness, economic freedom and money laundering risk to come up with an index of money laundering appeal. Four components relating to economic feasibility, financial liberty, government spending and tax regime are critical in influencing a country's money laundering appeal. This paper attempts to use a standardised and replicable methodology to condense into a single measure the complex and multifaceted phenomenon of a country's appeal as a destination for money laundering, thus avoiding the difficulty associated with precisely calculating illicit financial flows. The ranking system could be used to determine the destinations attractive for laundering money. Such information can be used to come up with more effective preventative strategies to combat phenomena responsible for the stagnation of economic growth through tax evasion, corruption and creation of non-competitive markets. It is the first attempt to use a statistical technique to understand the underlying components of a country's money laundering appeal.
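A PCA-based index of the kind described can be sketched in stdlib Python: standardise the component indicators, extract the first principal component of their correlation matrix (here via power iteration), and score each country by its projection onto it. The data and dimensions below are illustrative, not the paper's components:

```python
def pc1_scores(rows, iters=500):
    """Score each row by its projection onto the first principal component
    of the column-standardised data (assumes non-constant columns)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    sds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5 for j in range(d)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(d)] for r in rows]
    # Correlation matrix of the standardised indicators
    corr = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(d)]
            for a in range(d)]
    # Leading eigenvector by power iteration
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(corr[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(z[i][j] * v[j] for j in range(d)) for i in range(n)]

# Illustrative two-indicator data for three hypothetical countries
scores = pc1_scores([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
```

Ranking countries by these scores condenses the multi-indicator picture into a single replicable measure, which is the design choice the paper exploits.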
Publisher: Informa UK Limited
Date: 2001
Publisher: Oxford University Press (OUP)
Date: 04-05-2023
Publisher: Oxford University Press (OUP)
Date: 10-05-2016
DOI: 10.1111/RSSA.12198
Publisher: Oxford University Press (OUP)
Date: 04-05-2023
Publisher: Oxford University Press (OUP)
Date: 04-05-2023
Publisher: Diva Enterprises Private Limited
Date: 2014
Publisher: Elsevier BV
Date: 04-1992
Publisher: Oxford University Press (OUP)
Date: 06-12-2020
DOI: 10.1111/RSSA.12637
Publisher: Wiley
Date: 27-10-2023
DOI: 10.1111/ACFI.13192
Publisher: Oxford University Press (OUP)
Date: 06-12-2021
DOI: 10.1111/RSSA.12636
Publisher: Springer Berlin Heidelberg
Date: 2005
DOI: 10.1007/11540007_38
Publisher: MDPI AG
Date: 09-10-2018
DOI: 10.3390/RISKS6040113
Abstract: The objective of the study is to perform corporate distress prediction for an emerging economy, such as India, where bankruptcy details of firms are not available. An exhaustive panel dataset extracted from Capital IQ has been employed for the purpose. Foremost, the study contributes by devising a novel framework to capture incipient signs of distress for Indian firms by employing a combination of firm-specific parameters. The strategy not only enables enlarging the sample of distressed firms but also enables obtaining robust results. The analysis applies both standard Logistic and Bayesian modeling to predict distressed firms in the Indian corporate sector, and thereby a comparison of the predictive ability of the two approaches has been carried out. Both in-sample and out-of-sample evaluation reveal a consistently better predictive capability when employing the Bayesian methodology. The study provides a useful structure to indicate the early signals of failure in the Indian corporate sector that is otherwise limited in the literature.
Publisher: IGI Global
Date: 2018
Abstract: In this paper the authors build on prior literature to develop an adaptive and time-varying metadata-enabled dynamic topic model (mDTM) and apply it to a large Weibo dataset using an online Gibbs sampler for parameter estimation. Their approach simultaneously captures the maximum number of inherent dynamic features of microblogs thereby setting it apart from other online document mining methods in the extant literature. In summary, the authors' results show a better performance of mDTM in terms of the quality of the mined information compared to prior research and showcases mDTM as a promising tool for the effective mining of microblogs in a rapidly changing global information space.
Publisher: IEEE
Date: 09-12-2021
Publisher: Emerald
Date: 20-03-2020
Abstract: The purpose of this study is to review the literature on money laundering and its related areas. The main objective is to identify any gaps in the literature and direct attention towards addressing them. A systematic review of the money laundering literature was conducted with an emphasis on the Pro-Quest, Scopus and Science-Direct databases. Broad research themes were identified after investigating the literature. The theme about the detection of money laundering was then further investigated. The major approaches of such detection are identified, as well as research gaps that could be addressed in future studies. The literature on money laundering can be classified into the following six broad areas: anti-money laundering framework and its effectiveness, the effect of money laundering on other fields and the economy, the role of actors and their relative importance, the magnitude of money laundering, new opportunities available for money laundering and detection of money laundering. Most studies about the detection of money laundering have focused on the use of innovative technologies, banking transactions or real estate- and trade-based money laundering. However, the literature on the detection of shell companies being explicitly used to launder funds is relatively scarce. This paper provides insights into an area related to money laundering where research is relatively scant. Shell companies incorporated in the UK alone were identified to be associated with laundering £80bn of stolen money between 2010 and 2014. The use of these entities to launder billions of dollars as witnessed through the laundromat schemes and several data leaks clearly indicate the need to focus on illicit financial flows through such entities.
Publisher: Springer Science and Business Media LLC
Date: 07-2018
Publisher: Informa UK Limited
Date: 1992
Publisher: Springer Science and Business Media LLC
Date: 12-1984
DOI: 10.1007/BF02481978
Publisher: Wiley
Date: 1992
Publisher: Emerald
Date: 03-04-2018
Abstract: Recent research has confirmed an underlying economic logic that connects each of the three vertices of the “fraud triangle” – a fundamental criminological model of factors driving occupational fraud. It is postulated that in the presence of economic motivation and opportunity (the first two vertices of the fraud triangle), the likelihood of an occupational fraud happening in an organization increases substantially if the overall organization culture is perceived as being slack toward fraud, as it helps potential fraudsters in rationalizing their actions (rationalization being the third vertex of the fraud triangle). This paper aims to offer a viable approach for the collection and processing of data to identify and operationalize the key factors underlying employee perception of organization culture toward occupational frauds. This paper reports and analyses the results of a pilot study conducted using a convenience sampling approach to identify and operationalize the key factors underlying employee perception of organization culture with respect to occupational frauds. Given a very small sample size, a numerical testing technique based on the binomial distribution has been applied to test for significance of the proportion of respondents who agree that a lenient organizational culture toward fraud can create a rationalization for fraud. The null hypothesis assumed no difference in the population proportions between those who agree and those who disagree with the view that a lenient organizational culture toward fraud can create a rationalization for fraud. Based on the results of the numerical test, the null hypothesis is rejected in favor of the alternative that the population proportion of those who agree with the stated view in fact exceeded the proportion of those who disagreed. The obvious limitation is the very small size of the sample obtained because of an extremely low rate of response to the survey questionnaires.
However, while of course a much bigger data set needs to be collected to develop a generalizable prediction model, the small sample was enough for the purpose of a pilot study. This paper makes two distinct practical contributions. First, it posits a viable empirical research plan for identifying, collecting and processing the right data to identify and operationalize the key underlying factors that capture an employee’s perception of organizational culture toward fraud as a basis for rationalizing an act of fraud. Second, it demonstrates via a small-scale pilot study that a more broad-based survey can indeed prove to be extremely useful in collating the sort of data that is needed to develop a computational model for predicting the likelihood of occupational fraud in any organization. This paper provides a viable framework which empirical researchers can follow to test some of the latest advances in the “fraud triangle” theory. It outlines a systematic and focused data collection method via a well-designed questionnaire that is effectively applicable to future surveys that are scaled up to collect data at a nationwide level.
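The significance test described in this abstract can be reproduced from first principles: under the null hypothesis of equal population proportions, the number of respondents who agree follows a Binomial(n, 0.5) distribution, and the one-sided p-value is the upper-tail probability. A minimal sketch, assuming hypothetical respondent counts since the abstract does not report them:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical pilot counts: 14 of 18 respondents agree with the stated view.
p_value = binom_tail(14, 18)
# p_value ≈ 0.0154 < 0.05, so the null of equal proportions would be rejected.
```

For larger surveys the same test is available off the shelf as `scipy.stats.binomtest(k, n, p=0.5, alternative="greater")`.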
Publisher: Oxford University Press (OUP)
Date: 05-03-2020
DOI: 10.1111/RSSA.12505
Abstract: Multiple-systems estimation is a key approach for quantifying hidden populations such as the number of victims of modern slavery. The UK Government published an estimate of 10,000–13,000 victims, constructed by the present author, as part of the strategy leading to the Modern Slavery Act 2015. This estimate was obtained by a stepwise multiple-systems method based on six lists. Further investigation shows that a small proportion of the possible models give rather different answers, and that other model-fitting approaches may choose one of these. Three data sets collected in the field of modern slavery, together with a data set about the death toll in the Kosovo conflict, are used to investigate the stability and robustness of various multiple-systems-estimation methods. The crucial aspect is the way that interactions between lists are modelled, because these can substantially affect the results. Model selection and Bayesian approaches are considered in detail, in particular to assess their stability and robustness when applied to real modern slavery data. A new Markov chain Monte Carlo Bayesian approach is developed; overall, this gives robust and stable results, at least for the examples considered. The software and data sets are freely and publicly available to facilitate wider implementation and further research.
Publisher: IEEE
Date: 2003
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 25-02-2021
DOI: 10.1519/JSC.0000000000003945
Abstract: Salagaras, BS, Mackenzie-Shalders, KL, Nelson, MJ, Fraysse, F, Wycherley, TP, Slater, GJ, McLellan, C, Kumar, K, and Coffey, VG. Comparisons of daily energy intake vs. expenditure using the GeneActiv accelerometer in elite Australian Football athletes. J Strength Cond Res 35(5): 1273–1278, 2021—To assess validity of the GeneActiv accelerometer for use within an athlete population and to compare energy expenditure (EE) with energy and macronutrient intake of elite Australian Football athletes during a competition week. The GeneActiv was first assessed for utility during high-intensity exercise against indirect calorimetry. Thereafter, 14 professional Australian Football athletes (age: 24 ± 4 [SD] years; height: 1.87 ± 0.08 m; body mass: 86 ± 10 kg) wore the accelerometer and had dietary intake assessed via dietitian-led 24-hour recalls throughout a continuous 7-day competition period (including match day). There was a significant relationship between metabolic equivalents and GeneActiv g·min⁻¹ (SEE = 1.77 METs, r² = 0.64, p < 0.0001). Across the in-season week, a significant difference occurred only on days 3 and 4 (day 3: energy intake [EI] 137 ± 31 kJ·kg⁻¹·d⁻¹, 11,763 ± 2,646 kJ·d⁻¹ vs. EE 186 ± 14 kJ·kg⁻¹·d⁻¹, 16,018 ± 1,973 kJ·d⁻¹, p < 0.05, d = −1.4; day 4: EI 179 ± 44 kJ·kg⁻¹·d⁻¹, 15,413 ± 3,960 kJ·d⁻¹ vs. EE 225 ± 42 kJ·kg⁻¹·d⁻¹, 19,313 ± 3,072 kJ·d⁻¹, d = −0.7). Carbohydrate intake (CI) was substantially below current sports nutrition recommendations on 6 of 7 days, with deficits ranging from −1 to −7.2 g·kg⁻¹·d⁻¹ (p < 0.05), whereas daily protein and fat intake was adequate. In conclusion, the GeneActiv provides effective estimation of EE during weekly preparation for professional team sport competition. Australian Footballers attempt to periodize dietary EI to varying daily training loads but fail to match expenditure on higher-training-load days.
Specific dietary strategies to increase CI may be beneficial to achieve appropriate energy balance and macronutrient distribution, particularly on days where athletes undertake multiple training sessions.
Publisher: Emerald
Date: 27-04-2018
Abstract: Financial distress is a socially and economically important problem that affects companies the world over. The power to better understand financial distress, and hence help prevent businesses from failing, has the potential to save not only the company but also to help shield economies from sustained downturn. Although Islamic banks constitute a fraction of total banking assets, their importance has been increasing substantially, as their asset growth rate has surpassed that of conventional banks in recent years. The paper aims to discuss these issues. This paper uses a data set comprising 101 international publicly listed Islamic banks to advance financial distress prediction (FDP) by utilising cutting-edge stochastic models, namely decision trees, stochastic gradient boosting and random forests. The most important variables pertaining to forecasting corporate failure are determined from an initial set of 18 variables. The results indicate that the “Working Capital/Total Assets” ratio is the most crucial variable for forecasting financial distress under both the traditional “Altman Z-Score” and the “Altman Z-Score for Service Firms” methods, whereas under the “Standardised Profits” method the “Return on Revenue” ratio is the most important variable. This provides empirical evidence to support the recommendations made by the Basel Accords for assessing a bank’s capital risks, specifically in relation to Islamic banking. These findings provide a valuable addition to the limited literature surrounding Islamic banking in general, and FDP for Islamic banking in particular, by showcasing the most pertinent variables in forecasting financial distress so that appropriate proactive actions can be taken.
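The two Altman baselines named in this abstract are standard published formulas. A sketch of how they are computed, using the textbook coefficients and zone cut-offs; the paper's exact operationalization for Islamic banks may differ, and the example bank below is hypothetical:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman (1968) Z-Score for publicly listed firms.

    Inputs are the standard ratios: Working Capital/Total Assets,
    Retained Earnings/Total Assets, EBIT/Total Assets,
    Market Value of Equity/Total Liabilities, Sales/Total Assets.
    """
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def altman_z_service(wc_ta, re_ta, ebit_ta, bve_tl):
    """Altman Z''-Score variant for service/non-manufacturing firms:
    drops the asset-turnover term and reweights the remaining ratios."""
    return 6.56 * wc_ta + 3.26 * re_ta + 6.72 * ebit_ta + 1.05 * bve_tl

def zone(z, distress_cut, safe_cut):
    """Map a score to Altman's distress/grey/safe zones."""
    return "distress" if z < distress_cut else "safe" if z > safe_cut else "grey"

# Hypothetical bank; cut-offs are 1.81/2.99 for Z and 1.1/2.6 for Z''.
z = altman_z(0.2, 0.2, 0.1, 1.0, 1.5)
print(zone(z, 1.81, 2.99))  # prints "grey" (z = 2.95)
```

In an FDP study these scores typically serve as the distress label, while models such as random forests rank the underlying ratios by importance.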
Publisher: CRC Press
Date: 26-07-2021
Publisher: Elsevier BV
Date: 09-2023
Publisher: IEEE
Date: 06-12-2022
Publisher: Taru Publications
Date: 17-02-2018
Publisher: JSTOR
Date: 1986
DOI: 10.2307/2981553
Publisher: Virtus Interpress
Date: 2015
Abstract: The paper aims to augment good corporate governance as a whole with the efficiency and effectiveness of system dynamics via a system dynamics model. The majority of studies of corporate governance focus on financial issues, ownership, agency theory, etc., rather than analysing the relations among all aspects of the corporate governance system as a whole. This study aims to address this gap by approaching corporate governance in a holistic manner. The value is two-fold: i) it shows the importance of the system dynamics methodology, and ii) it can help an organization quantify corporate governance for its development in a holistic manner.
Publisher: Scientific Research Publishing, Inc.
Date: 2018
Publisher: Elsevier BV
Date: 2015
Publisher: Oxford University Press (OUP)
Date: 11-10-2016
DOI: 10.1111/RSSB.12167
Abstract: What is the difference between a prediction that is made with a causal model and that with a non-causal model? Suppose that we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the ex le of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments.
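The core idea of this abstract, searching over predictor sets and keeping only those whose regression behaviour is invariant across environments, can be illustrated on simulated data. A deliberately simplified sketch, not the paper's method: it uses one-predictor regressions and a crude slope/residual-variance agreement check in place of a formal invariance test, and all data are synthetic:

```python
import random
from statistics import mean, variance

random.seed(0)

# Synthetic structural equation model with two environments.
# X1 -> Y is the causal mechanism and is identical everywhere;
# X2 is a child of Y whose noise level changes with the environment.
def draw(env, n=500):
    rows = []
    for _ in range(n):
        x1 = random.gauss(2.0 if env == 1 else 0.0, 1)      # X1 mean-shifted in env 1
        y = 1.5 * x1 + random.gauss(0, 1)                   # fixed causal mechanism
        x2 = y + random.gauss(0, 0.5 if env == 0 else 3.0)  # env-dependent non-causal link
        rows.append((x1, x2, y))
    return rows

def fit(rows, idx):
    """One-predictor OLS of y on column idx: returns (slope, residual variance)."""
    xs = [r[idx] for r in rows]
    ys = [r[2] for r in rows]
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return b, variance([y - (a + b * x) for x, y in zip(xs, ys)])

def looks_invariant(idx, envs, tol=0.3):
    # crude invariance screen: slope and residual variance agree across environments
    (b0, v0), (b1, v1) = (fit(e, idx) for e in envs)
    return abs(b0 - b1) < tol and abs(v0 - v1) < tol * max(v0, v1)

envs = [draw(0), draw(1)]
accepted = [idx for idx in (0, 1) if looks_invariant(idx, envs)]
# Only X1 (index 0) should survive the screening: regressing Y on X2
# yields slopes and residual variances that shift between environments.
```

The paper's actual procedure replaces the ad hoc tolerance with proper hypothesis tests over all predictor subsets and intersects the accepted sets to obtain confidence statements about the causal predictors.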
Publisher: Czech Geological Survey
Date: 20-08-2018
Publisher: IEEE
Date: 03-03-2023
Publisher: Taru Publications
Date: 07-1999
Publisher: Walter de Gruyter GmbH
Date: 12-2018
Abstract: A potential health crisis looms large in Punjab, India, where alcohol consumption has risen dramatically. Adolescents are especially vulnerable to the toxic effects of alcohol. This empirical study presents a pedagogical intervention, Children as Agents of Social Change (CASC), which aimed to raise awareness about the effects of alcohol using an ICT-supported educational dialogue between adolescent students and alcohol experts from multiple domains. Primary data consist of pre- and post-test questionnaires from the control and experimental groups (N=379) and an interview with the teacher-in-charge of one experimental school. Results indicate that the intervention significantly improved students’ scientific knowledge about alcohol, changed their attitudes towards media and celebrity promotion of alcohol, and enabled them to surmount the odds to spread information, acquired during the CASC intervention, to people outside the school, including adult drinkers. Learner-centric pedagogy combined with ICT clearly amplified transformative learning. CASC appears to be a promising approach in Education for Sustainable Development (ESD) and can be used for multiple sustainability issues.
Publisher: Oxford University Press (OUP)
Date: 30-05-2017
DOI: 10.1111/RSSA.12290
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Kuldeep Kumar.