ORCID Profile
0000-0002-7205-8821
Current Organisation
Queensland University of Technology
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Information Systems | Information Systems Development Methodologies | Information Systems Management | Applied Mathematics | Operations Research | CAD/CAM Systems | Conceptual Modelling
Expanding Knowledge in the Information and Computing Sciences | Application Tools and System Utilities | Social Structure and Health | Computer Software and Services not elsewhere classified | Health Policy Economic Outcomes
Publisher: Elsevier BV
Date: 04-2017
Publisher: Association for Computing Machinery (ACM)
Date: 23-01-2015
DOI: 10.1145/2629446
Abstract: Business process analysis and process mining, particularly within the health care domain, remain under-utilized. Applied research that employs such techniques to routinely collected health care data enables stakeholders to empirically investigate care as it is delivered by different health providers. However, cross-organizational mining and the comparative analysis of processes present a set of unique challenges in terms of ensuring population and activity comparability, visualizing the mined models, and interpreting the results. Without addressing these issues, health providers will find it difficult to use process mining insights, and the potential benefits of evidence-based process improvement within health will remain unrealized. In this article, we present a brief introduction to the nature of health care processes, a review of process mining in health literature, and a case study conducted to explore and learn how health care data and cross-organizational comparisons with process-mining techniques may be approached. The case study applies process-mining techniques to administrative and clinical data for patients who present with chest pain symptoms at one of four public hospitals in South Australia. We demonstrate an approach that provides detailed insights into clinical (quality of patient health) and fiscal (hospital budget) pressures in the delivery of health care. We conclude by discussing the key lessons learned from our experience in conducting business process analysis and process mining based on the data from four different hospitals.
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: Springer International Publishing
Date: 2022
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Springer International Publishing
Date: 2021
Publisher: Elsevier BV
Date: 05-2020
Publisher: Emerald
Date: 16-05-2022
DOI: 10.1108/BPMJ-03-2021-0170
Abstract: Understanding of how organisations can institutionalise the outcomes of process improvement initiatives is limited. This paper explores how process changes resulting from improvement initiatives are adhered to, so that the changed processes become the new “norm” and people do not revert to old practices. This study proposes an institutionalisation process for process improvement initiatives. Firstly, a literature review identified Tolbert and Zucker’s (1996) institutionalisation framework as a suitable conceptual framework on which to base the enquiry. The second phase (the focus of this paper) applied the findings from two case studies to adapt this framework (its stages and related factors) to fit process improvement contexts. The paper presents an empirically and theoretically supported novel institutionalisation process for process improvement initiatives. The three stages of the institutionalisation process presented by Tolbert and Zucker (1996) have been respecified into four stages, explaining how process changes are institutionalised through “Planning”, “Implementation”, “Objectification” and “Sedimentation” (the original first stage, i.e. “Habitualisation”, being divided into Planning and Implementation). Some newly identified Business Process Management (BPM) specific factors influencing the institutionalisation processes are also discussed and triangulated with the BPM literature. The study contributes to the BPM literature by conceptualising and theorising the stages of institutionalisation of process improvement initiatives. In doing so, the study explicitly identifies and considers several key contextual factors that drive the stages of institutionalisation. Practitioners can use this to better manage process change and future researchers can use this framework to operationalise institutionalisation of process change.
This is the first research study that provides an empirically supported and clearly conceptualised understanding of the stages of institutionalising process improvement outcomes.
Publisher: Elsevier BV
Date: 03-2009
Publisher: Elsevier BV
Date: 06-2009
Publisher: Association for Computing Machinery (ACM)
Date: 23-03-2017
DOI: 10.1145/3041218
Abstract: In most business processes, several activities need to be executed by human resources and cannot be fully automated. To evaluate resource performance and identify best practices as well as opportunities for improvement, managers need objective information about resource behaviors. Companies often use information systems to support their processes, and these systems record information about process execution in event logs. We present a framework for analyzing and evaluating resource behavior through mining such event logs. The framework provides (1) a method for extracting descriptive information about resource skills, utilization, preferences, productivity, and collaboration patterns; (2) a method for analyzing relationships between different resource behaviors and outcomes; and (3) a method for evaluating the overall resource productivity, tracking its changes over time, and comparing it to the productivity of other resources. To demonstrate the applicability of our framework, we apply it to analyze employee behavior in an Australian company and evaluate its usefulness by a survey among industry managers.
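The first component of the framework described above, descriptive metrics mined from event logs, can be illustrated with a toy sketch. The dictionary keys (`case`, `activity`, `resource`) are assumptions for illustration, not the paper's actual schema.

```python
# Toy sketch of one descriptive metric: per-resource workload counted
# from an event log. Field names are illustrative assumptions.
from collections import Counter

def workload(event_log):
    """Count how many activities each resource executed."""
    return Counter(ev["resource"] for ev in event_log)

log = [
    {"case": "c1", "activity": "Review", "resource": "alice"},
    {"case": "c1", "activity": "Approve", "resource": "bob"},
    {"case": "c2", "activity": "Review", "resource": "alice"},
]
counts = workload(log)  # alice executed 2 activities, bob 1
```

A real analysis, as the abstract describes, would extend such counts with time windows, activity durations, and links to process outcomes.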
Publisher: Springer International Publishing
Date: 2014
Publisher: Elsevier BV
Date: 06-2014
Publisher: Springer International Publishing
Date: 2015
Publisher: Springer Science and Business Media LLC
Date: 16-12-2015
Publisher: Springer International Publishing
Date: 2021
Publisher: Springer International Publishing
Date: 2015
Publisher: Elsevier BV
Date: 09-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Emerald
Date: 04-06-2018
DOI: 10.1108/BPMJ-12-2016-0235
Abstract: Multidisciplinary business process management (BPM) research can reap significant impact. We can particularly benefit from incorporating accounting concepts to address some of the key BPM challenges, such as value-creation and return on investment of BPM activities. However, research which addresses a relationship between BPM and accounting is scarce. The purpose of this paper is to provide a detailed synthesis of the current literature that has integrated accounting aspects with BPM. The authors profile and thematically describe existing research, and derive evidence-based directions to guide future research. A multi-staged structured literature review approach to search for the two broad themes, accounting and BPM, supported by NVivo (to manage the papers and the coding and analysis processes) was designed and followed. The paper confirms the dearth of work that ties the two disciplines, despite the synergetic multidisciplinary results that can be attained. Available literature is mostly from the management accounting perspective and relates to describing how performance management, in particular performance measurement, can be applicable to process improvement initiatives together with tools such as activity-based costing and the balanced scorecard. There is a lack of research that examines BPM in relation to any financial accounting perspectives (such as external reporting). Future research directions are proposed together with implications for practitioners with the findings of this structured literature review. The paper provides a detailed synthesis of the existing literature on the nexus between accounting and BPM. It summarizes the implications for practitioners and provides directions for future research by identifying key gaps and opportunities with a sound contextual basis for extension and new work. Effective literature reviews create strong foundations for future research and accumulate the otherwise scattered knowledge into a single place. 
This is the first structured literature review that provides a detailed synthesis of the research that ties together the accounting and BPM disciplines, providing a basis for future research directions together with implications for practitioners.
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: Emerald
Date: 28-02-2023
DOI: 10.1108/BPMJ-09-2022-0453
Abstract: Process mining (PM) specialises in extracting insights from event logs to facilitate the improvement of an organisation's business processes. Industry trends show the proliferation and continued growth of PM techniques. To address the minimal attention given to developing empirically supported frameworks to assess the nature of impact in the PM domain, this study proposes a framework that identifies the key categories of PM impacts and their interrelationships. The qualitatively derived framework is built, re-specified and validated from a diverse collection of 62 PM case reports. With multiple rounds of coding supported by coder corroborations, inductively extracted concepts relating to impact from a first set of 12 case reports were grouped into themes and sub-themes to derive an a-priori framework by adopting the balanced scorecard as a theoretical lens. Concepts from the remaining 50 case reports were deductively grouped to re-specify and validate the proposed PM impacts framework. Further analysis identified interrelationships between impacts, which extends our understanding of the identified PM impacts. The proposed framework captures PM impacts in four main categories: (a) impact on the process, (b) customer impact, (c) financial impact, and (d) impact on innovation and learning. The authors extended this analysis to identify the interrelationships between these categories, which vividly demonstrates how impact on the process mediates the attainment of the other three impact types. The need for a deeper understanding of PM impacts within the context of contemporary PM practice is addressed by this work. The PM impacts framework provides a classification of PM impacts into four categories with 19 subcategories. It also identifies direct, moderating and mediating relationships between categories and subcategories whilst highlighting the role of impact on the process as a precursor to the other types of PM impact.
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: Elsevier BV
Date: 03-2017
Publisher: Springer International Publishing
Date: 2014
Publisher: Elsevier BV
Date: 07-2016
Publisher: IEEE
Date: 07-2014
Publisher: Springer Nature Switzerland
Date: 2023
Publisher: IEEE
Date: 05-2015
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer International Publishing
Date: 2021
Publisher: JMIR Publications Inc.
Date: 12-09-2022
Abstract: The promise of digital health is principally dependent on the ability to electronically capture data that can be analyzed to improve decision-making. However, the ability to effectively harness data has proven elusive, largely because of the quality of the data captured. Despite the importance of data quality (DQ), an agreed-upon DQ taxonomy evades literature. When consolidated frameworks are developed, the dimensions are often fragmented, without consideration of the interrelationships among the dimensions or their resultant impact. The aim of this study was to develop a consolidated digital health DQ dimension and outcome (DQ-DO) framework to provide insights into 3 research questions: What are the dimensions of digital health DQ? How are the dimensions of digital health DQ related? and What are the impacts of digital health DQ? Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, a developmental systematic literature review was conducted of peer-reviewed literature focusing on digital health DQ in predominately hospital settings. A total of 227 relevant articles were retrieved and inductively analyzed to identify digital health DQ dimensions and outcomes. The inductive analysis was performed through open coding, constant comparison, and card sorting with subject matter experts to identify digital health DQ dimensions and digital health DQ outcomes. Subsequently, a computer-assisted analysis was performed and verified by DQ experts to identify the interrelationships among the DQ dimensions and relationships between DQ dimensions and outcomes. The analysis resulted in the development of the DQ-DO framework.
The digital health DQ-DO framework consists of (1) 6 dimensions of DQ, namely accessibility, accuracy, completeness, consistency, contextual validity, and currency; (2) interrelationships among the dimensions of digital health DQ, with consistency being the most influential dimension impacting all other digital health DQ dimensions; (3) 5 digital health DQ outcomes, namely clinical, clinician, research-related, business process, and organizational outcomes; and (4) relationships between the digital health DQ dimensions and DQ outcomes, with the consistency and accessibility dimensions impacting all DQ outcomes. The DQ-DO framework developed in this study demonstrates the complexity of digital health DQ and the necessity for reducing digital health DQ issues. The framework further provides health care executives with holistic insights into DQ issues and resultant outcomes, which can help them prioritize which DQ-related problems to tackle first.
Publisher: Springer International Publishing
Date: 2017
Publisher: Springer International Publishing
Date: 2014
Publisher: Springer Berlin Heidelberg
Date: 2013
Publisher: Emerald
Date: 06-02-2009
DOI: 10.1108/14637150910931479
Abstract: The purpose of this paper is to demonstrate that process verification has matured to a level where it can be used in practice. This paper reports on new verification techniques that can be used to assess the correctness of real‐life models. The proposed approach relies on using formal methods to determine the correctness of business processes with cancellation and OR‐joins. The paper also demonstrates how reduction rules can be used to improve the efficiency. These techniques are presented in the context of the workflow language yet another workflow language (YAWL) that provides direct support for the 20 most frequently used patterns found today (including cancellation and OR‐joins). But the results also apply to other languages with these features (e.g. BPMN, EPCs, UML activity diagrams, etc.). An editor has been developed that provides diagnostic information based on the techniques presented in this paper. The paper proposes four properties for business processes with cancellation and OR‐joins, namely: soundness, weak soundness, irreducible cancellation regions and immutable OR‐joins, and develops new techniques to verify these properties. Reduction rules have been used as a means of improving the efficiency of the algorithm. The paper demonstrates the feasibility of this verification approach using a realistic and complex business process, the visa application process for general skilled migration to Australia, modelled as a YAWL workflow with cancellation regions and OR‐joins. Business processes sometimes require complex execution interdependencies to properly complete a process. For instance, it is possible that certain activities need to be cancelled mid‐way through the process. Some parallel activities may require complex “wait and see” style synchronisation depending on a given context. These types of business processes can be found in various domains, such as application integration, B2B commerce, web service composition and workflow systems.
Even though cancellation and sophisticated join structures are present in many business processes, existing verification techniques are unable to deal with such processes. Hence, this paper plays an important role in making process verification a reality.
Publisher: JMIR Publications Inc.
Date: 31-03-2023
DOI: 10.2196/42615
Abstract: The promise of digital health is principally dependent on the ability to electronically capture data that can be analyzed to improve decision-making. However, the ability to effectively harness data has proven elusive, largely because of the quality of the data captured. Despite the importance of data quality (DQ), an agreed-upon DQ taxonomy evades literature. When consolidated frameworks are developed, the dimensions are often fragmented, without consideration of the interrelationships among the dimensions or their resultant impact. The aim of this study was to develop a consolidated digital health DQ dimension and outcome (DQ-DO) framework to provide insights into 3 research questions: What are the dimensions of digital health DQ? How are the dimensions of digital health DQ related? and What are the impacts of digital health DQ? Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, a developmental systematic literature review was conducted of peer-reviewed literature focusing on digital health DQ in predominately hospital settings. A total of 227 relevant articles were retrieved and inductively analyzed to identify digital health DQ dimensions and outcomes. The inductive analysis was performed through open coding, constant comparison, and card sorting with subject matter experts to identify digital health DQ dimensions and digital health DQ outcomes. Subsequently, a computer-assisted analysis was performed and verified by DQ experts to identify the interrelationships among the DQ dimensions and relationships between DQ dimensions and outcomes. The analysis resulted in the development of the DQ-DO framework. 
The digital health DQ-DO framework consists of (1) 6 dimensions of DQ, namely accessibility, accuracy, completeness, consistency, contextual validity, and currency; (2) interrelationships among the dimensions of digital health DQ, with consistency being the most influential dimension impacting all other digital health DQ dimensions; (3) 5 digital health DQ outcomes, namely clinical, clinician, research-related, business process, and organizational outcomes; and (4) relationships between the digital health DQ dimensions and DQ outcomes, with the consistency and accessibility dimensions impacting all DQ outcomes. The DQ-DO framework developed in this study demonstrates the complexity of digital health DQ and the necessity for reducing digital health DQ issues. The framework further provides health care executives with holistic insights into DQ issues and resultant outcomes, which can help them prioritize which DQ-related problems to tackle first.
Publisher: World Scientific Pub Co Pte Lt
Date: 03-2009
DOI: 10.1142/S0218843009002002
Abstract: Workflow languages offer constructs for coordinating tasks. Among these constructs are various types of splits and joins. One type of join, which shows up in various incarnations, is the OR-join. Different approaches assign a different (often only intuitive) semantics to this type of join, though they do share the common theme that branches that cannot complete will not be waited for. Many systems and languages struggle with the semantics and implementation of the OR-join because its non-local semantics require a synchronization depending on the analysis of future execution paths. The presence of cancelation features, potentially unbounded behavior, and other OR-joins in a workflow further complicates the formal semantics of the OR-join. In this paper, the concept of the OR-join is examined in detail in the context of the workflow language YAWL, a powerful workflow language designed to support a collection of workflow patterns and inspired by Petri nets. The paper provides a suitable (non-local) semantics for an OR-join and gives a concrete algorithm with two optimization techniques to support the implementation. This approach exploits a link that is proposed between YAWL and reset nets, a variant of Petri nets with a special type of arc that can remove all tokens from a place when its transition fires. Through the behavior of reset arcs, the behavior of cancelation regions can be captured in a natural manner.
Publisher: Association for Computing Machinery (ACM)
Date: 05-04-2022
DOI: 10.1145/3511707
Abstract: Real-life event logs, reflecting the actual executions of complex business processes, are faced with numerous data quality issues. Extensive data sanity checks and pre-processing are usually needed before historical data can be used as input to obtain reliable data-driven insights. However, most of the existing algorithms in process mining, a field focusing on data-driven process analysis, do not take any data quality issues or the potential effects of data pre-processing into account explicitly. This can result in erroneous process mining results, leading to inaccurate, or misleading conclusions about the process under investigation. To address this gap, we propose data quality annotations for event logs, which can be used by process mining algorithms to generate quality-informed insights. Using a design science approach, requirements are formulated, which are leveraged to propose data quality annotations. Moreover, we present the “Quality-Informed visual Miner” plug-in to demonstrate the potential utility and impact of data quality annotations. Our experimental results, utilising both synthetic and real-life event logs, show how the use of data quality annotations by process mining techniques can assist in increasing the reliability of performance analysis results.
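The idea of data quality annotations on event logs can be sketched in a few lines. This is an illustrative toy, not the authors' "Quality-Informed visual Miner" plug-in; the `Event` fields and quality labels are assumptions made for the example.

```python
# Illustrative sketch only: attach data-quality annotations to event-log
# entries so downstream analysis can discount unreliable timestamps.
# Event fields and quality labels are assumptions, not the paper's design.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Event:
    case_id: str
    activity: str
    timestamp: Optional[str]  # ISO-8601 string, or None if missing
    quality: dict = field(default_factory=dict)

def annotate_quality(events):
    """Flag events whose timestamp is missing or date-only (coarse)."""
    for ev in events:
        if ev.timestamp is None:
            ev.quality["timestamp"] = "missing"
        elif len(ev.timestamp) <= 10:  # e.g. '2021-03-01' has no time part
            ev.quality["timestamp"] = "coarse"
        else:
            ev.quality["timestamp"] = "ok"
    return events

log = [
    Event("c1", "Register", "2021-03-01T09:15:00"),
    Event("c1", "Triage", "2021-03-01"),
    Event("c1", "Discharge", None),
]
annotated = annotate_quality(log)
reliable = [ev for ev in annotated if ev.quality["timestamp"] == "ok"]
```

A quality-informed technique, in the spirit of the abstract, would then propagate such annotations into its performance results rather than silently dropping flagged events.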
Publisher: MDPI AG
Date: 14-05-2020
Abstract: In this paper we report on key findings and lessons from a process mining case study conducted to analyse transport pathways discovered across the time-critical phase of pre-hospital care for persons involved in road traffic crashes in Queensland (Australia). In this study, a case is defined as being an individual patient’s journey from roadside to definitive care. We describe challenges in constructing an event log from source data provided by emergency services and hospitals, including record linkage (no standard patient identifier), and constructing a unified view of response, retrieval, transport and pre-hospital care from interleaving processes of the individual service providers. We analyse three separate cohorts of patients according to their degree of interaction with Queensland Health’s hospital system (C1: no transport required, C2: transported but no Queensland Health hospital, C3: transported and hospitalisation). Variant analysis and subsequent process modelling show high levels of variance in each cohort resulting from a combination of data collection, data linkage and actual differences in process execution. For Cohort 3, automated process modelling generated ’spaghetti’ models. Expert-guided editing resulted in readable models with acceptable fitness, which were used for process analysis. We also conduct a comparative performance analysis of transport segments based on hospital ‘remoteness’.
With regard to the field of process mining, we reach various conclusions including (i) in a complex domain, the current crop of automated process algorithms do not generate readable models, however, (ii) such models provide a starting point for expert-guided editing of models (where the tool allows) which can yield models that have acceptable quality and are readable by domain experts, (iii) process improvement opportunities were largely suggested by domain experts (after reviewing analysis results) rather than being directly derived by process mining tools, meaning that the field needs to become more prescriptive (automated derivation of improvement opportunities).
Publisher: Springer Science and Business Media LLC
Date: 24-05-2016
Publisher: Springer Berlin Heidelberg
Date: 11-04-2014
Publisher: Springer Berlin Heidelberg
Date: 2013
Publisher: Elsevier BV
Date: 08-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Elsevier BV
Date: 06-2018
Publisher: Springer International Publishing
Date: 2017
Publisher: Springer International Publishing
Date: 2014
Publisher: Association for Computing Machinery (ACM)
Date: 12-10-2016
DOI: 10.1145/2980764
Abstract: The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.
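The fitness notion underlying the repair approach above, aligning observed behaviour with a model at minimum cost for inserted or skipped activities, can be approximated with a small edit-distance sketch. Real alignments run against a full process model rather than a single model trace; this toy is for illustration only and is not the ProM implementation.

```python
# Simplified sketch of alignment cost: the minimum number of log-only /
# model-only moves needed to reconcile a log trace with a model trace
# (matching activities are free synchronous moves). Illustrative only;
# real conformance checking aligns against a whole process model.
from functools import lru_cache

def align_cost(log_trace, model_trace):
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == len(log_trace):        # only model moves remain
            return len(model_trace) - j
        if j == len(model_trace):      # only log moves remain
            return len(log_trace) - i
        if log_trace[i] == model_trace[j]:
            return d(i + 1, j + 1)     # synchronous move, cost 0
        return 1 + min(d(i + 1, j), d(i, j + 1))
    return d(0, 0)

cost = align_cost(("register", "review", "pay"), ("register", "pay"))
# one log-only move: "review" is not covered by the model trace
```

As the abstract notes, computing such alignments repeatedly is expensive, which motivates the paper's approximation strategies.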
Publisher: Springer International Publishing
Date: 2022
DOI: 10.1007/978-3-030-98581-3_7
Abstract: Process mining facilitates analysis of business processes using event logs derived from historical records of process executions stored in organisations’ information systems. Most existing process mining techniques only consider data directly related to process execution (endogenous data). Data not directly representable as attributes of either events or traces (which includes exogenous data), are generally not considered. Exogenous data may be used by process participants in making decisions about execution paths. However, as exogenous data is not represented in event logs, its impact on such decision making is opaque and cannot currently be assessed by existing process mining techniques. This paper shows how exogenous data can be used in process mining, in particular discovery and enhancement techniques, to understand its influence on process decisions. In particular, we focus on time series which represent periodic observations of e.g. weather measurements, city health alerts or patient vital signs. We show that exogenous time series can be aligned and transformed into new attributes to annotate events in an event log. Then, we use these attributes to discover preconditions in a Petri net with exogenous data (xDPN), thus revealing the exogenous data’s influence on the process. Using our framework and a real-life data set from the medical domain, we evaluate the influence of exogenous data on decision points that are non-deterministic in an xDPN.
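The alignment of exogenous time series to events described above can be illustrated with a minimal sketch: each event is annotated with the most recent observation at or before its timestamp. Integer timestamps are used purely for illustration; this is not the paper's xDPN implementation.

```python
# Minimal sketch (not the paper's xDPN implementation): annotate each
# event in a trace with the latest exogenous observation (e.g. a vital
# sign) recorded at or before the event's time.
import bisect

def annotate_with_series(trace, series):
    """trace: list of (event_name, time); series: time-sorted (time, value).
    Returns (event_name, time, latest_value_or_None) triples."""
    times = [t for t, _ in series]
    out = []
    for name, t in trace:
        i = bisect.bisect_right(times, t) - 1
        out.append((name, t, series[i][1] if i >= 0 else None))
    return out

trace = [("Admit", 2), ("Assess", 5), ("Treat", 9)]
heart_rate = [(1, 88), (4, 104), (8, 97)]  # (time, bpm) observations
annotated = annotate_with_series(trace, heart_rate)
```

In the paper's terms, the transformed values become event attributes from which preconditions at decision points can then be discovered.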
Publisher: Elsevier BV
Date: 08-2017
Publisher: Springer International Publishing
Date: 2018
Publisher: Association for Computing Machinery (ACM)
Date: 28-09-2023
DOI: 10.1145/3613247
Abstract: Since its emergence over two decades ago, process mining has flourished as a discipline, with numerous contributions to its theory, widespread practical applications, and mature support by commercial tooling environments. However, its potential for significant organisational impact is hindered by poor quality event data. Process mining starts with the acquisition and preparation of event data coming from different data sources. These are then transformed into event logs, consisting of process execution traces including multiple events. In real-life scenarios, event logs suffer from significant data quality problems, which must be recognised and effectively resolved for obtaining meaningful insights from process mining analysis. Despite its importance, the topic of data quality in process mining has received limited attention. In this paper, we discuss the emerging challenges related to process-data quality from both a research and practical point of view. Additionally, we present a corresponding research agenda with key research directions.
Publisher: Springer Berlin Heidelberg
Date: 2013
Publisher: Elsevier BV
Date: 03-2010
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Elsevier BV
Date: 09-2009
Publisher: Elsevier BV
Date: 11-2022
DOI: 10.1016/J.ARTMED.2022.102409
Abstract: Process mining is a well-established discipline with applications in many industry sectors, including healthcare. To date, few publications have considered the context in which processes execute. Little consideration has been given as to how contextual data (exogenous data) can be practically included for process mining analysis, beyond including case or event attributes in a typical event log. We show that the combination of process data (endogenous) and exogenous data can generate insights not possible with standard process mining techniques. Our contributions are a framework for process mining with exogenous data and new analyses, where exogenous data and process behaviour are linked to process outcomes. Our new analyses visualise exogenous data, highlighting the trends and variations, to show where overlaps or distinctions exist between outcomes. We applied our analyses in a healthcare setting and show that clinicians could extract insights about differences in patients' vital signs (exogenous data) relevant to clinical outcomes. We present two evaluations, using a publicly available data set, MIMIC-III, to demonstrate the applicability of our analysis. These evaluations show that process mining can integrate large amounts of physiologic data and interventions, with resulting discrimination and conversion to clinically interpretable information.
Start Date: 05-2020
End Date: 12-2023
Amount: $417,440.00
Funder: Australian Research Council
Start Date: 2015
End Date: 12-2019
Amount: $847,700.00
Funder: Australian Research Council
Start Date: 2012
End Date: 12-2016
Amount: $320,000.00
Funder: Australian Research Council
Start Date: 2011
End Date: 12-2015
Amount: $450,000.00
Funder: Australian Research Council