ORCID Profile
0000-0002-6444-6584
Current Organisation
Macquarie University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Health Information Systems (Incl. Surveillance) | Decision Support And Group Support Systems | Simulation And Modelling | Information Systems | Other Artificial Intelligence | Information Systems Development Methodologies | Systems Theory | Public Health and Health Services | Artificial Intelligence and Image Processing | Infectious Diseases | Text Processing | Computer-Human Interaction | Business and Management | Human Bioethics | Public Policy | Sociology | Social Change | Innovation And Technology Management
Health and support services not elsewhere classified | Information processing services | Information services not elsewhere classified | Public health not elsewhere classified | Public services management | Application tools and system utilities | The professions and professionalisation | Management and productivity issues not elsewhere classified | Changing work patterns | Infectious diseases | Evaluation of health outcomes | Computer software and services not elsewhere classified
Publisher: Oxford University Press (OUP)
Date: 13-02-2017
DOI: 10.1093/JAMIA/OCW162
Publisher: JMIR Publications Inc.
Date: 22-01-2008
DOI: 10.2196/JMIR.963
Publisher: JMIR Publications Inc.
Date: 17-03-2019
Abstract: Tools used to appraise the credibility of health information are time-consuming to apply and require context-specific expertise, limiting their use for quickly identifying and mitigating the spread of misinformation as it emerges. The aim of this study was to estimate the proportion of vaccine-related Twitter posts linked to Web pages of low credibility and measure the potential reach of those posts. Sampling from 143,003 unique vaccine-related Web pages shared on Twitter between January 2017 and March 2018, we used a 7-point checklist adapted from validated tools and guidelines to manually appraise the credibility of 474 Web pages. These were used to train several classifiers (random forests, support vector machines, and recurrent neural networks) using the text from a Web page to predict whether the information satisfies each of the 7 criteria. Estimating the credibility of all other Web pages, we used the follower network to estimate potential exposures relative to a credibility score defined by the 7-point checklist. The best-performing classifiers were able to distinguish between low, medium, and high credibility with an accuracy of 78% and labeled low-credibility Web pages with a precision of over 96%. Across the set of unique Web pages, 11.86% (16,961 of 143,003) were estimated as low credibility, and they generated 9.34% (1.64 billion of 17.6 billion) of potential exposures. The 100 most popular links to low-credibility Web pages were each potentially seen by an estimated 2 million to 80 million Twitter users globally. The results indicate that although a small minority of low-credibility Web pages reach a large audience, low-credibility Web pages tend to reach fewer users than other Web pages overall and are more commonly shared within certain subpopulations. An automatic credibility appraisal tool may be useful for finding communities of users at higher risk of exposure to low-credibility vaccine communications.
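The text-classification stage described in this abstract can be sketched in a few lines. This is an illustrative toy only: the page texts and labels are invented, and a linear SVM over TF-IDF features stands in for the several classifier families the study compared; it is not the authors' pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented stand-ins for appraised web-page texts and credibility labels.
pages = [
    "systematic review of randomized controlled vaccine trials",
    "peer reviewed evidence on vaccine safety and efficacy",
    "miracle cure doctors hate this secret vaccine conspiracy",
    "shocking hidden truth about vaccines they never tell you",
]
labels = ["high", "high", "low", "low"]

# Fit a linear SVM on TF-IDF features of the page text.
clf = make_pipeline(TfidfVectorizer(), LinearSVC(random_state=0))
clf.fit(pages, labels)

# Classify an unseen page sharing vocabulary with the low-credibility pages.
pred = clf.predict(["conspiracy secret cure they never tell you"])[0]
```

In the study itself, one such classifier was trained per checklist criterion, and the per-criterion predictions were combined into a credibility score.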
Publisher: Humana Press
Date: 2008
DOI: 10.1007/978-1-60327-148-6_18
Abstract: There is a growing demand for tools that help clinicians utilize genomic results generated by molecular diagnostic and cytogenetic methods in support of their decision-making. This chapter reviews existing experience and methods for the design, implementation and evaluation of clinical bioinformatics electronic decision support systems (EDSS). It provides a roadmap for identifying decision tasks for automation and selecting optimal tools for building task-specific systems. Key success factors for EDSS implementation and evaluation are also outlined.
Publisher: BMJ
Date: 19-06-2015
Publisher: Springer New York
Date: 06-06-2014
Publisher: Routledge
Date: 04-12-2003
Publisher: Oxford University Press (OUP)
Date: 28-08-2009
DOI: 10.1197/JAMIA.M3023
Publisher: JMIR Publications Inc.
Date: 12-09-2018
Abstract: Technological interventions such as mobile apps, Web-based social networks, and wearable trackers have the potential to influence physical activity, yet only a few studies have examined the efficacy of an intervention bundle combining these different technologies. This study aimed to pilot test an intervention composed of a social networking mobile app, connected with a wearable tracker, and investigate its efficacy in improving physical activity, as well as explore participant engagement and the usability of the app. This was a pre-post quasi-experimental study with 1 arm, where participants were subjected to the intervention for a 6-month period. The primary outcome measure was the difference in daily step count between baseline and 6 months. Secondary outcome measures included engagement with the intervention and system usability. Descriptive and inferential statistical tests were conducted; post hoc subgroup analyses were carried out for participants with different levels of steps at baseline, app usage, and social features usage. A total of 55 participants were enrolled in the study; the mean age was 23.6 years and 28 (51%) were female. There was a nonstatistically significant increase in the average daily step count between baseline and 6 months (mean change=14.5 steps/day, P=.98, 95% CI –1136.5 to 1107.5). Subgroup analysis comparing the higher and lower physical activity groups at baseline showed that the latter had a statistically significantly higher increase in their daily step count (group difference in mean change from baseline to 6 months=3025 steps per day, P=.008, 95% CI 837.9-5211.8). At 6 months, the retention rate was 82% (45/55), and app usage decreased over time. The mean system usability score was 60.1 (SD 19.2). This study showed the preliminary efficacy of a mobile social networking intervention, integrated with a wearable tracker, to promote physical activity, particularly for less physically active subgroups of the population.
Future research should explore how to address challenges faced by physically inactive people in order to provide tailored advice. In addition, users' perspectives should be explored to shed light on factors that might influence their engagement with the intervention.
Publisher: Elsevier BV
Date: 12-2017
DOI: 10.1016/J.VACCINE.2017.04.060
Abstract: Together with access, acceptance of vaccines affects human papillomavirus (HPV) vaccine coverage, yet little is known about the media's role. Our aim was to determine whether measures of information exposure derived from Twitter could be used to explain differences in coverage in the United States. We conducted an analysis of exposure to information about HPV vaccines on Twitter, derived from 273.8 million exposures to 258,418 tweets posted between 1 October 2013 and 30 October 2015. Tweets were classified by topic using machine learning methods. Proportional exposure to each topic was used to construct multivariable models for predicting state-level HPV vaccine coverage, and compared to multivariable models constructed using socioeconomic factors: poverty, education, and insurance. Outcome measures included correlations between coverage and the individual topics and socioeconomic factors, and differences in the predictive performance of the multivariable models. Topics corresponding to media controversies were most closely correlated with coverage (both positively and negatively); education and insurance were highest among socioeconomic indicators. Measures of information exposure explained 68% of the variance in one-dose 2015 HPV vaccine coverage in females (males: 63%). In comparison, models based on socioeconomic factors explained 42% of the variance in females (males: 40%). Measures of information exposure derived from Twitter explained differences in coverage that were not explained by socioeconomic factors. Vaccine coverage was lower in states where safety concerns, misinformation, and conspiracies made up higher proportions of exposures, suggesting that negative representations of vaccines in the media may reflect or influence vaccine acceptance.
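The "variance explained" comparison above is the R² of two multivariable linear models fitted to the same outcome. A minimal sketch, on entirely synthetic data (one row per hypothetical state; the feature names and coefficients are assumptions, not the study's values):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50                                   # one row per hypothetical state
exposure = rng.normal(size=(n, 3))       # proportional topic exposures (toy)
# Synthetic coverage driven by the exposure features plus noise.
coverage = 0.5 * exposure[:, 0] - 0.3 * exposure[:, 1] \
    + rng.normal(scale=0.2, size=n)
socio = rng.normal(size=(n, 3))          # unrelated covariates in this toy

# R^2 (proportion of variance explained) of each multivariable model.
r2_exposure = LinearRegression().fit(exposure, coverage).score(exposure, coverage)
r2_socio = LinearRegression().fit(socio, coverage).score(socio, coverage)
```

Because the synthetic outcome depends on the exposure features and not on the socioeconomic stand-ins, the first model explains far more variance, mirroring the shape (not the numbers) of the study's comparison.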
Publisher: JMIR Publications Inc.
Date: 30-06-2010
DOI: 10.2196/JMIR.1396
Publisher: Elsevier BV
Date: 07-2002
DOI: 10.1016/S1386-6532(00)00182-7
Abstract: Neuraminidase (NA) inhibitors have recently become available for treatment of influenza. Rapid antigen detection assays at 'point-of-care' may improve the accuracy of clinical diagnosis, but the value of these techniques in assisting with the appropriate use of antivirals remains controversial. The aim was to compare the diagnostic utilities of two management strategies for influenza, empirical antiviral therapy versus therapy based on a positive rapid test result, in pre-epidemic and epidemic periods. A threshold decision analytic model was designed to compare these competing strategies, and sensitivity analysis was performed to examine the impact of diagnostic variables on the expected utility of the decision over a range of prior probabilities of infection between 1 and 50%. On the basis of the calculated sensitivity (77%) and specificity (95%) of a point-of-care test for influenza, pre-treatment testing was preferred and cost-effective in the non-epidemic stage of the influenza cycle. The alternative strategy of empirical treatment produces a higher utility value during epidemics, but may result in overuse of antivirals for low-risk populations. The two strategies had equivalent efficacy when the probability of influenza was 42%. Patients with flu-like illness, who present outside the influenza outbreak and are considered to be at low risk for influenza-related complications, should be tested to confirm the diagnosis before starting antiviral treatment with a NA inhibitor. The most important variables in the model were the accuracy of the clinical diagnosis and the pre-test probability of influenza. A threshold probability of influenza of 42% would dictate changing from the rapid testing strategy to a 'treat regardless' strategy.
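The threshold logic can be worked through numerically. The sensitivity and specificity below are the abstract's values; the treatment benefit and cost utilities are illustrative assumptions chosen here so the crossover lands near the reported 42%, not values taken from the paper.

```python
SE, SP = 0.77, 0.95    # point-of-care test sensitivity / specificity (from abstract)
BENEFIT = 1.0          # utility gain from treating true influenza (assumed)
COST = 0.175           # utility loss from treating a non-case (assumed)

def eu_empirical(p):
    """Expected utility of treating everyone at prior probability p."""
    return p * BENEFIT - (1 - p) * COST

def eu_test_first(p):
    """Expected utility of treating only test-positive patients."""
    return p * SE * BENEFIT - (1 - p) * (1 - SP) * COST

# Scan prior probabilities to find where the preferred strategy switches
# from "test first" to "treat regardless".
threshold = next(p / 100 for p in range(1, 100)
                 if eu_empirical(p / 100) >= eu_test_first(p / 100))
```

Below the threshold, the test's specificity avoids enough unnecessary treatment to outweigh the cases its imperfect sensitivity misses; above it, empirical treatment dominates.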
Publisher: JMIR Publications Inc.
Date: 22-04-2014
DOI: 10.2196/JMIR.3331
Publisher: BMJ
Date: 07-2017
Publisher: SAGE Publications
Date: 05-2021
Abstract: Massive transfusions guided by massive transfusion protocols are commonly used to manage critical bleeding, when the patient is at significant risk of morbidity and mortality, and multiple timely decisions must be made by clinicians. Clinical decision support systems are increasingly used to provide patient-specific recommendations by comparing patient information to a knowledge base, and have been shown to improve patient outcomes. To investigate current massive transfusion practice and the experiences and attitudes of anaesthetists towards massive transfusion and clinical decision support systems, we anonymously surveyed 1000 anaesthetists and anaesthesia trainees across Australia and New Zealand. A total of 228 surveys (23.6%) were successfully completed and 227 were analysed for a 23.3% response rate. Most respondents were involved in massive transfusions infrequently (88.1% managed five or fewer massive transfusion protocols per year) and worked at hospitals which have massive transfusion protocols (89.4%). Massive transfusion management was predominantly limited by timely access to point-of-care coagulation assessment and by competition with other tasks, with trainees reporting more significant limitations compared to specialists. The majority of respondents reported that they were likely, or very likely, both to use (73.1%) and to trust (85%) a clinical decision support system for massive transfusions, with no significant difference between anaesthesia trainees and specialists (P = 0.375 and P = 0.73, respectively). While the response rate to our survey was poor, there was still a wide range of massive transfusion experience among respondents, with multiple subjective factors identified limiting massive transfusion practice.
We identified several potential design features and barriers to implementation to assist with the future development of a clinical decision support system for massive transfusion, and overall wide support for a clinical decision support system for massive transfusion among respondents.
Publisher: JMIR Publications Inc.
Date: 29-08-2016
DOI: 10.2196/JMIR.6045
Abstract: In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in the context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. We analyzed 285,417 tweets about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences.
Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines.
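The core pairing above (Louvain community detection plus a per-community topic concentration measure) can be sketched on a toy graph. Everything here is invented for illustration: two triangles joined by one bridge edge stand in for the follower network, and the node-to-topic map stands in for the topic model's output.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Two tightly knit triangles joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # group A
                  (3, 4), (4, 5), (3, 5),   # group B
                  (2, 3)])                  # bridge between the groups
# Invented dominant topic per user.
topic = {0: "evidence", 1: "evidence", 2: "evidence",
         3: "experience", 4: "experience", 5: "experience"}

# Detect communities from the social connections alone.
communities = louvain_communities(G, seed=0)

# Alignment: largest fraction of "experience" users found in one community.
best = max(sum(topic[u] == "experience" for u in c) for c in communities)
concentration = best / 3
```

When a topic's users are concentrated in a single detected community, as here, the alignment measure is high; scattered topics produce low concentration.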
Publisher: Springer Science and Business Media LLC
Date: 22-05-2017
Publisher: BCS Learning & Development
Date: 07-2018
Publisher: Elsevier BV
Date: 05-2012
DOI: 10.1016/J.JCLINEPI.2011.10.010
Abstract: To measure the relative influence that industry authors have on collaborative research communities and evidence production. Using 22 commonly prescribed drugs, 6,711 randomized controlled trials (RCTs), and 28,104 authors, 22 collaboration networks were constructed and analyzed. The directly industry-affiliated (DIA) authors were identified in the networks according to their published affiliations. Measures of influence (network centrality) and impact (citations) were determined for every author. Network-level measures of community structure and collaborative preference were used to further characterize the groups. Six percent (1,741 of 28,104) of authors listed a direct affiliation with the manufacturer of a drug evaluated in the RCT. These authors received significantly more citations (P<0.05 in 19 networks) and were significantly more central in the networks (P<0.05 in 20 networks). The networks show that DIA authors tend to have greater reach in the networks and collaborate more often with non-DIA authors despite a preference toward their own group. Potential confounders include publication bias, trial sizes, and conclusions. Industry-based authors are more central in their networks and are deeply embedded within highly connected drug research communities. As a consequence, they have the potential to influence information flow in the production of evidence.
Publisher: Routledge
Date: 04-12-2003
Publisher: Elsevier BV
Date: 05-2018
DOI: 10.1016/J.IJMEDINF.2018.02.011
Abstract: To conduct a usability study exploring the value of using speech recognition (SR) for clinical documentation tasks within an electronic health record (EHR) system. Thirty-five emergency department clinicians completed a System Usability Scale (SUS) questionnaire. The study was undertaken after participants undertook randomly allocated clinical documentation tasks using keyboard and mouse (KBM) or SR. SUS scores were analyzed and the results with KBM were compared to SR results. A significant difference in SUS scores between EHR system use with and without SR was observed (KBM 67, SR 61; P = 0.045; CI 0.1 to 12.0). Nineteen of 35 participants scored higher for EHR with KBM, 11 higher for EHR with SR, and 5 gave the same score for both. Factor analysis showed no significant difference in scores for the sub-element of usability (EHR with KBM 65, EHR with SR 62; P = 0.255; CI -2.6 to 9.5). Scores for the sub-element of learnability were significantly different (KBM 72, SR 55; P < 0.001; CI 9.8 to 23.5). A significant correlation was found between the perceived usability of the two system configurations (EHR with KBM or SR) and the efficiency of documentation (time to document) (P = 0.002; CI 10.5 to -0.1) but not with safety (number of errors) (P = 0.90; CI -2.3 to 2.6). SR was associated with significantly reduced overall usability scores, even though it is often positioned as an ease-of-use technology. SR was perceived to impose larger costs in terms of learnability via training and support requirements for EHR-based documentation when compared to using KBM. Lower usability scores were significantly associated with longer documentation times. The usability of EHR systems with any input modality is an area that requires continued development. The addition of an SR component to an EHR system may cause a significant reduction in terms of perceived usability by clinicians.
Publisher: MDPI AG
Date: 15-11-2021
DOI: 10.3390/INFO12110471
Abstract: Automatic severity assessment and progression prediction can facilitate admission, triage, and referral of COVID-19 patients. This study aims to explore the potential use of lung lesion features in the management of COVID-19, based on the assumption that lesion features may carry important diagnostic and prognostic information for quantifying infection severity and forecasting disease progression. A novel LesionEncoder framework is proposed to detect lesions in chest CT scans and to encode lesion features for automatic severity assessment and progression prediction. The LesionEncoder framework consists of a U-Net module for detecting lesions and extracting features from individual CT slices, and a recurrent neural network (RNN) module for learning the relationship between feature vectors and collectively classifying the sequence of feature vectors. Chest CT scans of two cohorts of COVID-19 patients from two hospitals in China were used for training and testing the proposed framework. When applied to assessing severity, this framework outperformed baseline methods, achieving a sensitivity of 0.818, specificity of 0.952, accuracy of 0.940, and AUC of 0.903. It also outperformed the other tested methods in disease progression prediction, with a sensitivity of 0.667, specificity of 0.838, accuracy of 0.829, and AUC of 0.736. The LesionEncoder framework demonstrates a strong potential for clinical application in current COVID-19 management, particularly in automatic severity assessment of COVID-19 patients. This framework also has potential for other lesion-focused medical image analyses.
Publisher: SAGE Publications
Date: 03-2005
Abstract: Objective. To examine the impact of online evidence retrieval on clinicians’ decision-making confidence and to determine if this differs for experienced doctors and nurses. Methods. A sample of 44 doctors and 31 clinical nurse consultants (CNCs) answered 8 clinical scenarios (600 scenario answers) before and after the use of online evidence resources. Clinicians rated their confidence in scenario answers and in the evidence they found using the information system. Results. Prior to using online evidence, 37% of doctors and 18% of CNCs answered the scenarios correctly. These clinicians were more confident (56% very confident or confident) in their answers than those with incorrect (34%) answers. Doctors with incorrect answers prior to searching rated their confidence significantly higher than did nurses who were incorrect. After searching, both groups answered 50% of scenarios correctly. Clinicians with correct answers had greater confidence in the evidence found compared to those with incorrect answers. Doctors were more confident in evidence found confirming an initially correct answer than were nurses. More than 50% of clinicians who persisted with an incorrect answer after searching reported that they were confident or very confident in the evidence found. Clinicians who did not know scenario answers before searching placed equal confidence in evidence that led them to a correct or incorrect answer. Conclusions. The information obtained from an online evidence system influenced clinicians’ confidence in their answers to the clinical scenarios. The relationship between confidence in answers and correctness is complex. Both existing knowledge and professional role were mediating factors. The finding that many clinicians placed confidence in information that led them to incorrect answers warrants further investigation.
Publisher: JMIR Publications Inc.
Date: 11-10-2019
Abstract: Having patients self-manage their health conditions is a widely promoted concept, but many patients struggle to practice it effectively. Moreover, few studies have analyzed the nature of work required from patients and how such work fits into the context of their daily life. This study aimed to review the characteristics of patient work in adult patients. Patient work refers to tasks that health conditions impose on patients (eg, taking medications) within a system of contextual factors. A systematic scoping review was conducted using narrative synthesis. Data were extracted from PubMed, Excerpta Medica database (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), and PsycINFO, including studies from August 2013 to August 2018. The included studies focused on adult patients and assessed one or more of the following: (1) physical health–related tasks, (2) cognitive health–related tasks, or (3) contextual factors affecting these tasks. Tasks were categorized according to the themes that emerged: (1) whether the task is always visible to others or can be cognitive, (2) whether the task must be conducted collaboratively or can be conducted alone, and (3) whether the task was done with the purpose of creating resources. Contextual factors were grouped according to the level at which they exert influence (micro, meso, or macro) and where they fit in the patient work system (the macroergonomic layer of physical, social, and organizational factors; the mesoergonomic layer of household and community; and the microergonomic triad of person-task-tools). In total, 67 publications were included, with 58 original research articles and 9 review articles. A variety of patient work tasks were observed, ranging from physical and tangible tasks (such as taking medications and visiting health care professionals) to psychological and social tasks (such as creating coping strategies).
Patient work was affected by a range of contextual factors at the micro, meso, and macro levels. Our results indicate that most patient work was done alone, in private, and often imposed a cognitive burden with low amounts of support. This review sought to provide insight into the work burden of health management from a patient perspective and how patient context influences such work. For many patients, health-related work is ever present, invisible, and overwhelming. When researchers and clinicians design and implement patient-facing interventions, it is important to understand how the extra work impacts one’s internal state and coping strategy, how such work fits into daily routines, and whether these changes could be maintained in the long term.
Publisher: BMJ
Date: 10-01-2013
DOI: 10.1136/BMJ.F139
Publisher: Oxford University Press (OUP)
Date: 18-06-2020
Abstract: The study sought to evaluate the feasibility of using Unified Medical Language System (UMLS) semantic features for automated identification of reports about patient safety incidents by type and severity. Binary support vector machine (SVM) classifier ensembles were trained and validated using balanced datasets of critical incident report texts (n_type = 2860, n_severity = 1160) collected from a state-wide reporting system. Generalizability was evaluated on a different and independent hospital-level reporting system. Concepts were extracted from report narratives using the UMLS Metathesaurus, and their relevance and frequency were used as semantic features. Performance was evaluated by F-score, Hamming loss, and exact match score and was compared with SVM ensembles using bag-of-words (BOW) features on 3 testing datasets (type/severity: n_benchmark = 286/116, n_original = 444/4837, n_independent = 6000/5950). SVMs using semantic features met or outperformed those based on BOW features in identifying 10 different incident types (F-score [semantics/BOW]: benchmark = 82.6%/69.4%; original = 77.9%/68.8%; independent = 78.0%/67.4%) and extreme-risk events (F-score [semantics/BOW]: benchmark = 87.3%/87.3%; original = 25.5%/19.8%; independent = 49.6%/52.7%). For incident type, the exact match score for semantic classifiers was consistently higher than BOW across all test datasets (exact match [semantics/BOW]: benchmark = 48.9%/39.9%; original = 57.9%/44.4%; independent = 59.5%/34.9%). BOW representations are not ideal for the automated identification of incident reports because they do not account for text semantics. UMLS semantic representations are likely to better capture information in report narratives, and thus may explain their superior performance. UMLS-based semantic classifiers were effective in identifying incidents by type and extreme-risk events, providing better generalizability than classifiers using BOW.
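The BOW baseline that the study compares against can be sketched as an SVM over word-count features with F-score evaluation. The incident reports and labels below are invented examples, and training-set F-score is used only to keep the sketch self-contained; the study used held-out test sets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented incident-report texts and their incident-type labels.
reports = [
    "patient fell from bed during transfer",
    "fall near bathroom unwitnessed by staff",
    "wrong dose of medication administered",
    "medication omitted at scheduled time",
]
incident_type = ["fall", "fall", "medication", "medication"]

# Bag-of-words features feeding a linear SVM.
clf = make_pipeline(CountVectorizer(), LinearSVC(random_state=0))
clf.fit(reports, incident_type)

pred = clf.predict(["patient slipped and fell in bathroom"])[0]
f1 = f1_score(incident_type, clf.predict(reports), average="micro")
```

The study's semantic variant swaps the word counts for UMLS Metathesaurus concept relevance and frequency; the classifier and evaluation machinery stay the same.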
Publisher: Springer Science and Business Media LLC
Date: 29-07-2021
Publisher: BMJ
Date: 12-2018
DOI: 10.1136/BMJOPEN-2018-022163
Abstract: Self-management is widely promoted, but less attention is focused on the work required from patients. To date, many individuals struggle to practise self-management. ‘Patient work’, a concept that examines the ‘work’ involved in self-management, is an approach to understanding the tasks, effort, time and context from the patient perspective. The purpose of our study is to use a novel approach combining non-obstructive observations via digital devices with in-depth qualitative data about health behaviours and motivations, to capture the full range of patient work experienced by people with type 2 diabetes and chronic comorbidities. It aims to yield comprehensive insights about ‘what works’ in self-management, potentially extending to populations with other chronic health conditions. This mixed-methods observational study involves (1) a prestudy interview and questionnaires, (2) a 24-hour period during which participants wear a camera and complete a time-use diary, and (3) a poststudy interview and study feedback. Adult participants living with type 2 diabetes and at least one chronic comorbidity will be recruited using purposive sampling to obtain a balanced gender ratio and a balance between participants using insulin and those using only oral medication. Interviews will be analysed using thematic analysis. Data captured by digital devices, diaries and questionnaires will be used to analyse the duration, time, context and patterns of health-related behaviours. The study was approved by the Macquarie University Human Research Ethics Committee for Medical Sciences (reference number 5201700718). Participants will carry a wallet-sized card that explains the purpose of the study to third parties, and can remove the camera at any stage. Before the poststudy interview begins, participants will view the camera images in private and can delete any images.
Should any images be used in future publications or presentations, identifying features such as human faces and names will be obscured.
Publisher: Elsevier BV
Date: 04-2022
DOI: 10.1016/J.ARTMED.2022.102261
Abstract: Fundus images have been widely used in routine examinations of ophthalmic diseases. For some diseases, the pathological changes mainly occur around the optic disc area; therefore, detection and segmentation of the optic disc are critical pre-processing steps in fundus image analysis. Current machine learning based optic disc segmentation methods typically require manual segmentation of the optic disc for supervised training. However, it is time-consuming to annotate pixel-level optic disc masks, and doing so inevitably induces inter-subject variance. To address these limitations, we propose a weak-label-based Bayesian U-Net exploiting Hough transform based annotations to segment optic discs in fundus images. To achieve this, we build a probabilistic graphical model and explore a Bayesian approach with the state-of-the-art U-Net framework. To optimize the model, the expectation-maximization algorithm is used to estimate the optic disc mask and update the weights of the Bayesian U-Net, alternately. Our evaluation demonstrates strong performance of the proposed method compared to both fully- and weakly-supervised baselines.
Publisher: Elsevier BV
Date: 09-2006
Publisher: BMJ
Date: 19-08-2021
DOI: 10.1136/BMJEBM-2020-111379
Abstract: From its origins in epidemiology, evidence-based medicine has promulgated a rigorous approach to assessing the validity, impact and applicability of hypothesis-driven empirical research used to evaluate the utility of diagnostic tests, prognostic tools and therapeutic interventions. Machine learning, a subset of artificial intelligence, uses computer programs to discover patterns and associations within huge datasets which are then incorporated into algorithms used to assist diagnoses and predict future outcomes, including response to therapies. How do these two fields relate to one another? What are their similarities and differences, their strengths and weaknesses? Can each learn from, and complement, the other in rendering clinical decision-making more informed and effective?
Publisher: JMIR Publications Inc.
Date: 06-06-2017
Abstract: Translating research into practice, especially the implementation of digital health technologies in routine care, is increasingly important. Yet there are few studies examining the challenges of implementing patient-facing digital technologies in health care settings. The aim of this study was to report challenges experienced when implementing mobile apps for patients to support their postsurgical rehabilitation in an orthopedic setting. A mobile app was tailored to the needs of patients undergoing rotator cuff repair. A 30-min usability session and a 12-week feasibility study were conducted with patients to evaluate the app in routine care. Implementation records (observation reports, issues log, and email correspondence) explored factors that hindered or facilitated patient acceptance. Interviews with clinicians explored factors that influenced app integration in routine care. Participant completion was low (47%, 9/19). Factors that affected patient acceptance included digital literacy, health status, information technology (IT) infrastructure at home, privacy concerns, time limitations, the role of a caregiver, inconsistencies in instruction received from clinicians and the app, and app advice not reflective of patient progress over time. Factors that negatively influenced app integration in routine care included competing demands among clinicians, IT infrastructure in health care settings, identifying the right time to introduce the app to patients, user interface complexity for older patients, lack of coordination among multidisciplinary clinicians, and technical issues with app installation.
Three insights were identified for mobile app implementation in routine care: (1) apps for patients need to reflect their journey over time; in particular, postoperative apps ought to be introduced as part of preoperative care, with opportunities for patients to learn and adopt the app during their postoperative journey; (2) strategies to address digital literacy issues among patients and clinicians are essential; and (3) the impact of the app on patient outcomes and clinician workflow needs to be communicated, monitored, and reviewed. Lastly, digital health interventions should supplement but not replace patient interaction with clinicians.
Publisher: BMJ
Date: 09-2015
Publisher: Springer Science and Business Media LLC
Date: 07-05-2020
DOI: 10.1038/S41598-020-64588-Y
Abstract: Mutations in isocitrate dehydrogenase genes IDH1 and IDH2 are frequently found in diffuse and anaplastic astrocytic and oligodendroglial tumours as well as in secondary glioblastomas. As IDH is a very important prognostic, diagnostic and therapeutic biomarker for glioma, it is of paramount importance to determine its mutational status. Haematoxylin and eosin (H&E) staining is a valuable tool in precision oncology as it guides histopathology-based diagnosis and the patient’s subsequent treatment. However, H&E staining alone does not determine the IDH mutational status of a tumour. Deep learning methods applied to MRI data have been demonstrated to be a useful tool in IDH status prediction; however, the effectiveness of deep learning on H&E slides in the clinical setting has not been investigated so far. Furthermore, the performance of deep learning methods in medical imaging has been practically limited by the small sample sizes currently available. Here we propose a data augmentation method based on the Generative Adversarial Network (GAN) deep learning methodology to improve the prediction performance of IDH mutational status using H&E slides. The H&E slides were acquired from 266 grade II–IV glioma patients from a mixture of public and private databases, including 130 IDH-wildtype and 136 IDH-mutant patients. A baseline deep learning model without data augmentation achieved an accuracy of 0.794 (AUC = 0.920). With GAN-based data augmentation, the accuracy of IDH mutational status prediction improved to 0.853 (AUC = 0.927) when 3,000 GAN-generated training samples were added to the original training set (24,000 samples). By also integrating patients’ age into the model, the accuracy improved further to 0.882 (AUC = 0.931). Our findings show that deep learning methodology, enhanced by GAN data augmentation, can support physicians in IDH status prediction for gliomas.
Publisher: American Public Health Association
Date: 10-2020
Abstract: Objectives. To examine the role that bots play in spreading vaccine information on Twitter by measuring exposure and engagement among active users from the United States. Methods. We sampled 53 188 US Twitter users and examined who they follow and retweet across 21 million vaccine-related tweets (January 12, 2017–December 3, 2019). Our analyses compared bots to human-operated accounts and vaccine-critical tweets to other vaccine-related tweets. Results. The median number of potential exposures to vaccine-related tweets per user was 757 (interquartile range [IQR] = 168–4435), of which 27 (IQR = 6–169) were vaccine critical, and 0 (IQR = 0–12) originated from bots. We found that 36.7% of users retweeted vaccine-related content, 4.5% retweeted vaccine-critical content, and 2.1% retweeted vaccine content from bots. Compared with other users, the 5.8% for whom vaccine-critical tweets made up most exposures more often retweeted vaccine content (62.9%; odds ratio [OR] = 2.9; 95% confidence interval [CI] = 2.7, 3.1), vaccine-critical content (35.0%; OR = 19.0; 95% CI = 17.3, 20.9), and bots (8.8%; OR = 5.4; 95% CI = 4.7, 6.3). Conclusions. A small proportion of vaccine-critical information that reaches active US Twitter users comes from bots.
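The odds ratios quoted above can be reproduced from a standard 2×2 contingency table. A minimal sketch with hypothetical counts (for illustration only, not the study's data), using the usual log-odds-ratio normal approximation for the 95% CI:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts chosen only to illustrate the calculation
or_, lo, hi = odds_ratio_ci(629, 371, 367, 633)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

With the study's actual counts in place of the hypothetical ones, the same formula yields the reported ORs and intervals.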
Publisher: Elsevier BV
Date: 08-1992
Publisher: JMIR Publications Inc.
Date: 13-08-2013
DOI: 10.2196/RESPROT.2695
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 09-2021
Publisher: Becaris Publishing Limited
Date: 05-2015
DOI: 10.2217/CER.15.12
Abstract: When providing care, clinicians are expected to take note of clinical practice guidelines, which offer recommendations based on the available evidence. However, guidelines may not apply to individual patients with comorbidities, as they are typically excluded from clinical trials. Guidelines also tend not to provide relevant evidence on risks, secondary effects and long-term outcomes. Querying the electronic health records of similar patients may, for many, provide an alternative source of evidence to inform decision-making. It is important to develop methods to support these personalized observational studies at the point of care, to understand when these methods may provide valid results, and to validate and integrate these findings with those from clinical trials.
Publisher: Oxford University Press (OUP)
Date: 11-2010
Publisher: BMJ
Date: 04-2009
Publisher: JMIR Publications Inc.
Date: 18-09-2019
Abstract: Although much effort is focused on improving the technical performance of artificial intelligence, there are compelling reasons to focus more on implementing this class of technology to solve real-world problems. In this “last mile” of implementation lie many complex challenges that may make technically high-performing systems perform poorly. Instead of viewing artificial intelligence development as a linear process of algorithm development through to eventual deployment, there are strong reasons to take a more agile approach, iteratively developing and testing artificial intelligence within the context in which it will finally be used.
Publisher: JMIR Publications Inc.
Date: 17-06-2019
DOI: 10.2196/10896
Publisher: Elsevier BV
Date: 10-2005
DOI: 10.1016/J.IJMEDINF.2005.03.017
Abstract: An exploratory study to examine interruptive communication patterns of healthcare staff within an intensive care unit (ICU) during ward rounds. The study was conducted in a tertiary hospital in Sydney, Australia. Nine participants were observed individually, for a total of 24 h, using the communication observation method (COM). The amount of time spent in conversation, the number of conversation-initiating interruptions and the number of turn-taking interruptions were recorded. Participants averaged 75% [95% confidence interval 72.8–77.2] of their time in communication events during ward rounds. There were 345 conversation-initiating interruptions (C.I.I.) and 492 turn-taking interruptions (T.T.I.). C.I.I. accounted for 37% [95% CI 33.9–40.1] of total communication event time (5 h 53 min). T.T.I. accounted for 5.3% of total communication event time (56 min). This is the first study to specifically examine turn-taking interruptions in a clinical setting. Staff in this intensive care unit spent the majority of their time in communication. Turn-taking interruptions within conversations occurred at about the same frequency as conversation-initiating interruptions, which have been the subject of earlier studies. These results suggest that the overall burden of interruptions in some settings may be significantly higher than previously suspected.
Publisher: Elsevier BV
Date: 06-2007
DOI: 10.1016/J.IJMEDINF.2006.05.026
Abstract: Socio-technical systems (STS) analysis has provided us with a powerful framework with which to analyse the reasons behind the poor acceptability, uptake and performance of many information and communication technology (ICT) systems. However, for the contribution of STS thinking to be more than simply a means of critiquing current practices and ICT systems, it needs to also contribute to the process of developing new and more effective ICT systems. Specifically, we need to develop a formal design language for translating our insights about the socio-technical nature of work into design specifications that result in better interventions in the workplace. We need to get 'technical' about what we mean and about what we want from a design, and we need to work alongside technologists to shape technology, as well as the processes, organisations and cultures within which they will be embedded. Indeed, the process of design itself can be seen as a socio-technical one, and understanding the decision to design itself may allow us one day to stop designing for people, and create STS that sustainably design themselves.
Publisher: Elsevier BV
Date: 08-1990
Publisher: BMJ
Date: 2012
Publisher: Oxford University Press (OUP)
Date: 12-08-2022
Abstract: Climate change poses a major threat to the operation of global health systems, triggering large-scale health events, and disrupting normal system operation. Digital health may have a role in the management of such challenges and in greenhouse gas emission reduction. This scoping review explores recent work on digital health responses and mitigation approaches to climate change. We searched Medline up to February 11, 2022, using terms for digital health and climate change. Included articles were categorized into 3 application domains (mitigation, infectious disease, or environmental health risk management), and 6 technical tasks (data sensing, monitoring, electronic data capture, modeling, decision support, and communication). The review was PRISMA-ScR compliant. The 142 included publications reported a wide variety of research designs. Publication numbers have grown substantially in recent years, but few come from low- and middle-income countries. Digital health has the potential to reduce health system greenhouse gas emissions, for example by shifting to virtual services. It can assist in managing changing patterns of infectious diseases as well as environmental health events by timely detection, reducing exposure to risk factors, and facilitating the delivery of care to under-resourced areas. While digital health has real potential to help in managing climate change, research remains preliminary with little real-world evaluation. Significant acceleration in the quality and quantity of digital health climate change research is urgently needed, given the enormity of the global challenge.
Publisher: JMIR Publications Inc.
Date: 28-06-2018
Abstract: Despite the many health benefits of physical activity, nearly a third of the world’s adult population is insufficiently active. Technological interventions, such as mobile apps, wearable trackers, and Web-based social networks, offer great promise in promoting physical activity, but little is known about users’ acceptability and long-term engagement with these interventions. The aim of this study was to understand users’ perspectives regarding a mobile social networking intervention to promote physical activity. Participants, mostly university students and staff, were recruited using purposive sampling techniques. Participants were enrolled in a 6-month feasibility study where they were provided with a wearable physical activity tracker (Fitbit Flex 2) and a wireless scale (Fitbit Aria) integrated with a social networking mobile app (named “fit.healthy.me”). We conducted semistructured, in-depth qualitative interviews and focus groups pre- and postintervention, which were recorded and transcribed verbatim. The data were analyzed in Nvivo 11 using thematic analysis techniques. In this study, 55 participants were enrolled; 51% (28/55) were female, and the mean age was 23.6 (SD 4.6) years. The following 3 types of factors emerged from the data as influencing engagement with the intervention and physical activity: individual (self-monitoring of behavior, goal setting, and feedback on behavior), social (social comparison, similarity and familiarity between users, and participation from other users in the network), and technological. In addition, automation and personalization were observed as enhancing the delivery of both individual and social aspects. Technological limitations were mentioned as potential barriers to long-term usage. Self-regulatory techniques and social factors are important to consider when designing a physical activity intervention, but a one-size-fits-all approach is unlikely to satisfy different users’ preferences.
Future research should adopt innovative research designs to test interventions that can adapt and respond to users’ needs and preferences over time.
Publisher: Elsevier BV
Date: 02-2009
DOI: 10.1016/J.JBI.2008.07.002
Abstract: Developments in molecular fingerprinting of pathogens with epidemic potential have offered new opportunities for improving detection and monitoring of biothreats. However, the lack of scalable definitions for infectious disease clustering presents a barrier for effective use and evaluation of new data types for early warning systems. A novel working definition of an outbreak based on temporal and spatial clustering of molecular genotypes is introduced in this paper. It provides an unambiguous way of clustering of causative pathogens and is adjustable to local disease prevalence and availability of public health resources. The performance of this definition in prospective surveillance is assessed in the context of community outbreaks of food-borne salmonellosis. Molecular fingerprinting augmented with the scalable clustering allows the detection of more than 50% of the potential outbreaks before they reach the midpoint of the cluster duration. Clustering in time by imposing restrictions on intervals between collection dates results in a smaller number of outbreaks but does not significantly affect the timeliness of detection. Clustering in space and time by imposing restrictions on the spatial and temporal distance between cases results in a further reduction in the number of outbreaks and decreases the overall efficiency of prospective detection. Innovative bacterial genotyping technologies can enhance early warning systems for public health by aiding the detection of moderate and small epidemics.
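The temporal component of such a clustering definition can be sketched as grouping isolates that share a genotype and whose collection dates fall within a fixed interval. This is a simplified illustration with hypothetical data and a hypothetical `max_gap_days` threshold, not the paper's exact definition (which is also adjustable to local prevalence and public health resources):

```python
from datetime import date

def temporal_clusters(isolates, max_gap_days=14):
    """Group isolates sharing a genotype into clusters in which
    consecutive collection dates are at most max_gap_days apart."""
    by_genotype = {}
    for genotype, collected in isolates:
        by_genotype.setdefault(genotype, []).append(collected)
    clusters = []
    for genotype, dates in by_genotype.items():
        dates.sort()
        current = [dates[0]]
        for d in dates[1:]:
            if (d - current[-1]).days <= max_gap_days:
                current.append(d)
            else:
                clusters.append((genotype, current))
                current = [d]
        clusters.append((genotype, current))
    return clusters

# Hypothetical isolates: (genotype fingerprint, collection date)
isolates = [
    ("MLVA-01", date(2024, 1, 3)), ("MLVA-01", date(2024, 1, 10)),
    ("MLVA-01", date(2024, 3, 1)), ("MLVA-02", date(2024, 1, 5)),
]
print(temporal_clusters(isolates))
```

Tightening `max_gap_days` corresponds to the abstract's observation that restricting intervals between collection dates yields fewer, smaller clusters.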
Publisher: Springer International Publishing
Date: 2016
Publisher: Springer Science and Business Media LLC
Date: 09-07-2014
Publisher: Georg Thieme Verlag KG
Date: 2011
DOI: 10.4338/ACI-2011-01-RA-0006
Abstract: Objective: To investigate whether the strength of social feedback, i.e. other people who concur (or do not concur) with one’s own answer to a question, influences the way one answers health questions. Methods: Online prospective study. Two hundred and twenty-seven undergraduate students were recruited to use an online search engine to answer six health questions. Subjects recorded their pre- and post-search answers to each question and their level of confidence in these answers. After answering each question post-search, subjects were presented with a summary of post-search answers provided by previous subjects and were asked to answer the question again. Results: There was a statistically significant relationship between the absolute number of others with a different answer (the crowd’s opinion volume) and the likelihood of an individual changing an answer (P < .001). For most questions, no subjects changed their answer until the first 10–35 subjects completed the study. Subjects’ likelihood of changing answer increased as the percentage of others with a different answer (the crowd’s opinion density) increased (P = 0.047). Overall, 98.3% of subjects did not change their answer when it concurred with the majority (i.e. %) of subjects, and 25.7% of subjects changed their answer to the majority response when it did not concur with the majority. When subjects had a post-search answer that did not concur with the majority, they were 24% more likely to change answer than those with answers that concurred (P < .001). Conclusion: This study provides empirical evidence that crowd influence, in the form of online social feedback, affects the way consumers answer health questions.
Publisher: Becaris Publishing Limited
Date: 07-2014
DOI: 10.2217/CER.14.31
Publisher: American Medical Association (AMA)
Date: 24-09-2012
Publisher: Becaris Publishing Limited
Date: 11-2013
DOI: 10.2217/CER.13.65
Publisher: Public Library of Science (PLoS)
Date: 22-03-2023
Publisher: JMIR Publications Inc.
Date: 24-10-2005
DOI: 10.2196/JMIR.7.5.E52
Publisher: Elsevier BV
Date: 08-2011
Publisher: Springer Science and Business Media LLC
Date: 29-02-2012
Abstract: Although it is well established that funding source influences the publication of clinical trials, relatively little is known about how funding influences trial design. We examined a public trial registry to determine how funding source shapes trial design among trials involving antihyperlipidemics. We used an automated process to identify and analyze 809 trials from a set of 72,564. Three networks representing industry-, collaboratively-, and non-industry-funded trials were constructed. Each network comprised 18 drugs as nodes connected according to the number of comparisons made between them. The results indicated that industry-funded trials were more likely to compare across drugs and examine dyslipidemia as a condition, and less likely to register safety outcomes. The source of funding for clinical trials had a measurable effect on trial design, which helps quantify differences in research agendas. Improved monitoring of current clinical trials may be used to more closely align research agendas to clinical needs.
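The comparison networks described (drugs as nodes, edges weighted by the number of head-to-head comparisons) can be built with a simple pair count. A minimal sketch with hypothetical trial arms, not the registry data:

```python
from itertools import combinations
from collections import Counter

def comparison_network(trials):
    """Count head-to-head comparisons: each trial contributes one edge
    per unordered pair of distinct drugs among its arms."""
    edges = Counter()
    for arms in trials:
        for pair in combinations(sorted(set(arms)), 2):
            edges[pair] += 1
    return edges

# Hypothetical trials, each listing the drugs compared
trials = [
    ["atorvastatin", "simvastatin"],
    ["atorvastatin", "simvastatin"],
    ["atorvastatin", "ezetimibe", "simvastatin"],
]
print(comparison_network(trials))
```

Partitioning the trial list by funding source before counting yields one such network per funder category, as in the study.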
Publisher: SAGE Publications
Date: 14-05-2010
Publisher: AMPCo
Date: 07-2012
DOI: 10.5694/MJA12.10799
Publisher: Springer Science and Business Media LLC
Date: 31-10-2011
Publisher: Cold Spring Harbor Laboratory
Date: 28-11-2022
DOI: 10.1101/2022.11.27.22282798
Abstract: The COVID-19 pandemic serves as a clarion call to ensure health systems are better prepared to meet future emergencies. Digital health could play a significant role in preparing health systems to bend and stretch their resources and cope with various shocks by facilitating tasks such as disease monitoring and care delivery. However, the health system’s needs during such crises have not been thoroughly examined from the perspective of health professionals in general, and in the Australian health setting in particular. Here we describe the protocol of a qualitative design to learn from frontline healthcare workers’ experiences of the pandemic response that can guide preparation for future crises using digital health.
Publisher: SAGE Publications
Date: 07-2015
DOI: 10.1177/0310057X1504300407
Abstract: Prophylaxis for surgical site infection (SSI) is often at variance with guidelines, despite the prevalence of SSI and its associated cost, morbidity, and mortality. The CareTrack Australia study, undertaken by a number of the authors, demonstrated that appropriate care (in line with evidence- or consensus-based guidelines) was provided at 38% of eligible SSI healthcare encounters. Here, we report the indicator-level CareTrack Australia findings for SSI prophylaxis. Indicators were extracted from Australian and international clinical guidelines and ratified by clinical experts. A sample designed to be representative of the Australian population was recruited (n=1154). Participants’ medical records were reviewed and analysed for compliance with the five SSI indicators. The main outcome measure was the percentage of eligible healthcare encounters with documented compliance with indicators for appropriate SSI prophylaxis. Of the 35,145 CareTrack Australia encounters, 702 (2%) were eligible for scoring against the SSI indicators. Where antibiotics were recommended, compliance was 49% for contaminated surgery, 57% for clean-contaminated surgery and 85% for surgery involving a prosthesis; these fell to 8%, 10% and 14%, respectively (an average of 11%), when the currently recommended timing of antibiotic administration was included. Where antibiotics were not indicated, 72% of patients still received them. SSI prophylaxis in our sample was poor: over two-thirds of patients were given antibiotics, whether indicated or not, mainly at the wrong time. There is a need for national agreement on clinical standards, indicators and tools to guide, document and monitor SSI prophylaxis, with both local and national measures to increase and monitor their uptake.
Publisher: Wiley
Date: 24-03-2023
DOI: 10.1111/TRF.17315
Abstract: Managing critical bleeding with massive transfusion (MT) requires a multidisciplinary team, often physically separated, to perform several simultaneous tasks at short notice. This places a significant cognitive load on team members, who must maintain situational awareness in rapidly changing scenarios. Similar resuscitation scenarios have benefited from the use of clinical decision support (CDS) tools. A multicenter, multidisciplinary, user‐centered design (UCD) study was conducted to design a computerized CDS for MT. This study included analysis of the problem context with a cognitive walkthrough, development of a user requirement statement, and co‐design with users of prototypes for testing. The final prototype was evaluated using qualitative assessment and the System Usability Scale (SUS). Eighteen participants were recruited across four institutions. The first UCD cycle resulted in the development of four prototype interfaces that addressed the user requirements and context of implementation. Of these, the preferred interface was further developed in the second UCD cycle to create a high‐fidelity web‐based CDS for MT. This prototype was evaluated by 15 participants using a simulated bleeding scenario and demonstrated an average SUS of 69.3 (above average, SD 16) and a clear interface with easy‐to‐follow blood product tracking. We used a UCD process to explore a highly complex clinical scenario and develop a prototype CDS for MT that incorporates distributive situational awareness, supports multiple user roles, and allows simulated MT training. Evaluation of the impact of this prototype on the efficacy and efficiency of managing MT is currently underway.
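The SUS value reported above (69.3) comes from the standard System Usability Scale scoring rule: each of the 10 items is rated 1–5; odd (positively worded) items contribute score−1, even (negatively worded) items contribute 5−score, and the total is multiplied by 2.5 to give a 0–100 score. A short sketch with a hypothetical respondent:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5.
    Odd items (positively worded) contribute (score - 1);
    even items (negatively worded) contribute (5 - score);
    the sum is scaled to 0-100 by multiplying by 2.5."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent: all-neutral answers give the midpoint score
print(sus_score([3] * 10))  # → 50.0
```

Averaging such per-respondent scores across the 15 evaluators is what produces a study-level mean like the 69.3 reported.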
Publisher: Georg Thieme Verlag KG
Date: 2011
DOI: 10.4338/ACI-2011-02-RA-0011
Abstract: Background: Effective communication is essential to safe and efficient patient care. Additionally, many health information technology (HIT) developments, innovations, and standards aim to implement processes to improve the data quality and integrity of electronic health records (EHRs) for the purpose of clinical information exchange and communication. Objective: We aimed to understand the current patterns and perceptions of communication of common goals in the ICU using the distributed cognition and clinical communication space theoretical frameworks. Methods: We conducted a focus group and 5 interviews with ICU clinicians and observed 59.5 hours of interdisciplinary ICU morning rounds. Results: Clinicians used an EHR system, which included electronic documentation and computerized provider order entry (CPOE), and paper artifacts for documentation, yet preferred the verbal communication space as a method of information exchange because they perceived that the documentation was often not updated or efficient for information retrieval. These perceptions that the EHR is a “shift behind” may lead to a further reliance on verbal information exchange, which is a valuable clinical communication activity, yet is subject to information loss. Conclusions: Electronic documentation tools that capture, in real time, information that is currently verbally communicated may increase the effectiveness of communication.
Publisher: Elsevier BV
Date: 2021
Publisher: Oxford University Press (OUP)
Date: 05-2009
DOI: 10.1197/JAMIA.M3183
Publisher: Springer Science and Business Media LLC
Date: 17-03-2009
Publisher: Springer Science and Business Media LLC
Date: 10-08-2012
Publisher: BMJ
Date: 10-2021
DOI: 10.1136/BMJHCI-2021-100444
Abstract: To date, many artificial intelligence (AI) systems have been developed in healthcare, but adoption has been limited. This may be due to inappropriate or incomplete evaluation and a lack of internationally recognised AI standards on evaluation. To have confidence in the generalisability of AI systems in healthcare and to enable their integration into workflows, there is a need for a practical yet comprehensive instrument to assess the translational aspects of the available AI systems. Currently available evaluation frameworks for AI in healthcare focus on the reporting and regulatory aspects but offer little guidance on assessing the translational aspects of AI systems, such as their functional, utility and ethical components. To address this gap and create a framework that assesses real-world systems, an international team has developed a translationally focused evaluation framework termed ‘Translational Evaluation of Healthcare AI (TEHAI)’. A critical review of the literature assessed existing evaluation and reporting frameworks and their gaps. Next, using health technology evaluation and translational principles, reporting components were identified for consideration. These were independently reviewed for consensus inclusion in a final framework by an international panel of eight experts. TEHAI includes three main components: capability, utility and adoption. The emphasis on the translational and ethical features of model development and deployment distinguishes TEHAI from other evaluation instruments. Specifically, the evaluation components can be applied at any stage of the development and deployment of the AI system. One major limitation of existing reporting or evaluation frameworks is their narrow focus. TEHAI, because of its strong foundation in translational research models and its emphasis on safety, translational value and generalisability, not only has a theoretical basis but also practical application to assessing real-world systems.
The translational research approach used to develop TEHAI should see it applied not just to the evaluation of clinical AI in research settings, but more broadly to guide the evaluation of working clinical systems.
Publisher: JMIR Publications Inc.
Date: 15-12-2015
DOI: 10.2196/JMIR.4734
Publisher: Springer Science and Business Media LLC
Date: 30-10-2018
DOI: 10.1038/S41746-018-0069-6
Abstract: The original version of the published Article contained an error in the spelling of the third Author’s name. “John Halamaka” has been changed to “John Halamka”. This has been corrected in the HTML and PDF version of the Article.
Publisher: Oxford University Press (OUP)
Date: 13-04-2018
DOI: 10.1093/JAMIA/OCY028
Abstract: Many research fields, including psychology and the basic medical sciences, struggle with poor reproducibility of reported studies. Biomedical and health informatics is unlikely to be immune to these challenges. This paper explores replication in informatics and the unique challenges the discipline faces, via a narrative review of recent literature on research replication challenges. While there is growing interest in re-analysis of existing data, experimental replication studies appear uncommon in informatics. Context effects are a particular challenge as they make ensuring replication fidelity difficult, and the same intervention will never quite reproduce the same result in different settings. Replication studies take many forms, trading off testing the validity of past findings against testing generalizability. Exact and partial replication designs emphasize testing validity, while quasi and conceptual studies test the generalizability of an underlying model or hypothesis with different methods or in a different setting. The cost of poor replication is a weakening in the quality of published research and the evidence-based foundation of health informatics. The benefits of replication include increased rigor in research and the development of evaluation methods that distinguish the impact of context from the nonreproducibility of research. Taking replication seriously is essential if biomedical and health informatics is to be an evidence-based discipline.
Publisher: Oxford University Press (OUP)
Date: 09-2007
DOI: 10.1197/JAMIA.M2411
Publisher: Georg Thieme Verlag KG
Date: 04-2018
Abstract: Objective To conduct a replication study to validate previously identified significant risks and inefficiencies associated with the use of speech recognition (SR) for documentation within an electronic health record (EHR) system. Methods Thirty-five emergency department clinicians undertook randomly allocated clinical documentation tasks using keyboard and mouse (KBM) or SR using a commercial EHR system. The experiment design, setting, and tasks (E2) replicated an earlier study (E1), while technical integration issues that may have led to poorer SR performance were addressed. Results Complex tasks were significantly slower to complete using SR (16.94%) than KBM (KBM: 191.9 s, SR: 224.4 s; p = 0.009; CI, 11.9–48.3), replicating task completion times observed in the earlier experiment. Errors (non-typographical) were significantly higher with SR compared with KBM for both simple (KBM: 3, SR: 84; p < 0.001; CI, 1.5–2.5) and complex tasks (KBM: 23, SR: 53; p = 0.001; CI, 0.5–1.0), again replicating earlier results (E1: 170, E2: 163; p = 0.660; CI, 0.0–0.0). Typographical errors were reduced significantly in the new study (E1: 465, E2: 150; p < 0.001; CI, 2.0–3.0). Discussion The results of this study replicate those reported earlier. The use of SR for clinical documentation within an EHR system appears to be consistently associated with decreased time efficiency and increased errors. Modifications implemented to optimize SR integration in the EHR seem to have resulted in minor improvements that did not fundamentally change the overall results. Conclusion This replication study adds further evidence for the poor performance of SR-assisted clinical documentation within an EHR. Replication studies remain rare in the informatics literature, especially where study results are unexpected or have significant implications; such studies are clearly needed to avoid overdependence on the results of a single study.
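The 16.94% slowdown reported for complex tasks follows directly from the mean completion times quoted in the abstract:

```python
# Mean completion times in seconds, as quoted in the abstract
kbm, sr = 191.9, 224.4
slowdown = (sr - kbm) / kbm * 100  # relative increase over KBM
print(f"{slowdown:.2f}%")  # → 16.94%
```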
Publisher: JMIR Publications Inc.
Date: 23-09-2013
DOI: 10.2196/JMIR.2682
Publisher: JMIR Publications Inc.
Date: 22-05-2020
Abstract: Effective behavior change interventions may require ongoing personalized support for users. Rapid developments in digital technology and artificial intelligence are giving rise to more advanced types of personalized interventions that can analyze large amounts of data to provide real-time, contextualized support. Despite growing research attention, there is still a lack of consensus in the literature about what is considered a personalized system and how to design such systems. This paper provides a definition of personalization and proposes a set of building blocks to design and implement personalized behavior change interventions, drawing on concepts from control systems engineering. We also discuss existing challenges in evaluating the net effects of personalized interventions and outline future directions in this field.
Publisher: Oxford University Press (OUP)
Date: 05-2000
DOI: 10.1136/JAMIA.2000.0070215
Abstract: Information economics offers insights into the dynamics of information across networked systems like the Internet. An information marketplace is different from other marketplaces because an information good is not actually consumed and can be reproduced and distributed at almost no cost. For information producers to remain profitable, they will need to minimize their exposure to competition. For example, information can be sold by charging site access rather than information access fees, or it can be bundled with other information or "versioned." For information consumers, a variation of Malthus' law predicts that the exponential growth in information will mean that specific information will become increasingly expensive to find, because search costs will grow but human attention will remain limited. Furthermore, the low cost of creating poor-quality information on the Web means that low-quality information may eventually swamp high-quality resources. The use of reputable information portals on the Web, or smart search technologies, may help in the short run, but it is unclear whether an "information famine" is avoidable in the longer term.
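The search-cost argument can be made concrete with a toy model (an illustration, not from the article): if a consumer inspects items in a uniformly random order, the expected number of inspections needed to find one specific item among N is (N+1)/2, so each doubling of the information stock roughly doubles the expected search cost while attention stays fixed.

```python
def expected_search_cost(n_items):
    """Expected inspections to locate one target among n_items,
    inspecting items in a uniformly random order without replacement."""
    return (n_items + 1) / 2

# Toy illustration: the stock of information doubles each period,
# so the expected search cost roughly doubles as well
for n in (1_000, 2_000, 4_000):
    print(n, expected_search_cost(n))
```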
Publisher: BMJ
Date: 10-2017
DOI: 10.1136/BMJOPEN-2016-014048
Abstract: Despite widespread availability of clinical practice guidelines (CPGs), considerable gaps continue between the care that is recommended (‘appropriate care’) and the care provided. Problems with current CPGs are commonly cited as barriers to providing ‘appropriate care’. Our study aims to develop and test an alternative method to keep CPGs accessible and up to date. This method aims to mitigate existing problems by using a single process to develop clinical standards (embodied in clinical indicators) collaboratively with researchers, healthcare professionals, patients and consumers. A transparent and inclusive online curated (purpose-designed, custom-built, wiki-type) system will use an ongoing and iterative documentation process to facilitate synthesis of up-to-date information and make available its provenance. All participants are required to declare conflicts of interest. This protocol describes three phases: engagement of relevant stakeholders; design of a process to develop clinical standards (embodied in indicators) for ‘appropriate care’ for common medical conditions; and evaluation of our processes, products and feasibility. A modified e-Delphi process will be used to gain consensus on ‘appropriate care’ for a range of common medical conditions. Clinical standards and indicators will be developed through searches of national and international guidelines, and formulated with explicit criteria for inclusion, exclusion, time frame and setting. Healthcare professionals and consumers will review the indicators via the wiki-based modified e-Delphi process. Reviewers will declare conflicts of interest, which will be recorded and managed according to an established protocol. The provenance of all indicators and suggestions included or excluded will be logged from indicator inception to finalisation. A mixed-methods formative evaluation of our research methodology will be undertaken.
Human Research Ethics Committee approval has been received from the University of South Australia. We will submit the results of the study to relevant journals and offer national and international presentations.
Publisher: Elsevier BV
Date: 05-2013
DOI: 10.1016/J.IJMEDINF.2012.11.014
Abstract: To collect and critically review patient safety initiatives for health information technology (HIT). Publicly promulgated sets of advisories, recommendations, guidelines, or standards potentially addressing safe system design, build, implementation or use were identified by searching the websites of regional and national agencies and programmes in a non-exhaustive set of exemplar countries including England, Denmark, the Netherlands, the USA, Canada and Australia. Initiatives were categorised by type and software systems covered. We found 27 patient safety initiatives for HIT predominantly dealing with software systems for health professionals. Three initiatives addressed consumer systems. Seven of the initiatives specifically dealt with software for diagnosis and treatment, which are regulated as medical devices in England, Denmark and Canada. Four initiatives dealt with blood bank and image management software, which is regulated in the USA. Of the 16 initiatives directed at unregulated software, 11 were aimed at increasing standardisation using guidelines and standards for safe system design, build, implementation and use. Three initiatives for unregulated software were aimed at certification in the USA, Canada and Australia. Safety is addressed alongside interoperability in the Australian certification programme but it is not explicitly addressed in the US and Canadian programmes, though conformance with specific functionality, interoperability, security and privacy requirements may lead to safer systems. England appears to have the most comprehensive safety management programme for unregulated software, incorporating safety assurance at a local healthcare organisation level based on standards for risk management and user interface design, with national incident monitoring and a response function. There are significant gaps in the safety initiatives for HIT systems. Current initiatives are largely focussed on software.
With the exception of diagnostic, prognostic, monitoring and treatment software, which are subject to medical device regulations in some countries, the safety of the most common types of HIT systems such as EHRs and CPOE without decision support is not being explicitly addressed in most nations. Appropriate mechanisms for safety assurance are required for the full range of HIT systems for health professionals and consumers including all software and hardware throughout the system lifecycle. In addition to greater standardisation and oversight to ensure safe system design and build, appropriate implementation and use of HIT is critical to ensure patient safety.
Publisher: AMPCo
Date: 11-2012
DOI: 10.5694/MJA12.11210
Publisher: Oxford University Press (OUP)
Date: 09-2018
Publisher: Oxford University Press (OUP)
Date: 09-2000
DOI: 10.1136/JAMIA.2000.0070453
Abstract: Recent research has studied the communication behaviors of clinical hospital workers and observed a tendency for these workers to use communication behaviors that were often inefficient. Workers were observed to favor synchronous forms of communication, such as telephone calls and chance face-to-face meetings with colleagues, even when these channels were not effective. Synchronous communication also contributes to a highly interruptive working environment, increasing the potential for clinical errors to be made. This paper reviews these findings from a cognitive psychological perspective, focusing on current understandings of how human memory functions and on the potential consequences of interruptions on the ability to work effectively. It concludes by discussing possible communication technology interventions that could be introduced to improve the clinical communication environment and suggests directions for future research.
Publisher: Oxford University Press (OUP)
Date: 07-2009
DOI: 10.1111/J.1574-6976.2009.00175.X
Abstract: Gene cassettes are small mobile elements, consisting of little more than a single gene and recombination site, which are captured by larger elements called integrons. Several cassettes may be inserted into the same integron forming a tandem array. The discovery of integrons in the chromosome of many species has led to the identification of thousands of gene cassettes, mostly of unknown function, while integrons associated with transposons and plasmids carry mainly antibiotic resistance genes and constitute an important means of spreading resistance. An updated compilation of gene cassettes found in sequences of such 'mobile resistance integrons' in GenBank was facilitated by a specially developed automated annotation system. At least 130 different (<98% identical) cassettes that carry known or predicted antibiotic resistance genes were identified, along with many cassettes of unknown function. We list exemplar GenBank accession numbers for each and address some nomenclature issues. Various modifications to cassettes, some of which may be useful in tracking cassette epidemiology, are also described. Despite potential biases in the GenBank dataset, preliminary analysis of cassette distribution suggests interesting differences between cassettes and may provide useful information to direct more systematic studies.
Publisher: Oxford University Press (OUP)
Date: 28-10-190728635
Abstract: To understand the nature of health consumer self-management workarounds during the COVID-19 pandemic, to classify these workarounds using the Substitution, Augmentation, Modification, and Redefinition (SAMR) framework, and to see how digital tools had assisted these workarounds. We assessed 15 self-managing elderly patients with Type 2 diabetes, multiple chronic comorbidities, and low digital literacy. Interviews were conducted during COVID-19 lockdowns in May–June 2020 and participants were asked about how their self-management had differed from before. Each instance of change in self-management was identified as a consumer workaround and classified using the SAMR framework to assess the extent of change. We also identified instances where digital technology assisted with workarounds. Consumer workarounds at all SAMR levels were observed. Substitution, describing change in work quality or how basic information was communicated, was easy to make and involved digital tools that replaced face-to-face communications, such as the telephone. Augmentation, describing changes in task mechanisms that enhanced functional value, did not include any digital tools. Modification, which significantly altered task content and context, involved more complicated changes such as making video calls. Redefinition workarounds, which created tasks not previously required, such as using Google Home to remotely babysit grandchildren, had transformed daily routines. Health consumer workarounds need further investigation, as health consumers also use workarounds to bypass barriers during self-management. The SAMR framework classified the health consumer workarounds during COVID-19, but the framework needs further refinement to include more aspects of workarounds.
Publisher: Springer Science and Business Media LLC
Date: 02-04-2012
Publisher: Elsevier BV
Date: 02-1993
Publisher: Springer Science and Business Media LLC
Date: 12-06-2017
Publisher: Elsevier BV
Date: 10-2009
DOI: 10.1080/00313020903071447
Abstract: Several virulent clones of group B streptococcus (GBS) are known to be associated with certain serotypes and molecular epidemiological markers. It is unclear, however, whether the clinical significance of GBS can be predicted based solely on such molecular markers. The aim of this study was to test the hypothesis that GBS virulence can be predicted by using the molecular epidemiology markers. We examined 912 human GBS isolates in which 18 distinct molecular markers (including virulence-associated mobile genetic elements, polysaccharide capsule determinants, variants of a surface antigen and invasin, and antibiotic resistance-related genes) were characterised using multiplex PCR based reverse line blot assay. All strains were classified in clinically relevant invasive and colonising categories. Relationships between molecular markers and clinical phenotypes were tested using statistical and machine learning analyses. Classifier performance was evaluated by the area under receiver operator characteristic curve (AUC). The distribution of serotypes was comparable with those in previous reports (Ia, 22.1%; III, 34.7%; V, 17.7%). From single marker analyses, only alp3 (which encodes a surface protein antigen, commonly associated with serotype V) showed an increased association with invasive diseases (OR = 2.93, p = 0.0003). Molecular serotype (MS) II (OR = 10.0, p = 0.0007) had a significant association with early-onset neonatal disease when compared with late-onset diseases. Predictive analysis with logistic regression and machine learning classifiers, however, only yielded weak predictive power (AUC 0.56-0.71, stratified 10-fold cross-validation) across all the subgroups. While some molecular epidemiological markers are important in defining GBS clusters, a definitive predictive relationship between the molecular markers and clinical outcomes may be lacking.
Publisher: Oxford University Press (OUP)
Date: 09-2008
DOI: 10.1197/JAMIA.M2765
Publisher: Elsevier BV
Date: 02-2007
DOI: 10.1016/J.IJMEDINF.2006.03.006
Abstract: Online evidence retrieval systems are a potential tool in supporting evidence-based practice. Effective and tested techniques for assessing the impact of these systems on care delivery and patient outcomes are limited. In this study we applied the critical incident (CI) and journey mapping (JM) techniques to assess the integration of an online evidence system into everyday clinical practice and its impact on decision making and patient care. To elicit incidents, semi-structured interviews were conducted with 29 clinicians (13 hospital physician specialists, 16 clinical nurse consultants (CNCs)) who were experienced users of the online evidence system. Clinicians were also asked questions about how they had first used the system and how their use and experiences had changed over time. These narrative accounts were then mapped and scored using the journey mapping technique. Clinicians generated 85 critical incidents. Three categories of impact were identified: impact on clinical practice, impact on individual clinicians and impact on colleagues through the dissemination of information gained from the online evidence system. One quarter of these included specific examples of system use leading to improvements in patient care. Clinicians obtained an average journey mapping score of 22 out of a possible score of 36, demonstrating a good level of system integration. Average scores of doctors and CNCs were similar. However, individuals with the same scores often had very different journeys in system integration. The CI technique provided clear examples of the way in which system use had influenced practice and care delivery. The JM technique was found to be a useful method for providing a quantification of the different ways in which, and the extent to which, clinicians had integrated system use into practice, and insights into how system use can influence organisational culture.
The development of the journey mapping stages provides a structure by which the program logic of a clinical information system and its desired outcomes can be made explicit and be based upon users' experiences in everyday practice. Further work is required using this technique to assess its value as an evaluation method.
Publisher: Elsevier BV
Date: 03-2018
DOI: 10.1016/J.JBI.2018.01.008
Abstract: Clinical trial registries can be used to monitor the production of trial evidence and signal when systematic reviews become out of date. However, this use has been limited to date due to the extensive manual review required to search for and screen relevant trial registrations. Our aim was to evaluate a new method that could partially automate the identification of trial registrations that may be relevant for systematic review updates. We identified 179 systematic reviews of drug interventions for type 2 diabetes, which included 537 clinical trials that had registrations in ClinicalTrials.gov. Text from the trial registrations was used as features directly, or transformed using Latent Dirichlet Allocation (LDA) or Principal Component Analysis (PCA). We tested a novel matrix factorisation approach that uses a shared latent space to learn how to rank relevant trial registrations for each systematic review, comparing its performance against using document similarity to rank relevant trial registrations. The two approaches were tested on a holdout set of the newest trials from the set of type 2 diabetes systematic reviews and an unseen set of 141 clinical trial registrations from 17 updated systematic reviews published in the Cochrane Database of Systematic Reviews. The performance was measured by the number of relevant registrations found after examining 100 candidates (recall@100) and the median rank of relevant registrations in the ranked candidate lists. The matrix factorisation approach outperformed the document similarity approach with a median rank of 59 (of 128,392 candidate registrations in ClinicalTrials.gov) and recall@100 of 60.9% using LDA feature representation, compared to a median rank of 138 and recall@100 of 42.8% in the document similarity baseline. In the second set of systematic reviews and their updates, the highest performing approach used document similarity and gave a median rank of 67 (recall@100 of 62.9%).
A shared latent space matrix factorisation method was useful for ranking trial registrations to reduce the manual workload associated with finding relevant trials for systematic review updates. The results suggest that the approach could be used as part of a semi-automated pipeline for monitoring potentially new evidence for inclusion in a review update.
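The document-similarity baseline described above can be illustrated with a minimal sketch: candidate trial registrations are ranked by cosine similarity between term-frequency vectors of the review text and each registration. This is not the authors' implementation (which used richer feature representations such as LDA); the function names and toy registration texts below are illustrative assumptions.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a lowercased, whitespace-tokenised text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_registrations(review_text, registrations):
    """Rank candidate trial registrations by similarity to the review text.

    `registrations` maps registration id -> registration text.
    Returns ids sorted from most to least similar.
    """
    query = tf_vector(review_text)
    scored = {rid: cosine(query, tf_vector(text))
              for rid, text in registrations.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Toy example: a diabetes review should rank the diabetes trial first.
review = "metformin glycemic control in type 2 diabetes"
candidates = {
    "NCT001": "trial of metformin for glycemic control in type 2 diabetes",
    "NCT002": "trial of statins for cardiovascular risk reduction",
}
print(rank_registrations(review, candidates))  # NCT001 ranked first
```

In the study, a learned shared latent space outperformed this kind of similarity baseline on the main evaluation set, though the baseline remained competitive on the update set.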
Publisher: Oxford University Press (OUP)
Date: 07-2005
DOI: 10.1197/JAMIA.M1798
Publisher: Elsevier BV
Date: 08-2022
DOI: 10.1053/J.AJKD.2021.12.007
Abstract: Patients receiving hemodialysis experience high symptom burden and low quality of life (QOL). Electronic patient-reported outcome measures (e-PROMs) monitoring with feedback to clinicians may be an acceptable intervention to improve health-related QOL for patients receiving hemodialysis. This study explored patient and clinician perspectives on e-PROMs monitoring with feedback to clinicians. Qualitative study. 41 participants (12 patients, 13 nephrologists, 16 dialysis nurses) who participated in a 6-month feasibility pilot study of adults receiving facility-based hemodialysis across 4 Australian units. The intervention consisted of electronic symptom monitoring with feedback to clinicians, who also received evidence-based symptom management recommendations to improve health-related QOL. Semistructured interviews and focus group discussions explored the feasibility and acceptability of e-PROMs monitoring with feedback to clinicians. We conducted a thematic analysis of transcripts. We identified 4 themes: enabling efficient, systematic, and multidisciplinary patient-centered care; experiencing limited data and options for symptom management; requiring familiarity with technology and processes; and identifying barriers and competing priorities. While insufficient patient engagement, logistic/technical challenges, and delayed symptom feedback emerged as barriers to implementation, active engagement by nurses in encouraging and supporting patients during survey completion and clinicians' prompt action after symptom feedback were considered to be facilitators to implementation. Limited generalizability due to inclusion of English-speaking participants only. Patients, nurses, and nephrologists considered e-PROMs monitoring with feedback to clinicians feasible for symptom management in hemodialysis.
Clinician engagement, patient support, reliable technology, timely symptom feedback, and interventions to address symptom burden are likely to improve its implementation within research and clinical settings.
Publisher: Elsevier BV
Date: 11-2018
Publisher: JMIR Publications Inc.
Date: 07-12-2017
Publisher: BMJ
Date: 12-2021
DOI: 10.1136/BMJHCI-2021-100450
Abstract: Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare. We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’ and ‘medical’ or ‘healthcare’ and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’, ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic clinician-facing AI applications used to support healthcare-related tasks or decision-making. Across 27 studies, attitudes towards AI applications in healthcare, in general, were positive, more so for those with direct experience of AI, but provided certain safeguards were met. AI applications which automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty. Certain expectations of AI applications were common to many stakeholder groups from which a set of dependencies can be defined. Stakeholders differ in some but not all of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.
Publisher: BMJ
Date: 08-2020
DOI: 10.1136/BMJHCI-2020-100153
Abstract: To measure lookup rates of externally held primary care records accessed in emergency care and identify patient characteristics, conditions and potential consequences associated with access. Rates of primary care record access and re-presentation to the emergency department (ED) within 30 days and hospital admission. A retrospective observational study of 77 181 ED presentations over 4 years and 9 months, analysing 8184 index presentations in which patients’ primary care records were accessed from the ED. Data were compared with 17 449 randomly selected index control presentations. Analysis included propensity score matching for age and triage categories. 6.3% of overall ED presentations triggered a lookup (rising to 8.3% in year 5); 83.1% of patients were only looked up once and 16.9% of patients were looked up on multiple occasions. Lookup patients were on average 25 years older (z=−9.180, p<0.001, r=0.43). Patients with more urgent triage classifications had their records accessed more frequently (z=−36.47, p<0.001, r=0.23). Record access was associated with a significant but negligible increase in hospital admission (χ2(1, n=13 120)=98.385, p<0.001, phi=0.087) and readmission within 30 days (χ2(1, n=13 120)=86.288, p<0.001, phi=0.081). Emergency care clinicians access primary care records more frequently for older patients or those in higher triage categories. Increased levels of inpatient admission and re-presentation within 30 days are likely linked to age and triage categories. Further studies should focus on the impact of record access on clinical and process outcomes and which record elements have the most utility to shape clinical decisions.
Publisher: Oxford University Press (OUP)
Date: 14-11-2022
Publisher: Springer Science and Business Media LLC
Date: 28-10-2008
Publisher: Springer New York
Date: 03-10-2010
Publisher: Springer Science and Business Media LLC
Date: 22-11-2019
DOI: 10.1038/S41746-019-0190-1
Abstract: Clinicians spend a large amount of time on clinical documentation of patient encounters, often impacting quality of care and clinician satisfaction, and causing physician burnout. Advances in artificial intelligence (AI) and machine learning (ML) open the possibility of automating clinical documentation with digital scribes, using speech recognition to eliminate manual documentation by clinicians or medical scribes. However, developing a digital scribe is fraught with problems due to the complex nature of clinical environments and clinical conversations. This paper identifies and discusses major challenges associated with developing automated speech-based documentation in clinical settings: recording high-quality audio, converting audio to transcripts using speech recognition, inducing topic structure from conversation data, extracting medical concepts, generating clinically meaningful summaries of conversations, and obtaining clinical data for AI and ML algorithms.
Publisher: Oxford University Press (OUP)
Date: 27-07-2017
DOI: 10.1093/JAMIA/OCX073
Abstract: To compare the efficiency and safety of using speech recognition (SR) assisted clinical documentation within an electronic health record (EHR) system with use of keyboard and mouse (KBM). Thirty-five emergency department clinicians undertook randomly allocated clinical documentation tasks using KBM or SR on a commercial EHR system. Tasks were simple or complex, and with or without interruption. Outcome measures included task completion times and observed errors. Errors were classed by their potential for patient harm. Error causes were classified as due to IT system/system integration, user interaction, comprehension, or as typographical. User-related errors could be by either omission or commission. Mean task completion times were 18.11% slower overall when using SR compared to KBM (P = .001), 16.95% slower for simple tasks (P = .050), and 18.40% slower for complex tasks (P = .009). Increased errors were observed with use of SR (KBM 32, SR 138) for both simple (KBM 9, SR 75; P < .001) and complex (KBM 23, SR 63; P < .001) tasks. Interruptions did not significantly affect task completion times or error rates for either modality. For clinical documentation, SR was slower and increased the risk of documentation errors, including errors with the potential to cause clinical harm, compared to KBM. Some of the observed increase in errors may be due to suboptimal SR to EHR integration and workflow. Use of SR to drive interactive clinical documentation in the EHR requires careful evaluation. Current generation implementations may require significant development before they are safe and effective. Improving system integration and workflow, as well as SR accuracy and user-focused error correction strategies, may improve SR performance.
Publisher: Cambridge University Press (CUP)
Date: 03-1992
DOI: 10.1017/S0269888900006159
Abstract: The representation of physical systems using qualitative formalisms is examined in this review, with an emphasis on recent developments in the area. The push to develop reasoning systems incorporating deep knowledge originally focused on naive physical representations, but has now shifted to more formal ones based on qualitative mathematics. The qualitative differential constraint formalism used in systems like QSIM is examined, and current efforts to link this to competing representations like Qualitative Process Theory are noted. Inference and representation are intertwined, and the decision to represent notions like causality explicitly, or infer it from other properties, has shifted as the field has developed. The evolution of causal and functional representations is thus examined. Finally, a growing body of work that allows reasoning systems to utilize multiple representations of a system is identified. Dimensions along which multiple model hierarchies could be constructed are examined, including mode of behaviour, granularity, ontology, and representational depth.
Publisher: Informa UK Limited
Date: 06-2004
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 12-2008
DOI: 10.1097/QCO.0B013E3283118932
Abstract: To explore recent developments in computerized evidence-based guidelines and decision support systems that have been designed to improve the effectiveness and efficiency of antibiotic prescribing. The most frequently utilized decision support systems are electronic guidelines and protocols, especially for empirical selection of antibiotics. The majority of decision support systems result in improvement in clinical performance and, in at least half of the published trials, in patient outcomes. Despite the reported successes of individual applications, the safety of electronic prescribing systems in routine practice has been identified recently as an issue of potential concern. Bioinformatics-assisted prescribing may contribute to reducing the complexities of prescribing combinations of antimicrobials in the era of multidrug resistance. The reemerging interest in prescribing decision support reflects the recent change in emphasis from support for diagnostic decisions towards support for patient management and from systems targeting a broad range of clinical diagnoses to task specific and condition-specific decision aids.
Publisher: BMJ
Date: 19-08-2010
Abstract: To explore the feasibility of using statistical text classification techniques to automatically categorise clinical incident reports. Statistical text classifiers based on Naïve Bayes and Support Vector Machine algorithms were trained and tested on incident reports submitted by public hospitals to identify two classes of clinical incidents: inadequate clinical handover and incorrect patient identification. Each classifier was trained on 600 reports (300 positives, 300 negatives), and tested on 372 reports (248 positives, 124 negatives). The results were evaluated using standard measures of accuracy, precision, recall, F-measure and area under curve (AUC) of receiver operating characteristics (ROC). Classifier learning rates were also evaluated, using classifier accuracy against training set size. All classifiers performed well in categorising clinical handover and patient identification incidents. Naïve Bayes attained the best performance on handover incidents, correctly identifying 86.29% of reporter-classified incidents (precision = 0.84, recall = 0.90, F-measure = 0.87, AUC = 0.93) and 91.53% of expert-classified incidents (precision = 0.87, recall = 0.98, F-measure = 0.92, AUC = 0.97). For patient identification incidents, the best results were obtained when Support Vector Machine with radial-basis function kernel was used to classify reporter-classified reports (accuracy = 97.98%, precision = 0.98, recall = 0.98, F-measure = 0.98, AUC = 1.00) and when Naïve Bayes was used on expert-classified reports (accuracy = 95.97%, precision = 0.95, recall = 0.98, F-measure = 0.96, AUC = 0.99). A relatively small training set was found to be adequate, with most classifiers achieving an accuracy above 80% when the training set size was as small as 100 samples. This study demonstrates the feasibility of using text classification techniques to automatically categorise clinical incident reports.
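The multinomial Naïve Bayes approach evaluated in the study can be sketched in a few lines: per-class token counts with Laplace smoothing, then classification by maximum log-posterior. This is a toy illustration, not the study's pipeline; the `train_nb`/`classify_nb` names and the miniature incident reports are invented for the example.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model from (text, label) pairs."""
    class_docs = defaultdict(int)        # documents per class
    class_tokens = defaultdict(Counter)  # token counts per class
    vocab = set()
    for text, label in docs:
        class_docs[label] += 1
        tokens = text.lower().split()
        class_tokens[label].update(tokens)
        vocab.update(tokens)
    total = sum(class_docs.values())
    priors = {c: math.log(n / total) for c, n in class_docs.items()}
    return priors, class_tokens, vocab

def classify_nb(model, text):
    """Return the most probable class for `text` under the trained model."""
    priors, class_tokens, vocab = model
    scores = {}
    for c, prior in priors.items():
        denom = sum(class_tokens[c].values()) + len(vocab)
        score = prior
        for t in text.lower().split():
            # Laplace (add-one) smoothing avoids zero probabilities.
            score += math.log((class_tokens[c][t] + 1) / denom)
        scores[c] = score
    return max(scores, key=scores.get)

# Miniature incident reports: handover vs patient-identification classes.
training = [
    ("handover notes missing at shift change", "handover"),
    ("incomplete handover between nursing shifts", "handover"),
    ("wrong patient wristband checked before procedure", "identification"),
    ("patient identification label mismatch on specimen", "identification"),
]
model = train_nb(training)
print(classify_nb(model, "no handover given at shift change"))  # handover
```

In practice the study also compared Support Vector Machine classifiers and found both families performed well with relatively small training sets.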
Publisher: ACM
Date: 25-04-2020
Publisher: American Medical Association (AMA)
Date: 23-01-2018
Publisher: Oxford University Press (OUP)
Date: 09-2010
Publisher: Springer Science and Business Media LLC
Date: 24-02-2015
Publisher: Elsevier BV
Date: 02-2020
Publisher: Springer Science and Business Media LLC
Date: 24-03-2017
Publisher: Springer Berlin Heidelberg
Date: 1991
Publisher: Oxford University Press (OUP)
Date: 03-2019
DOI: 10.1093/IWC/IWZ015
Abstract: Varying understandings of UX in conversational interfaces literature. A UX assessment framework with UX dimensions and their relevant attributes. Descriptions of the six main questionnaires for evaluating conversational interfaces. A comparison of the six questionnaires based on their coverage of UX dimensions.
Publisher: Georg Thieme Verlag KG
Date: 25-04-2019
Abstract: Introduction: Whilst general artificial intelligence (AI) is yet to appear, today’s narrow AI is already good enough to transform much of healthcare over the next two decades. Objective: There is much discussion of the potential benefits of AI in healthcare and this paper reviews the cost that may need to be paid for these benefits, including changes in the way healthcare is practiced, patients are engaged, medical records are created, and work is reimbursed. Results: Whilst AI will be applied to classic pattern recognition tasks like diagnosis or treatment recommendation, it is likely to be as disruptive to clinical work as it is to care delivery. Digital scribe systems that use AI to automatically create electronic health records promise great efficiency for clinicians but may lead to potentially very different types of clinical records and workflows. In disciplines like radiology, AI is likely to see image interpretation become an automated process with diminishing human engagement. Primary care is also being disrupted by AI-enabled services that automate triage, along with services such as telemedical consultations. This altered future may necessarily see an economic change where clinicians are increasingly reimbursed for value, and AI is reimbursed at a much lower cost for volume. Conclusion: AI is likely to be associated with some of the biggest changes we will see in healthcare in our lifetime. To fully engage with this change brings promise of the greatest reward. To not engage is to pay the highest price.
Publisher: Springer Science and Business Media LLC
Date: 03-05-2016
Publisher: JMIR Publications Inc.
Date: 11-02-2021
DOI: 10.2196/24572
Abstract: COVID-19 has overwhelmed health systems worldwide. It is important to identify severe cases as early as possible, such that resources can be mobilized and treatment can be escalated. This study aims to develop a machine learning approach for automated severity assessment of COVID-19 based on clinical and imaging data. Clinical data—including demographics, signs, symptoms, comorbidities, and blood test results—and chest computed tomography scans of 346 patients from 2 hospitals in the Hubei Province, China, were used to develop machine learning models for automated severity assessment in diagnosed COVID-19 cases. We compared the predictive power of the clinical and imaging data from multiple machine learning models and further explored the use of four oversampling methods to address the imbalanced classification issue. Features with the highest predictive power were identified using the Shapley Additive Explanations framework. Imaging features had the strongest impact on the model output, while a combination of clinical and imaging features yielded the best performance overall. The identified predictive features were consistent with those reported previously. Although oversampling yielded mixed results, it achieved the best model performance in our study. Logistic regression models differentiating between mild and severe cases achieved the best performance for clinical features (area under the curve [AUC] 0.848, sensitivity 0.455, specificity 0.906), imaging features (AUC 0.926, sensitivity 0.818, specificity 0.901), and a combination of clinical and imaging features (AUC 0.950, sensitivity 0.764, specificity 0.919). The synthetic minority oversampling method further improved the performance of the model using combined features (AUC 0.960, sensitivity 0.845, specificity 0.929).
Clinical and imaging features can be used for automated severity assessment of COVID-19 and can potentially help triage patients with COVID-19 and prioritize care delivery to those at a higher risk of severe disease.
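The class-balancing idea in the abstract above can be sketched in a few lines. This is simple random duplication of minority-class rows, a simplified stand-in for the four oversampling methods the study compared (SMOTE, for instance, interpolates new synthetic rows rather than duplicating existing ones); all data below are hypothetical.

```python
import random

def random_oversample(features, labels, seed=0):
    """Duplicate minority-class rows at random until all classes match the
    size of the largest class. A simplified stand-in for methods such as
    SMOTE, which would synthesize interpolated rows instead."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(rows) for rows in by_class.values())
    out_x, out_y = [], []
    for y, rows in by_class.items():
        resampled = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        out_x.extend(resampled)
        out_y.extend([y] * target)
    return out_x, out_y

# Hypothetical imbalanced data: 4 mild cases vs 2 severe cases.
X = [[1.0], [1.1], [0.9], [1.2], [5.0], [5.2]]
y = ["mild", "mild", "mild", "mild", "severe", "severe"]
Xb, yb = random_oversample(X, y)
```

After resampling, a classifier such as the study's logistic regression would be trained on the balanced set rather than the raw one.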
Publisher: JMIR Publications Inc.
Date: 27-04-2018
Abstract: Context-aware systems, also known as context-sensitive systems, are computing applications designed to capture, interpret, and use contextual information and provide adaptive services according to the current context of use. Context-aware systems have the potential to support patients with chronic conditions; however, little is known about how such systems have been utilized to facilitate patient work. This study aimed to characterize the different tasks and contexts in which context-aware systems for patient work were used, as well as to assess any existing evidence about the impact of such systems on health-related process or outcome measures. A total of 6 databases (MEDLINE, EMBASE, CINAHL, ACM Digital, Web of Science, and Scopus) were scanned using a predefined search strategy. Studies were included in the review if they focused on patients with chronic conditions, involved the use of a context-aware system to support patients’ health-related activities, and reported the evaluation of the systems by the users. Studies were screened by independent reviewers, and a narrative synthesis of included studies was conducted. The database search retrieved 1478 citations; 6 papers were included, all published from 2009 onwards. The majority of the papers were quasi-experimental and involved pilot and usability testing with a small number of users; there were no randomized controlled trials (RCTs) to evaluate the efficacy of a context-aware system. In the included studies, context was captured using sensors or self-reports, sometimes involving both. Most studies used a combination of sensor technology and mobile apps to deliver personalized feedback. A total of 3 studies examined the impact of interventions on health-related measures, showing positive results. The use of context-aware systems to support patient work is an emerging area of research.
RCTs are needed to evaluate the effectiveness of context-aware systems in improving patient work, self-management practices, and health outcomes in chronic disease patients.
Publisher: BMJ
Date: 12-05-2010
Abstract: Interruptions and multitasking are implicated as a major cause of clinical inefficiency and error. The aim was to measure the association between emergency doctors' rates of interruption and task completion times and rates. The authors conducted a prospective observational time and motion study in the emergency department of a 400-bed teaching hospital. Forty doctors (91% of medical staff) were observed for 210.45 h on weekdays. The authors calculated the time on task (TOT), the relationship between TOT and interruptions, and the proportion of time in work task categories. Length-biased sampling was controlled for. Doctors were interrupted 6.6 times/h. 11% of all tasks were interrupted, 3.3% more than once. Doctors multitasked for 12.8% of time. The mean TOT was 1:26 min. Interruptions were associated with a significant increase in TOT. However, when length-biased sampling was accounted for, interrupted tasks were unexpectedly completed in a shorter time than uninterrupted tasks. Doctors failed to return to 18.5% (95% CI 15.9% to 21.1%) of interrupted tasks. It appears that in busy interrupt-driven clinical environments, clinicians reduce the time they spend on clinical tasks if they experience interruptions, and may delay or fail to return to a significant portion of interrupted tasks. Task shortening may occur because interrupted tasks are truncated to 'catch up' for lost time, which may have significant implications for patient safety.
Publisher: Elsevier BV
Date: 02-2016
DOI: 10.1016/J.JBI.2015.12.016
Abstract: To introduce and evaluate a method that uses electronic medical record (EMR) data to measure the effects of computer system downtime on clinical processes associated with pathology testing and results reporting. A matched case-control design was used to examine the effects of five downtime events over 11 months, ranging from 5 to 300 min. Four indicator tests representing different laboratory workflows were selected to measure delays and errors: potassium, haemoglobin, troponin and activated partial thromboplastin time. Tests exposed to a downtime were matched to tests during unaffected control periods by test type, time of day and day of week. Measures included clinician read time (CRT), laboratory turnaround time (LTAT), and rates of missed reads, futile searches, duplicate orders, and missing test results. The effects of downtime varied with the type of IT problem. When clinicians could not log on to a results reporting system for 17 min, the CRT for potassium and haemoglobin tests was five (10.3 vs. 2.0 days) and six times (13.4 vs. 2.1 days) longer than control (p=0.01-0.04; p=0.0001-0.003). Clinician follow-up of tests was also delayed by another downtime involving a power outage, with a small effect. In contrast, laboratory processing of troponin tests was unaffected by network services and routing problems. Errors including missed reads, futile searches, duplicate orders and missing test results could not be examined because the sample size of affected tests was not sufficient for statistical testing. This study demonstrates the feasibility of using routinely collected EMR data with a matched case-control design to measure the effects of downtime on clinical processes. Even brief system downtimes may impact patient care. The methodology has potential to be applied to other clinical processes with established workflows where tasks are pre-defined, such as medications management.
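The matching step described above (pairing each downtime-exposed test with a control by test type, time of day, and day of week) can be sketched as a keyed lookup. The record fields and values below are illustrative, not the study's data schema.

```python
def match_controls(cases, controls):
    """Pair each downtime-exposed test with an unexposed control sharing
    the same test type, hour of day, and day of week -- the matching
    variables described in the study. Field names are illustrative."""
    pool = {}
    for c in controls:
        pool.setdefault((c["test"], c["hour"], c["weekday"]), []).append(c)
    pairs = []
    for case in cases:
        key = (case["test"], case["hour"], case["weekday"])
        if pool.get(key):  # unmatched cases are simply dropped
            pairs.append((case, pool[key].pop()))
    return pairs

# Hypothetical records: the potassium case matches, the troponin case does not.
cases = [
    {"test": "potassium", "hour": 9, "weekday": "Mon"},
    {"test": "troponin", "hour": 14, "weekday": "Fri"},
]
controls = [{"test": "potassium", "hour": 9, "weekday": "Mon"}]
pairs = match_controls(cases, controls)
```

Delay measures such as CRT and LTAT would then be compared within each matched pair.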
Publisher: Elsevier BV
Date: 12-2016
DOI: 10.1016/J.JCLINEPI.2016.07.010
Abstract: To characterize the conclusions and production of nonsystematic reviews about neuraminidase inhibitors relative to financial competing interests held by the authors. We searched for articles about neuraminidase inhibitors and influenza (January 2005 to April 2015), identifying nonsystematic reviews and grading them according to the favorable/nonfavorable presentation of evidence on safety and efficacy. We recorded financial competing interests disclosed in the reviews and from other articles written by their authors. We measured associations between competing interests, author productivity, and conclusions. Among 213 nonsystematic reviews, 138 (65%) presented favorable conclusions. Financial competing interests were identified for 26% (137/532) of authors; 51% (108/213) of reviews were associated with a financial competing interest. Reviews produced exclusively by authors with financial competing interests (33%; 71/213) were more likely to present favorable conclusions than reviews with no competing interests (risk ratio 1.27; 95% confidence interval 1.03-1.55). Authors with financial competing interests published more articles about neuraminidase inhibitors than their counterparts. Half of nonsystematic reviews about neuraminidase inhibitors included an author with a financial competing interest. Reviews produced exclusively by these authors were more likely to present favorable conclusions, and authors with financial competing interests published a greater number of reviews.
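The risk ratio with 95% confidence interval reported above is a standard two-group comparison; a sketch of the usual log-scale normal approximation follows. The counts used below are hypothetical, not the study's data.

```python
import math

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio of an event (e.g. a favourable conclusion) in an exposed
    group versus an unexposed group, with a 95% CI from the standard
    log-scale normal approximation."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical counts: 55/71 favourable in one group vs 60/100 in the other.
rr, lower, upper = risk_ratio(55, 71, 60, 100)
```

A confidence interval whose lower bound exceeds 1 (as in the study's 1.03–1.55) indicates the elevated risk is statistically distinguishable from no difference at the 5% level.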
Publisher: Springer Science and Business Media LLC
Date: 20-12-2022
DOI: 10.1038/S41598-022-26492-5
Abstract: Mass community testing is a critical means for monitoring the spread of the COVID-19 pandemic. Polymerase chain reaction (PCR) is the gold standard for detecting the causative virus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), but the test is invasive, test centers may not be readily available, and the wait for laboratory results can take several days. Various machine learning based alternatives to PCR screening for SARS-CoV-2 have been proposed, including cough sound analysis. Cough classification models appear to be a robust means to predict infective status, but collecting reliable PCR-confirmed data for their development is challenging, and recent work using unverified crowdsourced data is seen as a viable alternative. In this study, we report experiments that assess cough classification models trained (i) using data from PCR-confirmed COVID subjects and (ii) using data of individuals self-reporting their infective status. We compare performance using PCR-confirmed data. Models trained on PCR-confirmed data perform better than those trained on patient-reported data. Models using PCR-confirmed data also exploit more stable predictive features and converge faster. Crowdsourced cough data is less reliable than PCR-confirmed data for developing predictive models for COVID-19, and this raises concerns about the utility of patient-reported outcome data in developing other clinical predictive models when better gold-standard data are available.
Publisher: IEEE
Date: 07-2020
Publisher: AMPCo
Date: 04-2012
DOI: 10.5694/MJA12.10475
Publisher: BMJ
Date: 02-2021
DOI: 10.1136/BMJHCI-2020-100251
Abstract: Machine learning algorithms are being used to screen and diagnose disease, prognosticate and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and to identify situations where further refinement and evaluation are required prior to large-scale use.
Publisher: Springer Science and Business Media LLC
Date: 06-02-2013
Publisher: Oxford University Press (OUP)
Date: 05-04-2022
Abstract: While families have a central role in shaping individual choices and behaviors, healthcare largely focuses on treating individuals or supporting self-care. However, a family is also a health unit. We argue that family informatics is a necessary evolution in the scope of health informatics. To deal with the needs of individuals, we must ensure technologies account for the role of their families, and this may require new classes of digital service. Social networks can help conceptualize the structure, composition, and behavior of families. A family network can be seen as a multiagent system with distributed cognition. Digital tools can address family needs in (1) sensing and monitoring, (2) communicating and sharing, (3) deciding and acting, and (4) treating and preventing illness. Family informatics is inherently multidisciplinary and has the potential to address unresolved chronic health challenges such as obesity, mental health, and substance abuse; to support acute health challenges; and to improve the capacity of individuals to manage their own health needs.
Publisher: Springer Science and Business Media LLC
Date: 02-2009
Publisher: SAGE Publications
Date: 11-12-2018
Abstract: We identify and describe nine key, short-term challenges to help healthcare organizations, health information technology developers, researchers, policymakers, and funders focus their efforts on health information technology–related patient safety. Categorized according to the stage of the health information technology lifecycle where they appear, these challenges relate to: (1) developing models, methods, and tools to enable risk assessment; (2) developing standard user interface design features and functions; (3) ensuring the safety of software in an interfaced, network-enabled clinical environment; (4) implementing a method for unambiguous patient identification (1–4: Design and Development stage); (5) developing and implementing decision support which improves safety; (6) identifying practices to safely manage information technology system transitions (5 and 6: Implementation and Use stage); (7) developing real-time methods to enable automated surveillance and monitoring of system performance and safety; (8) establishing the cultural and legal framework/safe harbor to allow sharing of information about hazards and adverse events; and (9) developing models and methods for consumers/patients to improve health information technology safety (7–9: Monitoring, Evaluation, and Optimization stage). These challenges represent key “to-do’s” that must be completed before we can expect to have the safe, reliable, and efficient health information technology–based systems required to care for patients.
Publisher: Springer Science and Business Media LLC
Date: 04-01-2018
Publisher: Oxford University Press (OUP)
Date: 15-09-2015
DOI: 10.1093/JAMIA/OCV110
Abstract: Objective To develop a predictive model for real-time predictions of length of stay, mortality, and readmission for hospitalized patients using electronic health records (EHRs). Materials and Methods A Bayesian Network model was built to estimate the probability of a hospitalized patient being “at home,” in the hospital, or dead for each of the next 7 days. The network utilizes patient-specific administrative and laboratory data and is updated each time a new pathology test result becomes available. Electronic health records from 32 634 patients admitted to a Sydney metropolitan hospital via the emergency department from July 2008 through December 2011 were used. The model was tested on 2011 data and trained on the data of earlier years. Results The model achieved an average daily accuracy of 80% and area under the receiver operating characteristic curve (AUROC) of 0.82. The model’s predictive ability was highest within 24 hours from prediction (AUROC = 0.83) and decreased slightly with time. Death was the most predictable outcome with a daily average accuracy of 93% and AUROC of 0.84. Discussion We developed the first non–disease-specific model that simultaneously predicts remaining days of hospitalization, death, and readmission as part of the same outcome. By providing a future daily probability for each outcome class, we enable the visualization of future patient trajectories. Among these, it is possible to identify trajectories indicating expected discharge, expected continuing hospitalization, expected death, and possible readmission. Conclusions Bayesian Networks can model EHRs to provide real-time forecasts for patient outcomes, which provide richer information than traditional independent point predictions of length of stay, death, or readmission, and can thus better support decision making.
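The daily-trajectory idea above (a probability over "home", "hospital", "dead" for each of the next 7 days) can be illustrated with a plain Markov-chain sketch. This is only the forward-propagation step: the study's Bayesian network additionally conditions on patient-specific administrative and laboratory data and updates as new pathology results arrive. The transition probabilities below are hypothetical.

```python
def forecast(p0, transitions, days=7):
    """Roll a patient's state distribution over ('home', 'hospital',
    'dead') forward one day at a time, returning one distribution per
    future day. A simplified Markov-chain stand-in for the paper's
    Bayesian network."""
    states = ("home", "hospital", "dead")
    trajectory, p = [], p0
    for _ in range(days):
        p = [sum(p[i] * transitions[i][j] for i in range(3)) for j in range(3)]
        trajectory.append(dict(zip(states, p)))
    return trajectory

# Hypothetical daily transition matrix, rows/columns ordered
# home, hospital, dead; the patient starts the forecast in hospital.
T = [[0.95, 0.05, 0.00],   # home -> (stay home, readmitted, dead)
     [0.20, 0.78, 0.02],   # hospital -> (discharged, stay, dead)
     [0.00, 0.00, 1.00]]   # dead is absorbing
trajectory = forecast([0.0, 1.0, 0.0], T)
```

Plotting the three probabilities over the 7 days gives exactly the kind of trajectory visualization the abstract describes (expected discharge, continuing hospitalization, death, or readmission).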
Publisher: Elsevier BV
Date: 2003
DOI: 10.1016/S1386-5056(02)00046-1
Abstract: To investigate factors influencing variations in clinicians' use of an online evidence retrieval system. Public hospitals in New South Wales, Australia. Web log analysis demonstrated considerable variation in rates of evidence use by clinicians at different hospitals. Focus groups and interviews were held with 61 staff from three hospitals, two with high rates of use and one with a low rate of use, to explore variation in evidence use. Differences between hospitals' and professional groups' (doctors, nurses and allied health) use of online evidence could be explained by organizational, professional and cultural factors. These included the presence of champions, organizational cultures which supported evidence-based practice (EBP), and the database searching skills of individual clinicians. Staff shortages, ease of access and time taken to use the online evidence system were cited as barriers to use at the low use site, but no objective differences in these measures were found between the high and low use sites. Social and cultural factors were found to be better discriminators of high and low evidence use than technical factors.
Publisher: SAGE Publications
Date: 29-08-2020
Abstract: To inform the development of automated summarization of clinical conversations, this study sought to estimate the proportion of doctor-patient communication in general practice (GP) consultations used for generating a consultation summary. Two researchers with a medical degree read the transcripts of 44 GP consultations and highlighted the phrases to be used for generating a summary of the consultation. For all consultations, less than 20% of all words in the transcripts were needed for inclusion in the summary. On average, 9.1% of all words in the transcripts, 26.6% of all medical terms, and 27.3% of all speaker turns were highlighted. The results indicate that communication content used for generating a consultation summary makes up a small portion of GP consultations, and automated summarization solutions—such as digital scribes—must focus on identifying the 20% relevant information for automatically generating consultation summaries.
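The word-level measure reported above (the share of transcript words marked as summary-worthy) reduces to a simple ratio; a sketch follows. The transcript snippet and whitespace tokenization are illustrative simplifications, not the study's data or method.

```python
def summary_fraction(transcript, highlighted):
    """Share of transcript words marked for inclusion in the consultation
    summary. Whitespace splitting stands in for proper tokenization."""
    return len(highlighted.split()) / len(transcript.split())

# Hypothetical snippet: 2 of 20 words are highlighted as summary-worthy.
transcript = ("so how have you been feeling this week any more of that "
              "chest pain you mentioned at the last visit")
fraction = summary_fraction(transcript, "chest pain")
```

The study's finding that this fraction averaged 9.1% of words is what motivates framing digital-scribe summarization as a selection problem over a small relevant subset.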
Publisher: Springer Science and Business Media LLC
Date: 21-07-2017
Publisher: Oxford University Press (OUP)
Date: 21-11-2004
DOI: 10.1197/JAMIA.M1480
Publisher: Elsevier BV
Date: 03-2022
Publisher: Springer Berlin Heidelberg
Date: 2001
Publisher: Springer Science and Business Media LLC
Date: 2023
Publisher: Oxford University Press (OUP)
Date: 26-08-2020
Abstract: The study sought to understand the potential roles of a future artificial intelligence (AI) documentation assistant in primary care consultations and to identify implications for doctors, patients, the healthcare system, and technology design from the perspective of general practitioners. Co-design workshops with general practitioners were conducted. The workshops focused on (1) understanding the current consultation context and identifying existing problems, (2) ideating future solutions to these problems, and (3) discussing future roles for AI in primary care. The workshop activities included affinity diagramming, brainwriting, and video prototyping methods. The workshops were audio-recorded and transcribed verbatim. Inductive thematic analysis of the transcripts of conversations was performed. Two researchers facilitated 3 co-design workshops with 16 general practitioners. Three main themes emerged: professional autonomy, human-AI collaboration, and new models of care. Major implications identified within these themes included (1) concerns with medico-legal aspects arising from constant recording and accessibility of full consultation records, (2) future consultations taking place out of the exam rooms in a distributed system involving empowered patients, (3) human conversation and empathy remaining the core tasks of doctors in any future AI-enabled consultations, and (4) questioning the current focus of AI initiatives on improved efficiency as opposed to patient care. AI documentation assistants will likely be integral to future primary care consultations. However, these technologies will still need to be supervised by a human until strong evidence for reliable autonomous performance is available. Therefore, different human-AI collaboration models will need to be designed and evaluated to ensure patient safety, quality of care, doctor safety, and doctor autonomy.
Publisher: JMIR Publications Inc.
Date: 07-11-2019
DOI: 10.2196/15360
Abstract: The personalization of conversational agents with natural language user interfaces is seeing increasing use in health care applications, shaping the content, structure, or purpose of the dialogue between humans and conversational agents. The goal of this systematic review was to understand the ways in which personalization has been used with conversational agents in health care and characterize the methods of its implementation. We searched on PubMed, Embase, CINAHL, PsycInfo, and ACM Digital Library using a predefined search strategy. The studies were included if they: (1) were primary research studies that focused on consumers, caregivers, or health care professionals; (2) involved a conversational agent with an unconstrained natural language interface; (3) tested the system with human subjects; and (4) implemented personalization features. The search found 1958 publications. After abstract and full-text screening, 13 studies were included in the review. Common examples of personalized content included feedback, daily health reports, alerts, warnings, and recommendations. The personalization features were implemented without a theoretical framework of customization and with limited evaluation of its impact. While conversational agents with personalization features were reported to improve user satisfaction, user engagement and dialogue quality, the role of personalization in improving health outcomes was not assessed directly. Most of the studies in our review implemented the personalization features without theoretical or evidence-based support for them and did not leverage the recent developments in other domains of personalization. Future research could incorporate personalization as a distinct design factor with a more careful consideration of its impact on health outcomes and its implications on patient safety, privacy, and decision-making.
Publisher: Elsevier BV
Date: 03-2003
DOI: 10.1016/S1386-5056(02)00106-5
Abstract: This paper presents a framework for the design of interactions between human and computational agents working in organisations, mediated by technological systems. The design of interactions within an organisation is viewed from the point of view, not of the technology mediating the new interaction, but of the human and computational agents who interact with each other. Understanding the limits to individual agent resources permits an analysis of the impact that a new interaction will have in a given setting. When we look beyond simple interaction settings, we can use the notion of interaction equilibria to predict the impact of new information and communication technologies within an organisation. Economic supply and demand curves, for example, may allow us to make both qualitative and quantitative predictions about technological adoption of communication systems. Rather than focusing solely on characteristics of individual technologies, or psychological and social issues, these can be combined to explain the overall decisions that individuals make when using technologies. Without necessarily understanding all the local decision criteria used by any individual, we can make robust predictions about how a group as a whole will interact.
Publisher: Association for Computing Machinery (ACM)
Date: 30-04-2023
DOI: 10.1145/3589961
Publisher: Oxford University Press (OUP)
Date: 09-10-2023
Publisher: Elsevier BV
Date: 03-2015
DOI: 10.1016/J.IJMEDINF.2014.12.003
Abstract: To analyse patient safety events associated with England's national programme for IT (NPfIT). Retrospective analysis of all safety events managed by a dedicated IT safety team between September 2005 and November 2011 was undertaken. Events were reviewed against an existing classification for problems associated with IT. The proportion of reported events per problem type, consequences, source of report, resolution within 24 h, time of day and day of week were examined. Sub-group analyses were undertaken for events involving patient harm and those that occurred on a large scale. Of the 850 events analysed, 68% (n=574) described potentially hazardous circumstances, 24% (n=205) had an observable impact on care delivery, 4% (n=36) were a near miss, and 3% (n=22) were associated with patient harm, including three deaths (0.35%). Eleven events did not have a noticeable consequence (1%) and two were complaints (<1%). Amongst the events, 1606 separate contributing problems were identified. Of these, 92% were predominantly associated with technical rather than human factors. Problems involving human factors were four times as likely to result in patient harm as technical problems (25% versus 8%; OR 3.98, 95% CI 1.90-8.34). Large-scale events affecting 10 or more individuals or multiple IT systems accounted for 23% (n=191) of the sample and were significantly more likely to result in a near miss (6% versus 4%) or impact the delivery of care (39% versus 20%; p<0.001). Events associated with NPfIT reinforce that the use of IT does create hazardous circumstances and can lead to patient harm or death. Large-scale patient safety events have the potential to affect many patients and clinicians, and this suggests that addressing them should be a priority for all major IT implementations.
Publisher: Informa UK Limited
Date: 1991
DOI: 10.3109/14639239109067655
Abstract: A user and dialogue modelling approach is proposed for the development of user interfaces for intelligent patient monitoring systems. Illustrative models and dialogues are developed and simple ex les of user interfaces for a monitor system based upon these are presented. The user model and dialogue method is also used to evaluate some interface techniques from the literature.
Publisher: Georg Thieme Verlag KG
Date: 2011
DOI: 10.3414/ME11-02-0003
Abstract: Objective: To examine the problem of studying interruption in healthcare. Methods: Review of the interruption literature from psychology and human-computer interaction; experimental studies of electronic prescribing and error behaviour; observational studies in emergency and intensive care. Results: Primary task and interruption variables which contribute to the outcomes of an interruption include the type of task (primary and interrupting task), point of interruption, duration of interruption, similarity of the interrupting task to the primary task, modality of interruption, environmental cues, and interruption handling strategy. Effects of interruption on task performance can be examined by measuring errors, the time on task, interruption lag and resumption lag. Conclusions: Interruptions are a complex phenomenon where multiple variables, including the characteristics of primary tasks, the interruptions themselves, and the environment, may influence patient safety and work-flow outcomes. Observational studies present significant challenges for recording many of the process variables that influence the effects of interruptions. Controlled experiments provide an opportunity to examine the specific effects of variables on errors and efficiency. Computational models can be used to identify the situations in which interruptions to clinical tasks could be disruptive and to investigate the aggregate effects of interruptions.
Publisher: Oxford University Press (OUP)
Date: 21-11-2004
DOI: 10.1197/JAMIA.M1471
Publisher: Elsevier BV
Date: 05-2013
DOI: 10.1016/J.IJMEDINF.2012.12.002
Abstract: Implementation of efficient, universally applied, computer to computer communications is a high priority for many national health systems. As a consequence, much effort has been channelled into finding ways in which a patient's previous medical history can be made accessible when needed. A number of countries have attempted to share patients' records, with varying degrees of success. While most efforts to create record-sharing architectures have relied upon government-provided strategy and funding, New Zealand has taken a different approach. Like most British Commonwealth nations, New Zealand has a 'hybrid' publicly/privately funded health system. However its information technology infrastructure and automation has largely been developed by the private sector, working closely with regional and central government agencies. Currently the sector is focused on finding ways in which patient records can be shared amongst providers across three different regions. New Zealand's healthcare IT model combines government contributed funding, core infrastructure, facilitation and leadership with private sector investment and skills and is being delivered via a set of controlled experiments. The net result is a 'Middle Out' approach to healthcare automation. 'Middle Out' relies upon having a clear, well-articulated health-reform strategy and a determination by both public and private sector organisations to implement useful healthcare IT solutions by working closely together.
Publisher: SAGE Publications
Date: 12-05-2022
Publisher: Elsevier BV
Date: 10-2019
DOI: 10.1016/J.JBI.2019.103288
Abstract: Bluetooth low energy (BLE) beacons have been used to track the locations of individuals in indoor environments for clinical applications such as workflow analysis and infectious disease modelling. Most current approaches use the received signal strength indicator (RSSI) to track locations. When using the RSSI to track indoor locations, devices need to be calibrated to account for complex interference patterns, which is a laborious process. Our aim was to investigate an alternative method for indoor location tracking of a moving user using BLE beacons in dynamic indoor environments. We developed a new method based on the received number of signals indicator (RNSI) and compared it to a standard RSSI-based method for predicting a user's location. Experiments were performed in an office environment and a tertiary hospital. Both RNSI and RSSI were compared at various distances from BLE beacons. In moving user experiments, a user wearing a beacon walked from one location to another based on a pre-defined route. Performance in predicting user locations was measured based on accuracy. RNSI values decreased more substantially with distance from the BLE beacon than RSSI values did. Moving user experiments in the office environment demonstrated that the RNSI-based method produced higher accuracy (80.0%) than the RSSI-based method (76.2%). In the hospital, where the environment may introduce signal quality problems due to increased signal interference, the RNSI-based method still outperformed (83.3%) the RSSI-based method (51.9%). Our results suggest that the RNSI-based method could be useful to track the locations of a moving user without involving complex calibration, especially when deploying within a new environment. RNSI has the potential to be used together with other methods in more robust indoor positioning systems.
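One plausible reading of the RNSI idea above is counting, per time window, how many of the wearer's advertisement packets each fixed receiver hears, then predicting the location with the highest count. The sketch below follows that reading; the receiver names and packet counts are hypothetical, and the study's actual pipeline may differ.

```python
from collections import Counter

def predict_location(heard_by):
    """Predict the beacon wearer's location as the fixed receiver that
    heard the most of their advertisement packets in the current time
    window -- the received-number-of-signals (RNSI) idea, which avoids
    per-environment RSSI calibration. One list entry per received packet."""
    receiver, _ = Counter(heard_by).most_common(1)[0]
    return receiver

# Hypothetical window: ward A's receiver hears the beacon most often.
window = ["ward_A"] * 14 + ["corridor"] * 5 + ["ward_B"] * 2
location = predict_location(window)
```

Because the count of received packets falls off with distance and obstruction, the nearest receiver tends to win the vote without any signal-strength threshold having to be tuned for the environment.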
Publisher: Mary Ann Liebert Inc
Date: 07-2023
Publisher: BMJ
Date: 13-05-2004
Publisher: Oxford University Press (OUP)
Date: 2011
Publisher: Springer Science and Business Media LLC
Date: 21-06-2022
DOI: 10.1186/S12875-022-01763-2
Abstract: Government-subsidised general practice management plans (GPMPs) facilitate chronic disease management; however, their impact on cardiovascular disease (CVD) is unknown. We aimed to determine the utilisation and impact of GPMPs for people with, or at elevated risk of, CVD. Secondary analysis of baseline data from the CONNECT randomised controlled trial linked to Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme (PBS) claims. Multivariate regression examined the association of GPMP receipt and review with: (1) ≥ 1 MBS-subsidised allied health visit in the previous 24 months; (2) adherence to dual cardioprotective medication (≥ 80% of days covered with a dispensed PBS prescription); and (3) meeting recommended LDL-cholesterol and blood pressure (BP) targets concurrently. Overall, 905 trial participants from 24 primary health care services consented to data linkage. Participants with a GPMP (46.6%, 422/905) were older (69.4 vs 66.0 years), had lower education (32.3% vs 24.7% high school or lower), lower household income (27.5% vs 17.0% in the lowest bracket), and more comorbidities, particularly diabetes (42.2% vs 17.6%), compared to those without a GPMP. After adjustment, a GPMP was strongly associated with allied health visits (odds ratio (OR) 14.80, 95% CI: 9.08–24.11) but not with higher medication adherence rates (OR 0.82, 95% CI: 0.52–1.29) nor with meeting combined LDL and BP targets (OR 1.31, 95% CI: 0.72–2.38). Minor differences in significant covariates were noted in models using GPMP review versus GPMP initiation. In people with or at elevated risk of CVD, GPMPs are under-utilised overall. They are targeting high-needs populations and facilitate allied health access, but are not associated with improved CVD risk management, which represents an opportunity for enhancing their value in supporting guideline-recommended care.
Publisher: Elsevier BV
Date: 2005
DOI: 10.1016/J.IJMEDINF.2004.10.003
Abstract: Clinicians have many unanswered questions during clinical encounters which may impact on the quality and outcomes of decisions made. Provision of online evidence at the point of care is one strategy that provides clinicians with easy access to up-to-date evidence in clinical settings to support evidence-based decision-making. To determine if and when general practitioners use an online evidence system in routine clinical practice, the types of questions for which clinicians seek evidence, and the extent to which the system provides clinically useful answers. A prospective cohort study involving a 4-week clinical trial of Quick Clinical, an online evidence system specifically designed around the needs of general practitioners. Two hundred and twenty-seven clinicians who had a computer with Internet access in their consulting rooms. Computer logs and survey analysis. One hundred and ninety-three general practitioners used the online evidence system to conduct on average 8.7 searches/month. The majority of searches (81%) were conducted from consulting rooms and carried out between 9 a.m. and 7 p.m. (83%). The most frequent searches related to diagnosis (40%) and treatment (35%). 83% of clinicians believed that Quick Clinical (QC) had the potential to improve patient care, and one in four users reported direct experience of improvements in care. In 73% of queries with clinician feedback, participants reported that they were able to find clinically useful information during their routine work. General practitioners will use an online evidence retrieval system in routine practice, and report that its use improves the quality of patient care.
Publisher: JMIR Publications Inc.
Date: 08-12-2020
DOI: 10.2196/19991
Abstract: Smartphone apps, fitness trackers, and online social networks have shown promise in weight management and physical activity interventions. However, there are knowledge gaps in identifying the most effective and engaging interventions and the intervention features preferred by their users. This 6-month pilot study of a social networking mobile app connected to wireless weight and activity tracking devices has 2 main aims: to evaluate changes in BMI, weight, and physical activity levels in users from different BMI categories and to assess user perspectives on the intervention, particularly on social comparison and automated self-monitoring and feedback features. This was a mixed methods study involving a one-arm, pre-post quasi-experimental pilot with postintervention interviews and focus groups. Healthy young adults used a social networking mobile app intervention integrated with wireless tracking devices (a weight scale and a physical activity tracker) for 6 months. Quantitative results were analyzed separately for 2 groups (underweight-normal and overweight-obese BMI) using t tests and Wilcoxon rank sum, Wilcoxon signed rank, and chi-square tests. Weekly BMI change in participants was explored using linear mixed effects analysis. Interviews and focus groups were analyzed inductively using thematic analysis. In total, 55 participants were recruited (mean age 23.6, SD 4.6 years; 28 women) and 45 returned for the final session (82% retention rate). There were no differences in BMI from baseline to postintervention (6 months) or between the 2 BMI groups. However, at 4 weeks, participants' BMI decreased by 0.34 kg/m2 (P<.001), with a loss of 0.86 kg/m2 in the overweight-obese group (P=.01). Participants in the overweight-obese group used the app significantly less than individuals in the underweight-normal BMI group, as they mentioned negative feelings and demotivation from social comparison, particularly from upward comparison with fitter people. Participants in the underweight-normal BMI group were avid users of the app's self-monitoring and feedback (P=.02) and social (P=.04) features compared with those in the overweight-obese group, and they significantly increased their daily step count over the 6-month study duration by an average of 2292 steps (95% CI 898-3370; P<.001). Most participants mentioned a desire for a more personalized intervention. This study shows the effects of the intervention on participants from higher and lower BMI groups and their different perspectives on the intervention, particularly its social features. Participants in the overweight-obese group did not sustain a short-term decrease in their BMI and mentioned negative emotions from app use, while participants in the underweight-normal BMI group used the app more frequently and significantly increased their daily step count. These differences highlight the importance of intervention personalization. Future research should explore the role of personalized features to help overcome personal barriers and better match individual preferences and needs.
Publisher: JMIR Publications Inc.
Date: 15-07-2021
DOI: 10.2196/25992
Abstract: The experiences of patients change throughout their illness trajectory and differ according to their medical history, but digital support tools are often designed for one specific moment in time and do not change with the patient as their health state changes. This presents a fragmented support pattern where patients have to move from one app to another as they move between health states, and some subpopulations of patients do not have their needs addressed at all. This study aims to investigate how patient work evolves over time for those living with type 2 diabetes mellitus and chronic multimorbidity, and to explore the implications for digital support system design. In total, 26 patients with type 2 diabetes mellitus and chronic multimorbidity were recruited. Each participant was interviewed twice, and interviews were transcribed and analyzed according to the Chronic Illness Trajectory Model. Four unique illness trajectories were identified, with different patient work goals and needs: living with stable chronic conditions involves patients seeking to make patient work as routinized and invisible as possible; dealing with cycles of acute or crisis episodes included heavily multimorbid patients who sought support with therapy adherence; responding to unstable changes described patients currently experiencing rapid health changes and increasing patient work intensity; and coming back from crisis focused on patients coping with a loss of normalcy. Patient work changes over time based on the experiences of the individual, and its timing and trajectory need to be considered when designing digital support interventions. RR2-10.1136/bmjopen-2018-022163
Publisher: Elsevier BV
Date: 12-1992
Publisher: Georg Thieme Verlag KG
Date: 2019
Abstract: Objective Clinicians using clinical decision support (CDS) to prescribe medications have an obligation to ensure that prescriptions are safe. One option is to verify the safety of prescriptions if there is uncertainty, for example, by using drug references. Supervisory control experiments in aviation and process control have associated errors with reduced verification arising from overreliance on decision support. However, it is unknown whether this relationship extends to clinical decision-making. Therefore, we examine whether there is a relationship between verification behaviors and prescribing errors, with and without CDS medication alerts, and whether task complexity mediates this. Methods A total of 120 students in the final 2 years of a medical degree prescribed medicines for patient scenarios using a simulated electronic prescribing system. CDS (correct, incorrect, and no CDS) and task complexity (low and high) were varied. Outcomes were omission errors (missed prescribing errors) and commission errors (accepted false-positive alerts). Verification measures were access of drug references and view time as a percentage of task time. Results Failure to access references for medicines with prescribing errors increased omission errors with no CDS (high-complexity: χ2(1) = 12.716, p < 0.001) and incorrect CDS (Fisher's exact, low-complexity: p = 0.002; high-complexity: p = 0.001). Failure to access references for false-positive alerts increased commission errors (low-complexity: χ2(1) = 16.673, p < 0.001; high-complexity: χ2(1) = 18.690, p < 0.001). Fewer participants accessed relevant references with incorrect CDS compared with no CDS (McNemar, low-complexity: p < 0.001; high-complexity: p < 0.001). Lower view time percentages increased omission (F(3, 361.914) = 4.498, p = 0.035) and commission errors (F(1, 346.223) = 2.712, p = 0.045). View time percentages were lower in CDS-assisted conditions compared with unassisted conditions (F(2, 335.743) = 10.443, p < 0.001). Discussion The presence of CDS reduced verification of prescription safety. When CDS was incorrect, reduced verification was associated with increased prescribing errors. Conclusion CDS can be incorrect, and verification provides one mechanism to detect errors. System designers need to facilitate verification without increasing workload or eliminating the benefits of correct CDS.
Publisher: Elsevier BV
Date: 09-2004
Publisher: Springer Science and Business Media LLC
Date: 2009
Publisher: Informa UK Limited
Date: 12-04-2019
Publisher: Oxford University Press (OUP)
Date: 09-2013
Publisher: Elsevier BV
Date: 07-2003
DOI: 10.1016/S1386-5056(03)00040-6
Abstract: To describe a model for analysing complex medical decision making tasks and for evaluating their suitability for automation. Assessment of a decision task's complexity in terms of the number of elementary information processes (EIPs) and the potential for cognitive effort reduction through EIP minimisation using an automated decision aid. The model consists of five steps: (1) selection of the domain and relevant tasks; (2) evaluation of the knowledge complexity of the tasks selected; (3) identification of cognitively demanding tasks; (4) assessment of unaided and aided effort requirements for task accomplishment; and (5) selection of computational tools to achieve this complexity reduction. The model is applied to the task of antibiotic prescribing in critical care, and the most complex components of the task are identified. Decision aids that support these components can provide a significant reduction of cognitive effort, suggesting this is a decision task worth automating. We view the role of decision support for complex decisions to be one of task complexity reduction; the model described allows for task automation without lowering decision quality and can assist decision support system developers.
Publisher: Oxford University Press (OUP)
Date: 11-08-2016
DOI: 10.1093/JAMIA/OCW105
Abstract: Introduction: While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. Objectives: This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. Data sources: EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Study selection: Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Methods: Tasks were identified and grouped. Task and automation type and the presence of multitasking were noted. Each task was rated for its verification complexity. Results: Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. Limitations: The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. Conclusion: AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears not to be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction.
Publisher: JMIR Publications Inc.
Date: 09-02-2020
DOI: 10.2196/15823
Abstract: Conversational agents (CAs) are systems that mimic human conversations using text or spoken language. Widely used examples include voice-activated systems such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The use of CAs in health care has been on the rise, but their potential safety risks often remain understudied. This study aimed to analyze how commonly available, general-purpose CAs on smartphones and smart speakers respond to health and lifestyle prompts (questions and open-ended statements) by examining their responses in terms of content and structure alike. We followed a piloted script to present health- and lifestyle-related prompts to 8 CAs. The CAs' responses were assessed for their appropriateness on the basis of the prompt type: responses to safety-critical prompts were deemed appropriate if they included a referral to a health professional or service, whereas responses to lifestyle prompts were deemed appropriate if they provided relevant information to address the problem prompted. The response structure was also examined according to information sources (Web search-based or precoded), response content style (informative and/or directive), confirmation of prompt recognition, and empathy. The 8 studied CAs provided in total 240 responses to 30 prompts. They collectively responded appropriately to 41% (46/112) of the safety-critical and 39% (37/96) of the lifestyle prompts. The proportion of appropriate responses deteriorated when safety-critical prompts were rephrased or when the agent used a voice-only interface. The appropriate responses included mostly directive content and empathy statements for the safety-critical prompts and a mix of informative and directive content for the lifestyle prompts. 
Our results suggest that the commonly available, general-purpose CAs on smartphones and smart speakers with unconstrained natural language interfaces are limited in their ability to advise on both the safety-critical health prompts and lifestyle prompts. Our study also identified some response structures the CAs employed to present their appropriate responses. Further investigation is needed to establish guidelines for designing suitable response structures for different prompt types.
Publisher: Elsevier BV
Date: 06-2014
DOI: 10.1016/J.JBI.2014.03.007
Abstract: Gene set enrichment analysis (GSEA) annotates gene microarray data with functional information from the biomedical literature to improve gene-disease association prediction. We hypothesize that supplementing GSEA with comprehensive gene function catalogs built automatically using information extracted from the scientific literature will significantly enhance GSEA prediction quality. Gold standard gene sets for breast cancer (BrCa) and colorectal cancer (CRC) were derived from the literature. Two gene function catalogs (CMeSH and CUMLS) were automatically generated: (1) by using Entrez Gene to associate all recorded human genes with PubMed article IDs, and (2) by associating the genes mentioned in each PubMed article with the article's MeSH terms (in CMeSH) and extracted UMLS concepts (in CUMLS). Microarray data from the Gene Expression Omnibus for BrCa and CRC was then annotated using CMeSH and CUMLS and, for comparison, also with several pre-existing catalogs (C2, C4 and C5 from the Molecular Signatures Database). Ranking was done using a standard GSEA implementation (GSEA-p). Gene function predictions for enriched array data were evaluated against the gold standard by measuring the area under the receiver operating characteristic curve (AUC). Comparison of rankings using the literature enrichment catalogs, the pre-existing catalogs, and five randomly generated catalogs shows the literature-derived enrichment catalogs are more effective. The AUC for BrCa using the unenriched gene expression dataset was 0.43, increasing to 0.89 after gene set enrichment with CUMLS. The AUC for CRC using the unenriched gene expression dataset was 0.54, increasing to 0.9 after enrichment with CMeSH. C2 increased AUC (BrCa 0.76, CRC 0.71) but C4 and C5 performed poorly (between 0.35 and 0.5). The randomly generated catalogs also performed poorly, equivalent to random guessing. Gene set enrichment significantly improved prediction of gene-disease association. Selection of enrichment catalog had a substantial effect on prediction accuracy. The literature-based catalogs performed better than the MSigDB catalogs, possibly because they are more recent. Catalogs generated automatically from the literature can be kept up to date. Prediction of gene-disease association is a fundamental task in biomedical research, and GSEA provides a promising method when using literature-based enrichment catalogs. The literature-based catalogs generated and used in this study are available from www2.chi.unsw.edu.au/literature-enrichment.
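The AUC evaluation used above, scoring a ranked gene list against a gold-standard set, can be computed directly from ranks via the Mann-Whitney formulation: the probability that a randomly chosen gold-standard gene is ranked above a randomly chosen non-gold-standard gene. The function below is an illustrative sketch; names and the list representation are assumptions, not from the study.

```python
def ranking_auc(ranked_genes, gold_standard):
    """AUC for a best-first ranked gene list against a gold-standard set,
    via the Mann-Whitney U statistic (no ties assumed)."""
    positives = [i for i, g in enumerate(ranked_genes) if g in gold_standard]
    n_pos = len(positives)
    n_neg = len(ranked_genes) - n_pos
    if n_pos == 0 or n_neg == 0:
        raise ValueError("need both positive and negative genes in the ranking")
    wins = 0
    for k, i in enumerate(positives):
        # The k-th positive sits at rank i, so i - k negatives precede it;
        # every negative ranked below it counts as a 'win'.
        negatives_above = i - k
        wins += n_neg - negatives_above
    return wins / (n_pos * n_neg)
```

A perfect ranking (all gold-standard genes first) yields 1.0, a reversed ranking 0.0, and random orderings about 0.5, matching the random-catalog baseline reported above.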
Publisher: JMIR Publications Inc.
Date: 06-05-2013
DOI: 10.2196/JMIR.2414
Publisher: Oxford University Press (OUP)
Date: 2009
DOI: 10.1197/JAMIA.M2557
Publisher: Oxford University Press (OUP)
Date: 05-2014
Publisher: BMJ
Date: 28-02-1998
Abstract: An exploratory study to identify patterns of communication behaviour among hospital-based healthcare workers. Non-participatory, qualitative observational study. British district general hospital. Eight doctors and two nurses. Communication behaviours resulted in an interruptive workplace, which seemed to contribute to inefficiency in work practice. Medical staff generated twice as many interruptions via telephone and paging systems as they received. Hypothesised causes for this level of interruption include a bias by staff towards interruptive communication methods, a tendency to seek information from colleagues in preference to printed materials, and poor provision of information in support of contacting individuals in specific roles. Staff were observed to infer the intention of messages based on insufficient information, and clinical teams demonstrated complex communication patterns, which could lead to inefficiency. The results suggest a number of improvements to processes or technologies. Staff may need instruction in the appropriate use of communication facilities. Further, excessive emphasis on information technology may be misguided, since much may be gained by supporting information exchange through communication technology. Voicemail and email with acknowledgment, mobile communication, improved support for role-based contact, and message screening may be beneficial in the hospital environment.
Publisher: Association for Computing Machinery (ACM)
Date: 06-2021
Abstract: The current agenda in health personalisation research mainly revolves around supporting lifestyle and wellbeing. Personalised recommendations for patients and consumers have been explored for areas like physical activity, food intake, mental support, and health information consumption. Strikingly little attention has been paid to personalised medical applications supporting clinical users. In this paper, we turn the spotlight on such medical use cases and the advantages personalised decision-support can bring. We discuss the differences between patient- and clinician-centric personalisation and highlight touch points, where personalised support might improve clinicians' decision-making.
Publisher: Springer Science and Business Media LLC
Date: 31-07-2022
DOI: 10.1007/S41999-022-00667-9
Abstract: To assess current evidence comparing the impact of available coronary interventions in frail patients aged 75 years or older with different subtypes of acute coronary syndrome (ACS) on health outcomes. Scopus, Embase and PubMed were systematically searched in May 2022 for studies comparing outcomes between coronary interventions in frail older patients with ACS. Studies were excluded if they provided no objective assessment of frailty during the index admission, under-represented patients aged 75 years or older, or included patients with non-ACS coronary disease without presenting results for the ACS subgroup. Following data extraction from the included studies, a qualitative synthesis of results was undertaken. Nine studies met all eligibility criteria. All eligible studies were observational. Substantial heterogeneity was observed across study designs regarding ACS subtypes included, frailty assessments used, coronary interventions compared, and outcomes studied. All studies were assessed to be at high risk of bias. Notably, adjustment for confounders was limited or not adequately reported in all studies. The comparative assessment suggested a possible efficacy signal for invasive treatment relative to conservative treatment but possibly at the risk of increased bleeding events. There is a paucity of evidence comparing health outcomes between different coronary interventions in frail patients aged 75 years or older with ACS. Available evidence is at high risk of bias. Given the growing importance of ACS in frail patients aged 75 years or older, new studies are needed to inform optimal ACS care for this population. Future studies should rigorously adjust for confounders.
Publisher: BMJ
Date: 11-2019
Publisher: Wiley
Date: 04-11-2001
DOI: 10.1046/J.1445-5994.2001.00102.X
Abstract: The analysis of factors that influence prescribing decisions is increasingly important. Antibiotic use is often based on limited evidence and lack of information about clinical decision-making processes is an important obstacle to improving antibiotic utilization. To compare the attitudes of intensive care unit practitioners (ICUP) and infectious disease practitioners (IDP) to antibiotic use and to the evidence-based information support. A postal survey conducted between March and July 2000 of ICUP and IDP representing all States and Territories in Australia. One hundred and fifty-three of 224 clinicians returned the questionnaire (68.3% response rate). In choosing an antibiotic, IDP placed significantly more weight than ICUP on the in vitro susceptibility of the pathogen (P = 0.001), antibiotic cost (P = 0.05) and possible development of antibiotic resistance (P = 0.007). More than 95% of both groups believed that unit-specific antibiotic susceptibility of endemic pathogens was an essential factor in rational prescribing, but only 68.5% of IDP and 38.7% of ICUP use microbiology laboratory databases. When in doubt about appropriate antibiotic use, 63.8% of ICUP seek and 76.3% usually follow the advice of IDP. Both groups agree that published antibiotic guidelines are useful, but IDP were more likely to consult them. ICUP were more likely to believe that guidelines are used to control clinicians rather than to improve quality of care (P = 0.001). A greater proportion of IDP (71.2%) than ICUP (52.5%) believed that antibiotic prescribing in their intensive care unit (ICU) was evidence based but most (91.8% and 86.9%, respectively) agreed that it should be. Australian clinicians have positive views about evidence-based prescribing and antibiotic guidelines. However, there are clinically significant differences in prescribing behaviour between ICUP and IDP. 
These may be explained by different disease spectra managed by each group or different cultures, training and/or cognitive styles. Improvements in the understanding of physicians' information and decision support needs are required to strengthen evidence-based prescribing.
Publisher: CRC Press
Date: 31-10-2003
DOI: 10.1201/B13618
Publisher: Wiley
Date: 17-03-2006
DOI: 10.1002/ASI.20377
Publisher: Oxford University Press (OUP)
Date: 18-04-2023
Abstract: To examine the real-world safety problems involving machine learning (ML)-enabled medical devices. We analyzed 266 safety events involving approved ML medical devices reported to the US FDA's MAUDE program between 2015 and October 2021. Events were reviewed against an existing framework for safety problems with health IT to identify whether a reported problem was due to the ML device (device problem) or its use, and the key contributors to the problem. Consequences of events were also classified. Events described hazards with the potential to harm (66%), actual harm (16%), consequences for healthcare delivery (9%), near misses that would have led to harm if not for intervention (4%), no harm or consequences (3%), and complaints (2%). While most events involved device problems (93%), use problems (7%) were 4 times more likely to harm (relative risk 4.2; 95% CI 2.5–7). Problems with data input to ML devices were the top contributor to events (82%). Much of what is known about ML safety comes from case studies and the theoretical limitations of ML. We contribute a systematic analysis of ML safety problems captured as part of the FDA's routine post-market surveillance. Most problems involved devices and concerned the acquisition of data for processing by algorithms. However, problems with the use of devices were more likely to harm. Safety problems with ML devices involve more than algorithms, highlighting the need for a whole-of-system approach to safe implementation with a special focus on how users interact with devices.
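A relative risk and its confidence interval of the kind reported above follow the standard 2x2-table calculation with a normal approximation on the log scale. This sketch uses invented counts, not the study's data:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk of harm for one exposure group vs another from a
    2x2 table: a harmed / b not harmed in group 1, c harmed / d not
    harmed in group 2. Returns (RR, lower, upper) for a 95% CI using
    the log-RR normal approximation (Katz method)."""
    p1 = a / (a + b)          # risk of harm in group 1
    p2 = c / (c + d)          # risk of harm in group 2
    rr = p1 / p2
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

For example, `relative_risk(8, 2, 20, 80)` gives a relative risk of 4.0 with a CI that excludes 1, the same shape of result as the 4.2 (2.5–7) estimate quoted in the abstract.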
Publisher: Oxford University Press (OUP)
Date: 05-2011
Publisher: BMJ
Date: 11-12-2013
DOI: 10.1136/BMJ.F7273
Publisher: Oxford University Press (OUP)
Date: 22-04-2019
DOI: 10.1093/JAMIA/OCZ046
Abstract: The objective of this study is to characterize the dynamic structure of primary care consultations by identifying typical activities and their inter-relationships to inform the design of automated approaches to clinical documentation using natural language processing and summarization methods. This is an observational study in Australian general practice involving 31 consultations with 4 primary care physicians. Consultations were audio-recorded, and computer interactions were recorded using screen capture. Physical interactions in consultation rooms were noted by observers. Brief interviews were conducted after consultations. Conversational transcripts were analyzed to identify different activities and their speech content as well as verbal cues signaling activity transitions. An activity transition analysis was then undertaken to generate a network of activities and transitions. Observed activity classes followed those described in well-known primary care consultation models. Activities were often fragmented across consultations, did not necessarily flow in a defined order, and the flow between activities was nonlinear. Modeling activities as a network revealed that discussing a patient's present complaint was the most central activity and was highly connected to medical history taking, physical examination, and assessment, forming a highly interrelated bundle. Family history, allergy, and investigation discussions were less connected, suggesting less dependency on other activities. Clear verbal signs were often identifiable at transitions between activities. Primary care consultations do not appear to follow a classic linear model of defined information-seeking activities; rather, they are fragmented, highly interdependent, and can be reactively triggered. The nonlinearity of activities has significant implications for the design of automated information capture. 
Whereas dictation systems generate literal translation of speech into text, speech-based clinical summary systems will need to link disparate information fragments, merge their content, and abstract coherent information summaries.
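The activity transition analysis described above reduces to counting first-order transitions between coded activities and ranking activities by how connected they are in the resulting network. A minimal sketch with hypothetical activity codes (not the study's coding scheme):

```python
from collections import Counter

def transition_network(consultations):
    """Build a directed activity-transition network from coded activity
    sequences (one list per consultation), counting how often one
    activity is immediately followed by a different one."""
    edges = Counter()
    for activities in consultations:
        for a, b in zip(activities, activities[1:]):
            if a != b:
                edges[(a, b)] += 1
    return edges

def most_connected(edges):
    """Activity with the highest total degree (in- plus out-edge weight),
    i.e. the most central activity in the transition network."""
    degree = Counter()
    for (a, b), weight in edges.items():
        degree[a] += weight
        degree[b] += weight
    return degree.most_common(1)[0][0]
```

On sequences where discussion of the present complaint is repeatedly entered and left, this kind of degree ranking is what surfaces it as the network's hub.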
Publisher: American College of Physicians
Date: 07-10-2014
DOI: 10.7326/M14-0933
Publisher: Elsevier BV
Date: 2015
DOI: 10.1016/J.JCLINEPI.2014.09.014
Abstract: To examine the use of supervised machine learning to identify biases in evidence selection and determine if citation information can predict favorable conclusions in reviews about neuraminidase inhibitors. Reviews of neuraminidase inhibitors published during January 2005 to May 2013 were identified by searching PubMed. In a blinded evaluation, the reviews were classified as favorable if investigators agreed that they supported the use of neuraminidase inhibitors for prophylaxis or treatment of influenza. Reference lists were used to identify all unique citations to primary articles. Three classification methods were tested for their ability to predict favorable conclusions using only citation information. Citations to 4,574 articles were identified in 152 reviews of neuraminidase inhibitors, and 93 (61%) of these reviews were graded as favorable. Primary articles describing drug resistance were among the citations that were underrepresented in favorable reviews. The most accurate classifier predicted favorable conclusions with 96.2% accuracy, using citations to only 24 of 4,574 articles. Favorable conclusions in reviews about neuraminidase inhibitors can be predicted using only information about the articles they cite. The approach highlights how evidence exclusion shapes conclusions in reviews and provides a method to evaluate citation practices in a corpus of reviews.
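One way to surface citations underrepresented in favorable reviews, in the spirit of the analysis above, is to compare per-article citation rates between the two review classes. The sketch below uses invented article IDs and is not the classifier tested in the study:

```python
def citation_rate_gap(reviews):
    """For each cited article, the difference between its citation rate
    in favorable reviews and in unfavorable reviews. Strongly negative
    gaps flag articles underrepresented in favorable reviews.
    `reviews` is a list of (citations: set, favorable: bool) pairs."""
    favorable = [cites for cites, fav in reviews if fav]
    unfavorable = [cites for cites, fav in reviews if not fav]
    articles = set().union(*(cites for cites, _fav in reviews))
    gaps = {}
    for article in articles:
        rate_fav = sum(article in cites for cites in favorable) / max(len(favorable), 1)
        rate_unfav = sum(article in cites for cites in unfavorable) / max(len(unfavorable), 1)
        gaps[article] = rate_fav - rate_unfav
    return gaps
```

Articles with the largest negative gaps (here, say, a hypothetical resistance study cited only by unfavorable reviews) are exactly the kind of features a classifier can exploit to predict a review's conclusion from its reference list alone.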
Publisher: Oxford University Press (OUP)
Date: 07-2008
DOI: 10.1197/JAMIA.M2663
Publisher: Springer Science and Business Media LLC
Date: 25-04-2018
Publisher: BMJ
Date: 22-05-2013
DOI: 10.1136/BMJ.F3007
Publisher: BMJ
Date: 28-11-1998
Publisher: Oxford University Press (OUP)
Date: 2012
Publisher: Springer Science and Business Media LLC
Date: 16-05-2018
Publisher: Oxford University Press (OUP)
Date: 11-07-2018
DOI: 10.1093/JAMIA/OCY072
Abstract: Our objective was to review the characteristics, current applications, and evaluation measures of conversational agents with unconstrained natural language input capabilities used for health-related purposes. We searched PubMed, Embase, CINAHL, PsycInfo, and ACM Digital using a predefined search strategy. Studies were included if they focused on consumers or healthcare professionals, involved a conversational agent using any unconstrained natural language input, and reported evaluation measures resulting from user interaction with the system. Studies were screened by independent reviewers and Cohen’s kappa measured inter-coder agreement. The database search retrieved 1513 citations; 17 articles (14 different conversational agents) met the inclusion criteria. Dialogue management strategies were mostly finite-state and frame-based (6 and 7 conversational agents, respectively); agent-based strategies were present in one type of system. Two studies were randomized controlled trials (RCTs), 1 was cross-sectional, and the remaining were quasi-experimental. Half of the conversational agents supported consumers with health tasks such as self-care. The only RCT evaluating the efficacy of a conversational agent found a significant effect in reducing depression symptoms (effect size d = 0.44, p = .04). Patient safety was rarely evaluated in the included studies. The use of conversational agents with unconstrained natural language input capabilities for health-related purposes is an emerging field of research, where the few published studies were mainly quasi-experimental and rarely evaluated efficacy or safety. Future studies would benefit from more robust experimental designs and standardized reporting. The protocol for this systematic review is registered at PROSPERO with the number CRD42017065917.
Publisher: BMJ
Date: 27-05-1995
Publisher: SAGE Publications
Date: 25-06-2018
Abstract: Determine the relationship between cognitive load (CL) and automation bias (AB). Clinical decision support (CDS) for electronic prescribing can improve safety but introduces the risk of AB, where reliance on CDS replaces vigilance in information seeking and processing. We hypothesized high CL generated by high task complexity would increase AB errors. One hundred twenty medical students prescribed medicines for clinical scenarios using a simulated e-prescribing system in a randomized controlled experiment. Quality of CDS (correct, incorrect, and no CDS) and task complexity (low and high) were varied. CL, omission errors (failure to detect prescribing errors), and commission errors (acceptance of false positive alerts) were measured. Increasing complexity from low to high significantly increased CL, F(1, 118) = 71.6, p < .001. CDS reduced CL in high-complexity conditions compared to no CDS, F(2, 117) = 4.72, p = .015. Participants who made omission errors in incorrect and no CDS conditions exhibited lower CL compared to those who did not, F(1, 636.49) = 3.79, p = .023. Results challenge the notion that AB is triggered by increasing task complexity and associated increases in CL. Omission errors were associated with lower CL, suggesting errors may stem from an insufficient allocation of cognitive resources. This is the first research to examine the relationship between CL and AB. Findings suggest designers and users of CDS systems need to be aware of the risks of AB. Interventions that increase user vigilance and engagement may be beneficial and deserve further investigation.
Publisher: Oxford University Press (OUP)
Date: 31-08-2023
Publisher: BMJ
Date: 30-01-2013
Publisher: JMIR Publications Inc.
Date: 25-09-2020
Abstract: COVID-19 has overwhelmed health systems worldwide. It is important to identify severe cases as early as possible, such that resources can be mobilized and treatment can be escalated. This study aims to develop a machine learning approach for automated severity assessment of COVID-19 based on clinical and imaging data. Clinical data—including demographics, signs, symptoms, comorbidities, and blood test results—and chest computed tomography scans of 346 patients from 2 hospitals in the Hubei Province, China, were used to develop machine learning models for automated severity assessment in diagnosed COVID-19 cases. We compared the predictive power of the clinical and imaging data from multiple machine learning models and further explored the use of four oversampling methods to address the imbalanced classification issue. Features with the highest predictive power were identified using the Shapley Additive Explanations framework. Imaging features had the strongest impact on the model output, while a combination of clinical and imaging features yielded the best performance overall. The identified predictive features were consistent with those reported previously. Although oversampling yielded mixed results, it achieved the best model performance in our study. Logistic regression models differentiating between mild and severe cases achieved the best performance for clinical features (area under the curve [AUC] 0.848; sensitivity 0.455; specificity 0.906), imaging features (AUC 0.926; sensitivity 0.818; specificity 0.901), and a combination of clinical and imaging features (AUC 0.950; sensitivity 0.764; specificity 0.919). The synthetic minority oversampling method further improved the performance of the model using combined features (AUC 0.960; sensitivity 0.845; specificity 0.929).
Clinical and imaging features can be used for automated severity assessment of COVID-19 and can potentially help triage patients with COVID-19 and prioritize care delivery to those at a higher risk of severe disease.
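The minority-class oversampling described in the abstract above can be illustrated with a minimal SMOTE-style sketch. The study compared four oversampling methods; this nearest-neighbour interpolation scheme is only one of them, and the feature vectors below are hypothetical:

```python
import math
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE-style oversampling: create synthetic minority-class
    points by interpolating between a random minority point and one of
    its k nearest minority-class neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: math.dist(base, p),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

# Hypothetical 2-feature minority-class points
minority = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]]
new_points = smote_sample(minority)  # 4 synthetic points
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority class's feature range, which is what lets oversampling rebalance the classes without inventing out-of-distribution cases.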
Publisher: Oxford University Press (OUP)
Date: 11-1996
DOI: 10.1136/JAMIA.1996.97084510
Abstract: The modern study of artificial intelligence in medicine (AIM) is 25 years old. Throughout this period, the field has attracted many of the best computer scientists, and their work represents a remarkable achievement. However, AIM has not been successful, if success is judged as making an impact on the practice of medicine. Much recent work in AIM has been focused inward, addressing problems that are at the crossroads of the parent disciplines of medicine and artificial intelligence. Now, AIM must move forward with the insights that it has gained and focus on finding solutions for problems at the heart of medical practice. The growing emphasis within medicine on evidence-based practice should provide the right environment for that change.
Publisher: Oxford University Press (OUP)
Date: 17-11-2015
DOI: 10.1093/JAMIA/OCV152
Abstract: Objective To review literature assessing the impact of speech recognition (SR) on clinical documentation. Methods Studies published prior to December 2014 reporting clinical documentation using SR were identified by searching Scopus, Compendex and Inspec, PubMed, and Google Scholar. Outcome variables analyzed included dictation and editing time, document turnaround time (TAT), SR accuracy, error rates per document, and economic benefit. Twenty-three articles met inclusion criteria from a pool of 441. Results Most studies compared SR to dictation and transcription (DT) in radiology, and heterogeneity across studies was high. Document editing time increased using SR compared to DT in four of six studies (+1876.47% to –16.50%). Dictation time similarly increased in three of five studies (+91.60% to –25.00%). TAT consistently improved using SR compared to DT (16.41% to 82.34%); across all studies the improvement was 0.90% per year. SR accuracy was reported in ten studies (88.90% to 96.00%) and appears to improve by 0.03% per year as the technology matures. Mean number of errors per report increased using SR (0.05 to 6.66) compared to DT (0.02 to 0.40). Economic benefits were poorly reported. Conclusions SR is steadily maturing and offers some advantages for clinical documentation. However, evidence supporting the use of SR is weak, and further investigation is required to assess the impact of SR on documentation error types, rates, and clinical outcomes.
Publisher: Elsevier BV
Date: 04-2010
DOI: 10.3109/00313021003631346
Abstract: To investigate the molecular epidemiology of tuberculosis, temporal and spatial distribution of Mycobacterium tuberculosis isolates and associations between genotypes and clinical characteristics, in a low prevalence population. A total of 930 M. tuberculosis isolates referred to the New South Wales (NSW, Australia) Mycobacterium Reference Laboratory in 2004-2006 were characterised by mycobacterial interspersed repetitive unit (MIRU) and spacer oligonucleotide (spoligo) typing. Associations between genotypes, patient age, disease site and drug resistance were explored and the predictive power of molecular typing was analysed using Bayesian Belief Networks. Among isolates from 855 NSW residents, there were 287 spoligotypes, 494 MIRU types and 643 unique spoligotype-MIRU type combinations. They formed 73 spoligotype, 104 MIRU type and 76 spoligo-MIRU clusters, most of which contained only two isolates. The majority (87.7%) of spoligotype clusters contained several MIRU profiles and 64.4% of MIRU clusters contained several spoligotypes. The three most common M. tuberculosis clades were Beijing (24.1%), East African Indian (11.8%) and Central Asian (6.5%); 6.9% and 0.7% of isolates were resistant to isoniazid and rifampicin, respectively. There was no evidence of association between genotype and drug resistance, but isoniazid resistance increased independently over time. Given the low rates of genotype clustering, statistical analysis of genotype-phenotype associations was limited. Potential associations were not confirmed by Bayesian classifiers. Spoligo and MIRU typing demonstrated low levels of M. tuberculosis clustering in NSW; temporal and spatial changes in M. tuberculosis genotypes reflected migration patterns to Australia. No statistically significant associations between M. tuberculosis genotypes and clinical phenotypes were detected.
Publisher: AMPCo
Date: 06-08-2019
DOI: 10.5694/MJA2.50294
Publisher: Elsevier BV
Date: 06-2010
Publisher: Oxford University Press (OUP)
Date: 2004
DOI: 10.1197/JAMIA.M1166
Publisher: BMJ
Date: 05-2022
DOI: 10.1136/BMJHCI-2022-100567
Abstract: To explore emergency department (ED) and urgent care (UC) clinicians’ perceptions of digital access to patients’ past medical history (PMH). An online survey compared anticipated and actual value of access to digital PMH. UTAUT2 (Unified Theory of Acceptance and Use of Technology 2) was used to assess technology acceptance. Quantitative data were analysed using Mann-Whitney U tests and qualitative data were analysed using a general inductive approach. 33 responses were received. 94% (16/17) of respondents with PMH access said they valued their PMH system and all respondents with no digital PMH access (100%; 16/16) said they believed access would be valuable. Both groups indicated a high level of technology acceptance across all UTAUT2 dimensions. Free-text responses suggested improvements such as increasing the number of patient records available, standardisation of information presentation, increased system reliability, expanded access to information and validation by authoritative/trusted sources. Non-PMH respondents’ expectations were closely matched with the benefits obtained by PMH respondents. High levels of technology acceptance indicated a strong willingness to adopt. Clinicians appeared clear about the improvements they would like for PMH content and access. Policy implications include the need to focus on higher levels of patient participation, and increasing the breadth and depth of information and processes to ensure patient record curation and stewardship. There appears to be strong clinician support for digital access to PMH in ED and UC; however, current systems appear to have many shortcomings.
Publisher: BMJ
Date: 04-2009
Publisher: IEEE
Date: 07-2020
Publisher: Oxford University Press (OUP)
Date: 12-1969
Publisher: JMIR Publications Inc.
Date: 11-0066
DOI: 10.2196/16656
Abstract: Having patients self-manage their health conditions is a widely promoted concept, but many patients struggle to practice it effectively. Moreover, few studies have analyzed the nature of work required from patients and how such work fits into the context of their daily life. This study aimed to review the characteristics of patient work in adult patients. Patient work refers to tasks that health conditions impose on patients (eg, taking medications) within a system of contextual factors. A systematic scoping review was conducted using narrative synthesis. Data were extracted from PubMed, Excerpta Medica database (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), and PsycINFO, including studies from August 2013 to August 2018. The included studies focused on adult patients and assessed one or more of the following: (1) physical health–related tasks, (2) cognitive health–related tasks, or (3) contextual factors affecting these tasks. Tasks were categorized according to the themes that emerged: (1) if the task is always visible to others or can be cognitive, (2) if the task must be conducted collaboratively or can be conducted alone, and (3) if the task was done with the purpose of creating resources. Contextual factors were grouped according to the level at which they exert influence (micro, meso, or macro) and where they fit in the patient work system (the macroergonomic layer of physical, social, and organizational factors; the mesoergonomic layer of household and community; and the microergonomic triad of person-task-tools). In total, 67 publications were included, with 58 original research articles and 9 review articles. A variety of patient work tasks were observed, ranging from physical and tangible tasks (such as taking medications and visiting health care professionals) to psychological and social tasks (such as creating coping strategies).
Patient work was affected by a range of contextual factors on the micro, meso, or macro levels. Our results indicate that most patient work was done alone, in private, and often imposing cognitive burden with low amounts of support. This review sought to provide insight into the work burden of health management from a patient perspective and how patient context influences such work. For many patients, health-related work is ever present, invisible, and overwhelming. When researchers and clinicians design and implement patient-facing interventions, it is important to understand how the extra work impacts one’s internal state and coping strategy, how such work fits into daily routines, and if these changes could be maintained in the long term.
Publisher: Springer Science and Business Media LLC
Date: 08-09-2009
Publisher: JMIR Publications Inc.
Date: 10-06-2015
DOI: 10.2196/JMIR.4343
Publisher: Routledge
Date: 22-12-2003
Publisher: BMJ
Date: 21-12-2021
DOI: 10.1136/BJSPORTS-2020-102892
Abstract: To determine the effectiveness of physical activity interventions involving mobile applications (apps) or trackers with automated and continuous self-monitoring and feedback. Systematic review and meta-analysis. PubMed and seven additional databases, from 2007 to 2020. Randomised controlled trials in adults (18–65 years old) without chronic illness, testing a mobile app or an activity tracker, with any comparison, where the main outcome was a physical activity measure. Independent screening was conducted. We conducted random effects meta-analysis and all effect sizes were transformed into standardised difference in means (SDM). We conducted exploratory metaregression with continuous and discrete moderators identified as statistically significant in subgroup analyses. Physical activity: daily step counts, min/week of moderate-to-vigorous physical activity, weekly days exercised, min/week of total physical activity, metabolic equivalents. Thirty-five studies met inclusion criteria and 28 were included in the meta-analysis (n=7454 participants, 28% women). The meta-analysis showed a small-to-moderate positive effect on physical activity measures (SDM 0.350, 95% CI 0.236 to 0.465, I² = 69%, τ² = 0.051) corresponding to 1850 steps per day (95% CI 1247 to 2457). Interventions including text-messaging and personalisation features were significantly more effective in subgroup analyses and metaregression. Interventions using apps or trackers seem to be effective in promoting physical activity. Longer studies are needed to assess the impact of different intervention components on long-term engagement and effectiveness.
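The random effects pooling reported in the abstract above can be sketched with the DerSimonian-Laird estimator; the per-study effect sizes and variances below are hypothetical, not the review's data:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis of standardised
    mean differences. Returns (pooled effect, 95% CI, tau^2)."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q, then the DL moment estimate of between-study variance
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical per-study SDMs and their variances
pooled, ci, tau2 = random_effects_pool(
    [0.60, 0.10, 0.50, 0.05], [0.01, 0.01, 0.02, 0.02]
)
```

A non-zero τ² widens the confidence interval relative to a fixed-effect model, which is how the review's heterogeneity (I² = 69%) is reflected in the pooled estimate's uncertainty.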
Publisher: Elsevier BV
Date: 11-2021
Publisher: Springer Science and Business Media LLC
Date: 24-08-2006
Publisher: SAGE Publications
Date: 2022
DOI: 10.1177/20552076221115017
Abstract: To investigate the feasibility of the be.well app and its personalization approach, which regularly considers users’ preferences, amongst university students. We conducted a mixed-methods, pre-post experiment, where participants used the app for 2 months. Eligibility criteria included: age 18–34 years; owning an iPhone with Internet access; and fluency in English. Usability was assessed by a validated questionnaire; engagement metrics were reported. Changes in physical activity were assessed by comparing the difference in daily step count between baseline and 2 months. Interviews were conducted to assess acceptability; thematic analysis was conducted. Twenty-three participants were enrolled in the study (mean age = 21.9 years, 71.4% women). The mean usability score was 5.6 ± 0.8 out of 7. The median daily engagement time was 2 minutes. Eighteen out of 23 participants used the app in the last month of the study. Qualitative data revealed that people liked the personalized activity suggestion feature as it was actionable and promoted user autonomy. Some users also expressed privacy concerns if they had to provide a lot of personal data to receive highly personalized features. Daily step count increased after 2 months of the intervention (median difference = 1953 steps/day, p = .001, 95% CI 782 to 3112). Incorporating users’ preferences in personalized advice provided by a physical activity app was considered feasible and acceptable, with preliminary support for its positive effects on daily step count. Future randomized studies with longer follow up are warranted to determine the effectiveness of personalized mobile apps in promoting physical activity.
Publisher: Springer International Publishing
Date: 2017
Publisher: Springer Science and Business Media LLC
Date: 19-09-2018
DOI: 10.1038/S41746-018-0054-0
Abstract: Social media data can be used with digital phenotyping tools to profile the attitudes, behaviours, and health outcomes of people. While there are a growing number of examples demonstrating the performance of digital phenotyping tools using social media data, little is known about their capacity to support the delivery of targeted and personalised behaviour change interventions to improve health. Similar tools are already used in marketing and politics, using individual profiling to manipulate purchasing and voting behaviours. The coupling of digital phenotyping tools and behaviour change interventions may play a more positive role in preventive medicine to improve health behaviours, but potential risks and unintended consequences may come from embedding behavioural interventions in social spaces.
Publisher: JMIR Publications Inc.
Date: 04-07-2019
Abstract: The personalization of conversational agents with natural language user interfaces is seeing increasing use in health care applications, shaping the content, structure, or purpose of the dialogue between humans and conversational agents. The goal of this systematic review was to understand the ways in which personalization has been used with conversational agents in health care and characterize the methods of its implementation. We searched PubMed, Embase, CINAHL, PsycInfo, and ACM Digital Library using a predefined search strategy. The studies were included if they: (1) were primary research studies that focused on consumers, caregivers, or health care professionals; (2) involved a conversational agent with an unconstrained natural language interface; (3) tested the system with human subjects; and (4) implemented personalization features. The search found 1958 publications. After abstract and full-text screening, 13 studies were included in the review. Common examples of personalized content included feedback, daily health reports, alerts, warnings, and recommendations. The personalization features were implemented without a theoretical framework of customization and with limited evaluation of their impact. While conversational agents with personalization features were reported to improve user satisfaction, user engagement, and dialogue quality, the role of personalization in improving health outcomes was not assessed directly. Most of the studies in our review implemented the personalization features without theoretical or evidence-based support for them and did not leverage the recent developments in other domains of personalization. Future research could incorporate personalization as a distinct design factor with a more careful consideration of its impact on health outcomes and its implications on patient safety, privacy, and decision-making.
Publisher: Oxford University Press
Date: 30-10-2013
Publisher: JMIR Publications Inc.
Date: 04-11-2019
DOI: 10.2196/14007
Abstract: Tools used to appraise the credibility of health information are time-consuming to apply and require context-specific expertise, limiting their use for quickly identifying and mitigating the spread of misinformation as it emerges. The aim of this study was to estimate the proportion of vaccine-related Twitter posts linked to Web pages of low credibility and measure the potential reach of those posts. Sampling from 143,003 unique vaccine-related Web pages shared on Twitter between January 2017 and March 2018, we used a 7-point checklist adapted from validated tools and guidelines to manually appraise the credibility of 474 Web pages. These were used to train several classifiers (random forests, support vector machines, and recurrent neural networks) using the text from a Web page to predict whether the information satisfies each of the 7 criteria. Estimating the credibility of all other Web pages, we used the follower network to estimate potential exposures relative to a credibility score defined by the 7-point checklist. The best-performing classifiers were able to distinguish between low, medium, and high credibility with an accuracy of 78% and labeled low-credibility Web pages with a precision of over 96%. Across the set of unique Web pages, 11.86% (16,961 of 143,003) were estimated as low credibility and they generated 9.34% (1.64 billion of 17.6 billion) of potential exposures. The 100 most popular links to low-credibility Web pages were each potentially seen by an estimated 2 million to 80 million Twitter users globally. The results indicate that although a small minority of low-credibility Web pages reach a large audience, low-credibility Web pages tend to reach fewer users than other Web pages overall and are more commonly shared within certain subpopulations. An automatic credibility appraisal tool may be useful for finding communities of users at higher risk of exposure to low-credibility vaccine communications.
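A minimal sketch of how a checklist-based credibility score can be mapped to the low/medium/high bands mentioned above; the criterion names and score thresholds here are illustrative assumptions, not the 7-point checklist used in the study:

```python
# Hypothetical criterion names; the study's actual checklist items were
# adapted from validated appraisal tools and are not reproduced here.
CRITERIA = (
    "states_author", "cites_sources", "discloses_conflicts",
    "shows_date", "balanced_claims", "contact_info", "editorial_review",
)

def credibility_band(satisfied):
    """Map the set of satisfied checklist criteria to a credibility band.
    The score-to-band thresholds are illustrative, not the study's."""
    score = sum(1 for c in CRITERIA if c in satisfied)
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"
```

In the study's pipeline, one classifier per criterion predicts whether a page satisfies it from the page text; a scoring rule like the one above then aggregates the per-criterion predictions into a single band.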
Publisher: Elsevier BV
Date: 02-2019
Publisher: Elsevier BV
Date: 12-2018
Publisher: Oxford University Press (OUP)
Date: 31-01-2005
DOI: 10.1197/JAMIA.M1717
Publisher: Elsevier BV
Date: 05-2019
DOI: 10.1016/J.JCLINEPI.2019.01.009
Abstract: To determine whether certain trial characteristics are associated with faster or more frequent inclusion in systematic reviews for drug interventions in type 2 diabetes. We examined trials included in systematic reviews published between January 1, 2007 and January 1, 2017. Primary outcomes were time between trial publication and first inclusion in a systematic review and frequency of inclusion in systematic reviews over the study period. Multivariable Cox proportional hazards and regression models quantified associations with funding source, number of participants, trial conclusion, and journal impact factor. Among 668 trials, the median time to inclusion was 76.1 weeks. Time to inclusion was shorter for trials with industry funding (hazard ratio [HR] 1.39; 95% confidence interval [CI] 1.13-1.71), more participants (HR 1.26; 95% CI 1.17-1.36), and published in higher impact factor journals (HR 1.28; 95% CI 1.14-1.45). The median frequency of inclusion was three. Frequency of inclusion was greater for trials with industry funding (relative risk [RR] 2.36; 95% CI 2.11-2.64), more participants (RR 1.51; 95% CI 1.47-1.55), positive conclusions (RR 1.89; 95% CI 1.68-2.13), and published in higher impact factor journals (RR 1.13; 95% CI 1.08-1.18). Certain trial characteristics are associated with faster or more frequent trial inclusion in systematic reviews of type 2 diabetes.
Publisher: JMIR Publications Inc.
Date: 28-03-2019
DOI: 10.2196/12181
Publisher: Public Library of Science (PLoS)
Date: 23-12-2013
Publisher: Oxford University Press (OUP)
Date: 22-01-2011
DOI: 10.1093/BIOINFORMATICS/BTR036
Abstract: Motivation: Larger than gene structures (LGS) are DNA segments that include at least one gene and often other segments such as inverted repeats and gene promoters. Mobile genetic elements (MGE) such as integrons are LGS that play an important role in horizontal gene transfer, primarily in Gram-negative organisms. Known LGS have a profound effect on organism virulence, antibiotic resistance and other properties of the organism due to the number of genes involved. Expert-compiled grammars have been shown to be an effective computational representation of LGS, well suited to automating annotation, and supporting de novo gene discovery. However, development of LGS grammars by experts is labour intensive and restricted to known LGS. Objectives: This study uses computational grammar inference methods to automate LGS discovery. We compare the ability of six algorithms to infer LGS grammars from DNA sequences annotated with genes and other short sequences. We compared the predictive power of learned grammars against an expert-developed grammar for gene cassette arrays found in Class 1, 2 and 3 integrons, which are modular LGS containing up to 9 of about 240 cassette types. Results: Using a Bayesian generalization algorithm, our inferred grammar was able to predict ≥95% of MGE structures in a corpus of 1760 sequences obtained from Genbank (F-score 75%). Even with 100% noise added to the training and test sets, we obtained an F-score of 68%, indicating that the method is robust and has the potential to predict de novo LGS structures when the underlying gene features are known. Availability: www2.chi.unsw.edu.au/attacca. Contact: guyt@unsw.edu.au
Publisher: Oxford University Press (OUP)
Date: 14-11-2022
Abstract: To summarize the research literature evaluating automated methods for early detection of safety problems with health information technology (HIT). We searched bibliographic databases including MEDLINE, ACM Digital, Embase, CINAHL Complete, PsycINFO, and Web of Science from January 2010 to June 2021 for studies evaluating the performance of automated methods to detect HIT problems. HIT problems were reviewed using an existing classification for safety concerns. Automated methods were categorized into rule-based, statistical, and machine learning methods, and their performance in detecting HIT problems was assessed. The review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews statement. Of the 45 studies identified, the majority (n = 27, 60%) focused on detecting use errors involving electronic health records and order entry systems. Machine learning (n = 22) and statistical modeling (n = 17) were the most common methods. Unsupervised learning was used to detect use errors in laboratory test results, prescriptions, and patient records, while supervised learning was used to detect technical errors arising from hardware or software issues. Statistical modeling was used to detect use errors, unauthorized access, and clinical decision support system malfunctions, while rule-based methods primarily focused on use errors. A wide variety of rule-based, statistical, and machine learning methods have been applied to automate the detection of safety problems with HIT. Many opportunities remain to systematically study their application and effectiveness in real-world settings.
Publisher: Georg Thieme Verlag KG
Date: 08-2016
DOI: 10.15265/IY-2016-014
Abstract: Introduction: The introduction of health information technology into clinical settings is associated with unintended negative consequences, some with the potential to lead to error and patient harm. As adoption rates soar, the impact of these hazards will increase. Objective: Over the last decade, unintended consequences have received great attention in the medical informatics literature, and this paper seeks to identify the major themes that have emerged. Results: Rich typologies of the causes of unintended consequences have been developed, along with a number of explanatory frameworks based on socio-technical systems theory. We however still have only limited data on the frequency and impact of these events, as most studies rely on data sets from incident reporting or patient chart reviews, rather than undertaking detailed observational studies. Such data are increasingly needed as more organizations implement health information technologies. When outcome studies have been done in different organizations, they reveal different outcomes for identical systems. From a theoretical perspective, recent advances in the emerging discipline of implementation science have much to offer in explaining the origin, and variability, of unintended consequences. Conclusion: The dynamic nature of health care service organizations, and the rapid development and adoption of health information technologies means that unintended consequences are unlikely to disappear, and we therefore must commit to developing robust systems to detect and manage them.
Publisher: Oxford University Press (OUP)
Date: 21-11-2004
DOI: 10.1197/JAMIA.M1385
Publisher: American Association for the Advancement of Science (AAAS)
Date: 02-05-2012
DOI: 10.1126/SCITRANSLMED.3003682
Abstract: The open-source software movement can serve as a model for a similar initiative in the clinical trial community.
Publisher: Public Library of Science (PLoS)
Date: 10-03-2010
Publisher: BMJ
Date: 04-2021
DOI: 10.1136/BMJHCI-2020-100301
Abstract: To examine how and to what extent medical devices using machine learning (ML) support clinician decision making. We searched for medical devices that were (1) approved by the US Food and Drug Administration (FDA) up to February 2020; (2) intended for use by clinicians; (3) in clinical tasks or decisions; and (4) used ML. Descriptive information about the clinical task, device task, device input and output, and ML method were extracted. The stage of human information processing automated by ML-based devices and level of autonomy were assessed. Of 137 candidates, 59 FDA approvals for 49 unique devices were included. Most approvals (n=51) were since 2018. Devices commonly assisted with diagnostic (n=35) and triage (n=10) tasks. Twenty-three devices were assistive, providing decision support but leaving clinicians to make important decisions including diagnosis. Twelve automated the provision of information (autonomous information), such as quantification of heart ejection fraction, while 14 automatically provided task decisions like triaging the reading of scans according to suspected findings of stroke (autonomous decisions). Stages of human information processing most automated by devices were information analysis (n=14), providing information as an input into clinician decision making, and decision selection (n=29), where devices provide a decision. Leveraging the benefits of ML algorithms to support clinicians while mitigating risks requires a solid relationship between clinician and ML-based devices. Such relationships must be carefully designed, considering how algorithms are embedded in devices, the tasks supported, information provided, and clinicians’ interactions with them.
Publisher: Oxford University Press (OUP)
Date: 23-06-2017
Publisher: Georg Thieme Verlag KG
Date: 08-2016
DOI: 10.15265/IY-2016-018
Abstract: Introduction: Anyone with knowledge of information systems has experienced frustration when it comes to system implementation or use. Unanticipated challenges arise frequently and unanticipated consequences may follow. Objective: Working from first principles, to understand why information technology (IT) is often challenging, identify which IT endeavors are more likely to succeed, and predict the best role that technology can play in different tasks and settings. Results: The fundamental purpose of IT is to enhance our ability to undertake tasks, supplying new information that changes what we decide and ultimately what occurs in the world. The value of this information (VOI) can be calculated at different stages of the decision-making process and will vary depending on how technology is used. We can imagine a task space that describes the relative benefits of task completion by humans or computers and that contains specific areas where humans or computers are superior. There is a third area where neither is strong and a final joint workspace where humans and computers working in partnership produce the best results. Conclusion: By understanding that information has value and that VOI can be quantified, we can make decisions about how best to support the work we do. Evaluation of the expected utility of task completion by humans or computers should allow us to decide whether solutions should depend on technology, humans, or a partnership between the two.
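The VOI framing in this abstract can be made concrete with a small worked example. The decision problem, utilities and test characteristics below are hypothetical illustrations, not taken from the paper; the sketch only shows that VOI is the gain in expected utility from acting on new information rather than on the prior alone.

```python
# Hypothetical worked example of value of information (VOI) for a binary
# treat/wait decision; all numbers are illustrative, not from the paper.

def eu_without_info(p_disease, u):
    # Best expected utility when acting on the prior alone.
    eu_treat = p_disease * u["treat_sick"] + (1 - p_disease) * u["treat_well"]
    eu_wait = p_disease * u["wait_sick"] + (1 - p_disease) * u["wait_well"]
    return max(eu_treat, eu_wait)

def eu_with_info(p_disease, sens, spec, u):
    # Observe a test result first, update by Bayes' rule, then act optimally.
    p_pos = sens * p_disease + (1 - spec) * (1 - p_disease)
    p_sick_given_pos = sens * p_disease / p_pos
    p_sick_given_neg = (1 - sens) * p_disease / (1 - p_pos)
    return (p_pos * eu_without_info(p_sick_given_pos, u)
            + (1 - p_pos) * eu_without_info(p_sick_given_neg, u))

# Illustrative utilities for each (action, state) pair.
u = {"treat_sick": 0.9, "treat_well": 0.6, "wait_sick": 0.1, "wait_well": 1.0}
voi = eu_with_info(0.3, 0.85, 0.95, u) - eu_without_info(0.3, u)  # ≈ 0.19
```

Because VOI is an expected-utility difference over optimal actions, it is never negative: at worst the information leaves the decision unchanged.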
Publisher: Springer Science and Business Media LLC
Date: 16-10-2018
DOI: 10.1038/S41746-018-0066-9
Abstract: Current generation electronic health records suffer a number of problems that make them inefficient and associated with poor clinical satisfaction. Digital scribes, or intelligent documentation support systems, take advantage of advances in speech recognition, natural language processing and artificial intelligence to automate the clinical documentation task currently conducted by humans. Whilst in their infancy, digital scribes are likely to evolve through three broad stages. Human-led systems task clinicians with creating documentation, but provide tools to make the task simpler and more effective, for example with dictation support, semantic checking and templates. Mixed-initiative systems are delegated part of the documentation task, converting the conversations in a clinical encounter into summaries suitable for the electronic record. Computer-led systems are delegated full control of documentation and only request human interaction when exceptions are encountered. Intelligent clinical environments permit such augmented clinical encounters to occur in a fully digitised space where the environment becomes the computer. Data from clinical instruments can be automatically transmitted, interpreted using AI and entered directly into the record. Digital scribes raise many issues for clinical practice, including new patient safety risks. Automation bias may see clinicians automatically accept scribe documents without checking. The electronic record also shifts from a human-created summary of events to potentially a full audio, video and sensor record of the clinical encounter. Promisingly, digital scribes offer a gateway into the clinical workflow for more advanced support for diagnostic, prognostic and therapeutic tasks.
Publisher: JMIR Publications Inc.
Date: 08-05-2020
Abstract: Smartphone apps, fitness trackers, and online social networks have shown promise in weight management and physical activity interventions. However, there are knowledge gaps in identifying the most effective and engaging interventions and the intervention features preferred by their users. This 6-month pilot study of a social networking mobile app connected to wireless weight and activity tracking devices had 2 main aims: to evaluate changes in BMI, weight, and physical activity levels in users from different BMI categories, and to assess user perspectives on the intervention, particularly on social comparison and automated self-monitoring and feedback features. This was a mixed methods study involving a one-arm, pre-post quasi-experimental pilot with postintervention interviews and focus groups. Healthy young adults used a social networking mobile app intervention integrated with wireless tracking devices (a weight scale and a physical activity tracker) for 6 months. Quantitative results were analyzed separately for 2 groups (underweight-normal and overweight-obese BMI) using t tests and Wilcoxon rank sum, Wilcoxon signed rank, and chi-square tests. Weekly BMI change in participants was explored using linear mixed effects analysis. Interviews and focus groups were analyzed inductively using thematic analysis. In total, 55 participants were recruited (mean age 23.6, SD 4.6 years; 28 women) and 45 returned for the final session (n=45, 82% retention rate). There were no differences in BMI from baseline to postintervention (6 months) or between the 2 BMI groups. However, at 4 weeks, participants’ BMI decreased by 0.34 kg/m² (P<.001), with a loss of 0.86 kg/m² in the overweight-obese group (P=.01).
Participants in the overweight-obese group used the app significantly less than individuals in the underweight-normal BMI group, as they mentioned negative feelings and demotivation from social comparison, particularly from upward comparison with fitter people. Participants in the underweight-normal BMI group were avid users of the app’s self-monitoring and feedback (P=.02) and social (P=.04) features compared with those in the overweight-obese group, and they significantly increased their daily step count over the 6-month study duration by an average of 2292 steps (95% CI 898-3370; P<.001). Most participants mentioned a desire for a more personalized intervention. This study shows the effects of the intervention on participants from higher and lower BMI groups and their different perspectives regarding the intervention, particularly with respect to its social features. Participants in the overweight-obese group did not sustain a short-term decrease in their BMI and mentioned negative emotions from app use, while participants in the underweight-normal BMI group used the app more frequently and significantly increased their daily step count. These differences highlight the importance of intervention personalization. Future research should explore the role of personalized features to help overcome personal barriers and better match individual preferences and needs.
Publisher: BMJ
Date: 25-10-2014
Publisher: Springer Science and Business Media LLC
Date: 21-05-2014
Publisher: Public Library of Science (PLoS)
Date: 04-04-2011
Publisher: Elsevier BV
Date: 09-2023
Publisher: JMIR Publications Inc.
Date: 18-03-2021
Abstract: Automatic severity assessment and progression prediction can facilitate admission, triage, and referral of COVID-19 patients. This study aims to explore the potential use of lung lesion features in the management of COVID-19, based on the assumption that lesion features may carry important diagnostic and prognostic information for quantifying infection severity and forecasting disease progression. A novel LesionEncoder framework is proposed to detect lesions in chest CT scans and to encode lesion features for automatic severity assessment and progression prediction. The LesionEncoder framework consists of a U-Net module for detecting lesions and extracting features from individual CT slices, and a recurrent neural network (RNN) module for learning the relationship between feature vectors and collectively classifying the sequence of feature vectors. Chest CT scans of two cohorts of COVID-19 patients from two hospitals in China were used for training and testing the proposed framework. When applied to assessing severity, this framework outperformed baseline methods, achieving a sensitivity of 0.818, specificity of 0.952, accuracy of 0.940, and AUC of 0.903. It also outperformed the other tested methods in disease progression prediction, with a sensitivity of 0.667, specificity of 0.838, accuracy of 0.829, and AUC of 0.736. The LesionEncoder framework demonstrates a strong potential for clinical application in current COVID-19 management, particularly in automatic severity assessment of COVID-19 patients. This framework also has potential for other lesion-focused medical image analyses. We performed a retrospective study in China. This multicentre study was approved by the institutional review board of the principal investigator’s hospital. Informed consent from patients was exempted due to the retrospective nature of this study.
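As a rough illustration of the two-stage design described above, the toy sketch below extracts one feature per CT slice and then classifies the scan from the slice sequence. The lesion-area fraction and threshold rule are hypothetical stand-ins for the learned U-Net features and the RNN sequence classifier.

```python
# Hedged sketch of a two-stage lesion-feature pipeline in the spirit of
# LesionEncoder: stage 1 extracts features per CT slice, stage 2 classifies
# the whole scan from the slice sequence. A lesion-area fraction stands in
# for the U-Net features, and mean pooling plus a cutoff stands in for the
# RNN; the real system learns both stages from data.

def slice_features(slice_mask):
    # Fraction of pixels flagged as lesion in a binary mask
    # (stand-in for learned per-slice features).
    flat = [px for row in slice_mask for px in row]
    return sum(flat) / len(flat)

def classify_scan(slice_masks, threshold=0.2):
    # Pool per-slice features over the sequence and apply a cutoff
    # (stand-in for the recurrent sequence classifier).
    feats = [slice_features(m) for m in slice_masks]
    mean_burden = sum(feats) / len(feats)
    return ("severe" if mean_burden >= threshold else "non-severe", mean_burden)

mild = [[[0, 0], [0, 0]], [[1, 0], [0, 0]]]    # lesion in 1 of 8 pixels
severe = [[[1, 1], [0, 1]], [[1, 0], [1, 1]]]  # lesion in 6 of 8 pixels
```

Calling `classify_scan(mild)` and `classify_scan(severe)` labels the toy scans "non-severe" and "severe" respectively; the learned version replaces both hand-coded stages with trained networks.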
Publisher: AMPCo
Date: 28-10-2020
DOI: 10.5694/MJA2.50821
Publisher: BMJ
Date: 06-01-1996
Publisher: Elsevier BV
Date: 03-2018
DOI: 10.1016/J.JCLINEPI.2017.12.007
Abstract: Trial registries can be used to measure reporting biases and support systematic reviews, but 45% of registrations do not provide a link to the article reporting on the trial. We evaluated the use of document similarity methods to identify unreported links between ClinicalTrials.gov and PubMed. We extracted terms and concepts from a data set of 72,469 ClinicalTrials.gov registrations and 276,307 PubMed articles and tested methods for ranking articles across 16,005 reported links and 90 manually identified unreported links. Performance was measured by the median rank of matching articles and the proportion of unreported links that could be found by screening ranked candidate articles in order. The best-performing concept-based representation produced a median rank of 3 (interquartile range [IQR] 1-21) for reported links and 3 (IQR 1-19) for the manually identified unreported links, and term-based representations produced a median rank of 2 (1-20) for reported links and 2 (IQR 1-12) in unreported links. The matching article was ranked first for 40% of registrations, and screening 50 candidate articles per registration identified 86% of the unreported links. Leveraging the growth in the corpus of reported links between ClinicalTrials.gov and PubMed, we found that document similarity methods can assist in the identification of unreported links between trial registrations and corresponding articles.
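A minimal sketch of the kind of document-similarity ranking the study evaluates, assuming simple bag-of-words TF-IDF vectors (the paper's term and concept representations are richer than this): candidate PubMed articles are ranked by cosine similarity to the text of a trial registration.

```python
# Hedged sketch: rank candidate articles for a trial registration by
# TF-IDF cosine similarity over tokenised, lowercased text.
import math
from collections import Counter

def tfidf_vectors(docs):
    # Build sparse TF-IDF vectors (dicts) over the shared vocabulary.
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_articles(registration, articles):
    # Return article indices ordered from most to least similar.
    vecs = tfidf_vectors([registration] + articles)
    reg, art = vecs[0], vecs[1:]
    return sorted(range(len(art)), key=lambda i: -cosine(reg, art[i]))

registration = ["aspirin", "randomised", "trial", "cardiovascular", "outcomes"]
articles = [
    ["influenza", "vaccine", "cohort", "study"],       # unrelated article
    ["aspirin", "trial", "cardiovascular", "events"],  # likely match
]
ranking = rank_articles(registration, articles)  # match ranked first
```

Screening the top of such a ranked list per registration is what the abstract's "screening 50 candidate articles" evaluation refers to.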
Publisher: Oxford University Press (OUP)
Date: 09-2012
Publisher: Elsevier BV
Date: 12-2022
Publisher: Oxford University Press (OUP)
Date: 2012
Publisher: JMIR Publications Inc.
Date: 23-11-2020
Abstract: The experiences of patients change throughout their illness trajectory and differ according to their medical history, but digital support tools are often designed for one specific moment in time and do not change with the patient as their health state changes. This presents a fragmented support pattern where patients have to move from one app to another as they move between health states, and some subpopulations of patients do not have their needs addressed at all. This study aims to investigate how patient work evolves over time for those living with type 2 diabetes mellitus and chronic multimorbidity, and to explore the implications for digital support system design. In total, 26 patients with type 2 diabetes mellitus and chronic multimorbidity were recruited. Each participant was interviewed twice, and interviews were transcribed and analyzed according to the Chronic Illness Trajectory Model. Four unique illness trajectories were identified, with different patient work goals and needs: living with stable chronic conditions involves patients seeking to make patient work as routinized and invisible as possible; dealing with cycles of acute or crisis episodes included heavily multimorbid patients who sought support with therapy adherence; responding to unstable changes described patients currently experiencing rapid health changes and increasing patient work intensity; and coming back from crisis focused on patients coping with a loss of normalcy. Patient work changes over time based on the experiences of the individual, and its timing and trajectory need to be considered when designing digital support interventions. R2-10.1136/bmjopen-2018-022163
Publisher: Elsevier BV
Date: 10-2005
Publisher: Oxford University Press (OUP)
Date: 05-2000
DOI: 10.1136/JAMIA.2000.0070277
Abstract: While largely ignored in informatics thinking, the clinical communication space accounts for the major part of the information flow in health care. Growing evidence indicates that errors in communication give rise to substantial clinical morbidity and mortality. This paper explores the implications of acknowledging the primacy of the communication space in informatics and explores some solutions to communication difficulties. It also examines whether understanding the dynamics of communication between human beings can also improve the way we design information systems in health care. Using the concept of common ground in conversation, proposals are suggested for modeling the common ground between a system and human users. Such models provide insights into when communication or computational systems are better suited to solving information problems.
Publisher: JMIR Publications Inc.
Date: 09-08-2019
Abstract: Conversational agents (CAs) are systems that mimic human conversations using text or spoken language. Widely used examples include voice-activated systems such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The use of CAs in health care has been on the rise, but concerns about their potential safety risks often remain understudied. This study aimed to analyze how commonly available, general-purpose CAs on smartphones and smart speakers respond to health and lifestyle prompts (questions and open-ended statements) by examining their responses in terms of both content and structure. We followed a piloted script to present health- and lifestyle-related prompts to 8 CAs. The CAs’ responses were assessed for their appropriateness on the basis of the prompt type: responses to safety-critical prompts were deemed appropriate if they included a referral to a health professional or service, whereas responses to lifestyle prompts were deemed appropriate if they provided relevant information to address the problem prompted. The response structure was also examined according to information sources (Web search–based or precoded), response content style (informative and/or directive), confirmation of prompt recognition, and empathy. The 8 studied CAs provided 240 responses in total to 30 prompts. They collectively responded appropriately to 41% (46/112) of the safety-critical and 39% (37/96) of the lifestyle prompts. The ratio of appropriate responses deteriorated when safety-critical prompts were rephrased or when the agent used a voice-only interface. The appropriate responses included mostly directive content and empathy statements for the safety-critical prompts and a mix of informative and directive content for the lifestyle prompts.
Our results suggest that the commonly available, general-purpose CAs on smartphones and smart speakers with unconstrained natural language interfaces are limited in their ability to advise on both safety-critical health prompts and lifestyle prompts. Our study also identified some response structures the CAs employed to present their appropriate responses. Further investigation is needed to establish guidelines for designing suitable response structures for different prompt types.
Publisher: Springer Science and Business Media LLC
Date: 14-10-2013
Publisher: BMJ
Date: 04-05-2002
Publisher: BMJ
Date: 15-03-2012
Publisher: BMJ
Date: 23-06-2011
DOI: 10.1136/BMJ.D3693
Publisher: JMIR Publications Inc.
Date: 19-03-2008
DOI: 10.2196/JMIR.974
Publisher: Springer Science and Business Media LLC
Date: 16-03-2017
Publisher: Wiley
Date: 03-03-2016
DOI: 10.1111/AJCO.12475
Abstract: Innovative e-health strategies are emerging to tailor and provide convenient, systematic and high-quality survivorship care for an expanding cancer survivor population. This pilot study tests the application of an e-health platform, "Healthy.me," in a breast cancer survivor cohort at Liverpool and Macarthur Cancer Therapy Centres, New South Wales, Australia. Fifty breast cancer patients were recruited to use the Healthy.me website, designed by the Centre of Health Informatics at the University of New South Wales, over a 4-month period. Telephone and online questionnaires at 1 and 4 months and face-to-face feedback at study completion were used to gather qualitative and quantitative data regarding the feasibility of Healthy.me. Healthy.me was reported to be a useful online resource by most users. Usage declined from 76% at 1 month to 48% at 4 months. Breast cancer survivors enjoyed a variety of tailored information regarding health and lifestyle issues. Positive aspects of Healthy.me were the convenient access to trusted information and the interaction with peers and healthcare professionals. Barriers contributing to the usage decline were a lack of patient time to re-access information, limited content updates and technical factors. This pilot study suggested the potential of an e-health strategy such as Healthy.me in addressing the needs of a growing breast cancer survivor population. Ongoing development of a more robust e-health resource and integration with primary care models is warranted.
Publisher: BMJ
Date: 05-11-2015
DOI: 10.1136/BMJQS-2015-004323
Abstract: To identify the categories of problems with information technology (IT) which affect patient safety in general practice. General practitioners (GPs) reported incidents online or by telephone between May 2012 and November 2013. Incidents were reviewed against an existing classification for problems associated with IT and the clinical process impacted. 87 GPs across Australia participated. Outcomes examined were the types of problems, their consequences and the clinical processes affected. GPs reported 90 incidents involving IT which had an observable impact on the delivery of care, including actual patient harm as well as near miss events. Practice systems and medications were the most affected clinical processes. Problems with IT disrupted clinical workflow, wasted time and caused frustration. Issues with user interfaces, routine updates to software packages and drug databases, and the migration of records from one package to another generated clinical errors that were unique to IT; some could affect many patients at once. Human factors issues gave rise to some errors that have always existed with paper records but are more likely to occur and cause harm with IT. Such errors were linked to slips in concentration, multitasking, distractions and interruptions. Problems with patient identification and hybrid records generated errors that were in principle no different to paper records. Problems associated with IT include perennial risks familiar from paper records, but also disruptions in workflow and hazards for patients unique to IT, occasionally affecting multiple patients. Surveillance for such hazards may have general utility, particularly in the context of migrating historical records to new systems and software updates to existing systems.
Publisher: Oxford University Press (OUP)
Date: 11-2007
DOI: 10.1197/JAMIA.M2462
Publisher: JMIR Publications Inc.
Date: 21-12-2018
DOI: 10.2196/11439
Publisher: JMIR Publications Inc.
Date: 08-05-2019
DOI: 10.2196/12881
Publisher: Springer Science and Business Media LLC
Date: 22-09-2017
Publisher: BMJ
Date: 29-07-2010
Abstract: To study the extent and execution of redundant processes during inpatient transfers to Radiology and their impact on errors during the transfer process; to explore the use of causal and reliability analyses for modelling error detection and redundancy in the transfer process; and to provide guidance on potential system improvements. A prospective observational study at a metropolitan teaching hospital. 101 patient transfers to Radiology were observed over a 6-month period, and errors in the patient transfer process were recorded. Fault Tree Analysis was used to model error paths and identify redundant steps. Reliability Analysis was used to quantify system reliability. 420 errors were noted, an average of four errors per transfer. No incidents of patient harm were recorded. Inadequate handover was the most common error (43.1%), followed by failure to perform patient identification checks (41.9%), patient inadequately prepared for transfer (7.4%), inadequate infection control precautions (2.9%), inadequate clinical escort (2.1%), inadequate transport vehicle (2.1%) and equipment failure (0.2%). Four redundant steps for communicating patients' infectious status were identified (reliability=0.07, 0.37, 0.26, 0.31). Collectively, these yielded a system reliability of 0.7. The low reliability of each individual step was due to its low rate of execution. Analysis of the transfer process revealed a number of redundancies that safeguard against transfer errors. However, they were relatively ineffective in preventing errors due to the poor compliance rate. Thus, the authors advocate increasing compliance with existing redundant processes as an improvement strategy before investing resources in new processes.
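The reported system reliability of 0.7 is consistent with treating the four redundant communication steps as parallel safeguards, where communication fails only if every independent step fails. A quick check of that standard parallel-redundancy formula (the independence assumption is ours, though it matches the numbers in the abstract):

```python
# Combine step reliabilities under the parallel-redundancy assumption:
# the system fails only if every independent redundant step fails.

def system_reliability(step_reliabilities):
    p_all_fail = 1.0
    for r in step_reliabilities:
        p_all_fail *= (1.0 - r)  # this step fails with probability 1 - r
    return 1.0 - p_all_fail

# The four step reliabilities reported in the abstract.
r_system = system_reliability([0.07, 0.37, 0.26, 0.31])  # ≈ 0.70
```

The calculation also shows why the authors stress compliance: raising any single step's execution rate increases the combined reliability more cheaply than adding a fifth redundant step.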
Publisher: JMIR Publications Inc.
Date: 08-11-2019
DOI: 10.2196/16323
Abstract: Although much effort is focused on improving the technical performance of artificial intelligence, there are compelling reasons to focus more on the implementation of this technology class to solve real-world problems. In this “last mile” of implementation lie many complex challenges that may make technically high-performing systems perform poorly. Instead of viewing artificial intelligence development as a linear path from algorithm development to eventual deployment, there are strong reasons to take a more agile approach, iteratively developing and testing artificial intelligence within the context in which it will finally be used.
Publisher: Elsevier BV
Date: 11-2021
DOI: 10.1016/J.JBI.2021.103921
Abstract: Anxiety disorders are common among youth, posing risks to physical and mental health development. Early screening can help identify such disorders and pave the way for preventative treatment. To this end, the Youth Online Diagnostic Assessment (YODA) tool was developed and deployed to predict youth disorders using online screening questionnaires filled in by parents. YODA facilitated the collection of several novel unique datasets of self-reported anxiety disorder symptoms. Since the data is self-reported and often noisy, feature selection needs to be performed on the raw data to improve accuracy. However, a single set of selected features may not be informative enough. Consequently, in this work we propose and evaluate a novel feature ensemble based Bayesian Neural Network (FE-BNN) that exploits an ensemble of features to improve the accuracy of disorder predictions. We evaluate the performance of FE-BNN on three disorder-specific datasets collected by YODA. Our method achieved AUCs of 0.8683, 0.8769 and 0.9091 for the prediction of Separation Anxiety Disorder, Generalized Anxiety Disorder and Social Anxiety Disorder, respectively. These results provide initial evidence that our method outperforms the original diagnostic scoring function of YODA and several other baseline methods for three anxiety disorders, which can practically help prioritize diagnostic interviews. Our promising results call for investigation of interpretable methods that maintain high predictive accuracy.
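A minimal sketch of the feature-ensemble idea, assuming hypothetical item weights and feature subsets (the paper's ensemble members are Bayesian neural networks, not the toy logistic models used here): each member scores its own feature subset, and the ensemble averages the member probabilities.

```python
# Hedged sketch of feature-ensemble prediction in the spirit of FE-BNN:
# several models, each using a different feature subset, have their
# predicted probabilities averaged. Weights and subsets are illustrative.
import math

def member_prob(responses, subset, weights, bias):
    # Toy ensemble member: logistic score over its own feature subset.
    z = bias + sum(weights[i] * responses[i] for i in subset)
    return 1.0 / (1.0 + math.exp(-z))

def ensemble_prob(responses, subsets, weights, bias=0.0):
    # Average the member probabilities across the feature subsets.
    probs = [member_prob(responses, s, weights, bias) for s in subsets]
    return sum(probs) / len(probs)

weights = [0.8, 0.5, 0.9, 0.4]      # hypothetical questionnaire item weights
subsets = [[0, 1], [1, 2], [0, 3]]  # hypothetical selected feature subsets
p_low = ensemble_prob([0, 0, 0, 0], subsets, weights)   # 0.5 at neutral input
p_high = ensemble_prob([1, 1, 1, 1], subsets, weights)  # higher symptom load
```

Averaging over multiple feature subsets is what lets the ensemble hedge against any single noisy feature-selection run, which is the motivation the abstract gives for FE-BNN.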
Publisher: Elsevier BV
Date: 10-2007
DOI: 10.1016/J.IJMEDINF.2006.06.009
Abstract: Information retrieval systems have the potential to improve patient care, but little is known about the variables which influence clinicians' uptake and use of systems in routine work. To determine which factors influenced use of an online evidence retrieval system. Computer logs and pre- and post-system survey analysis of a 4-week clinical trial of the Quick Clinical online evidence system involving 227 general practitioners across Australia. Online evidence use was not linked to general practice training or clinical experience, but female clinicians conducted more searches than their male counterparts (mean use=14.38 searches, S.D.=11.68 versus mean use=8.50 searches, S.D.=9.99; t=2.67, d.f.=157, P=0.008). Practice characteristics such as hours worked, type and geographic location of clinic were not associated with search activity. Information seeking was also not related to participants' perceived information needs, computer skills, training, or Internet connection speed. Clinicians who reported direct improvements in patient care as a result of system use had significantly higher rates of system use than other users (mean use=12.55 searches, S.D.=13.18 versus mean use=8.15 searches, S.D.=9.18; t=2.322, d.f.=154, P=0.022). Comparison of participants' views before and after the trial showed that post-trial clinicians expressed more positive views about searching for information during a consultation (χ²=27.40, d.f.=4, P≤0.001) and a significantly greater number reported seeking information between consultations as a result of having access to an online evidence system in their consulting rooms (χ²=9.818, d.f.=2, P=0.010). Clinicians' use of an online evidence system was directly related to their reported experiences of improvements in patient care. Post-trial, clinicians positively changed their views about having time to search for information and pursued more questions during clinic hours.
Publisher: Oxford University Press (OUP)
Date: 2012
Publisher: AMPCo
Date: 07-2012
DOI: 10.5694/MJA12.10510
Publisher: Oxford University Press (OUP)
Date: 11-01-2012
Start Date: 2004
End Date: 07-2008
Amount: $300,000.00
Funder: Australian Research Council
Start Date: 10-2006
End Date: 06-2009
Amount: $265,000.00
Funder: Australian Research Council
Start Date: 2003
End Date: 12-2011
Amount: $502,228.00
Funder: Australian Research Council
Start Date: 06-2007
End Date: 12-2010
Amount: $427,266.00
Funder: Australian Research Council
Start Date: 07-2006
End Date: 12-2009
Amount: $336,000.00
Funder: Australian Research Council
Start Date: 02-2004
End Date: 03-2005
Amount: $10,000.00
Funder: Australian Research Council