ORCID Profile
0000-0003-1023-3927
Current Organisations: Oxford University Hospitals NHS Trust, University of Oxford
Publisher: Wiley
Date: 12-10-2020
DOI: 10.1111/TME.12728
Publisher: BMJ
Date: 09-2022
DOI: 10.1136/BMJOPEN-2021-057614
Abstract: Most patients admitted to hospital recover with treatments that can be administered on the general ward. However, a small but important group deteriorate and require augmented organ support in areas with increased nursing-to-patient ratios. In observational studies evaluating this cohort, proxy outcomes such as unplanned intensive care unit admission, cardiac arrest and death are used. These outcome measures introduce subjectivity and variability, which in turn hinder the development and accuracy of the increasing number of electronic medical record (EMR)-linked digital tools designed to predict clinical deterioration. Here, we describe a protocol for developing a new outcome measure using mixed methods to address these limitations. We will undertake, firstly, a systematic literature review to identify existing generic, syndrome-specific and organ-specific definitions for clinically deteriorated, hospitalised adult patients; secondly, an international modified Delphi study to generate a shortlist of candidate definitions; and thirdly, a nominal group technique (NGT), using a trained facilitator, which will take a diverse group of stakeholders through a structured process to generate a consensus definition. The NGT process will be informed by the data generated from the first two stages. The definition(s) for the deteriorated ward patient will be readily extractable from the EMR. This study has ethics approval (reference 16399) from the Central Adelaide Local Health Network Human Research Ethics Committee. Results generated from this study will be disseminated through publication and presentation at national and international scientific meetings.
Publisher: Cold Spring Harbor Laboratory
Date: 04-10-2022
DOI: 10.1101/2022.10.03.22280649
Abstract: This study aims to explore the impact of COVID-19 vaccination on critical care by examining associations between vaccination and admission to critical care with COVID-19 during England’s Delta wave, by age group, dose, and over time. We used linked routinely-collected data to conduct a population cohort study of patients admitted to adult critical care in England for management of COVID-19 between 1 May and 15 December 2021. Included participants were the whole population of England aged 18 years or over (44.7 million), including 10,141 patients admitted to critical care with COVID-19. The intervention was vaccination with one, two, or a booster/three doses of any COVID-19 vaccine. Compared with unvaccinated patients, vaccinated patients were older (median 64 years for patients receiving two or more doses versus 50 years for unvaccinated), with higher levels of severe comorbidity (20.3% versus 3.9%) and immunocompromise (15.0% versus 2.3%). Compared with patients who were unvaccinated, those vaccinated with two doses had a relative risk reduction (RRR) of between 90.1% (patients aged 18–29, 95% CI, 86.8% to 92.7%) and 95.9% (patients aged 60–69, 95% CI, 95.5% to 96.2%). Waning was only observed for those aged 70+, for whom the RRR reduced from 97.3% (91.0% to 99.2%) to 86.7% (85.3% to 90.1%) between May and December but increased again to 98.3% (97.6% to 98.8%) with a booster/third dose. Important demographic and clinical differences exist between vaccinated and unvaccinated patients admitted to critical care with COVID-19. While not a causal analysis, our findings are consistent with a substantial and sustained impact of vaccination on reducing admissions to critical care during England’s Delta wave, with evidence of waning predominantly restricted to those aged 70+.
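The relative risk reductions quoted above follow standard arithmetic: RRR = 1 - (admission rate in the vaccinated / admission rate in the unvaccinated), expressed as a percentage. A minimal sketch with hypothetical counts (these numbers are illustrative assumptions, not the study's data):

```python
# Illustrative sketch only: relative risk reduction (RRR) for critical care
# admission, vaccinated vs unvaccinated. Counts and person-time are invented.
def relative_risk_reduction(events_vax, person_time_vax,
                            events_unvax, person_time_unvax):
    """RRR = 1 - (rate in vaccinated / rate in unvaccinated), as a percentage."""
    rate_vax = events_vax / person_time_vax
    rate_unvax = events_unvax / person_time_unvax
    return 100 * (1 - rate_vax / rate_unvax)

# Hypothetical numbers: 50 admissions per 1,000,000 person-days vaccinated
# vs 500 admissions per 500,000 person-days unvaccinated.
rrr = relative_risk_reduction(events_vax=50, person_time_vax=1_000_000,
                              events_unvax=500, person_time_unvax=500_000)
print(f"RRR = {rrr:.1f}%")  # RRR = 95.0%
```

The study's estimates additionally adjust for age, comorbidity and time, which this crude ratio does not.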
Publisher: Elsevier BV
Date: 2019
Publisher: BMJ
Date: 02-2021
DOI: 10.1136/BMJOQ-2020-001145
Abstract: Identifying how human factors affect clinical staff recognition and management of the deteriorating ward patient may inform process improvements. We systematically reviewed the literature to (1) identify how human factors affect ward care escalation, (2) identify gaps in the current literature and (3) critique the methodologies used. We undertook a Qualitative Evidence Synthesis of care escalation studies. We searched MEDLINE, EMBASE and CINAHL from inception to September 2019. We used the Critical Appraisal Skills Programme and the Grading of Recommendations Assessment, Development and Evaluation, Confidence in the Evidence from Reviews of Qualitative Research (GRADE-CERQual) tools to assess study quality. Our search identified 24 studies meeting the inclusion criteria. Confidence in findings was moderate (20 studies) to high (4 studies). In 16 studies, the ability to recognise changes in the patient's condition ('soft signals'), including skin colour/temperature, respiratory pattern, blood loss, personality change, patient complaint and fatigue, improved the ability to escalate patients. Soft signals were detected through patient assessment (looking/listening/feeling) and not Early Warning Scores (eight studies). In contrast, 13 studies found that high workload and low staffing levels reduced staff's ability to detect patient deterioration and escalate care. In eight studies, quantifiable evidence of deterioration (Early Warning Scores) facilitated escalation communication, particularly when referrer and referee were unfamiliar with each other. Conversely, escalating concerns about patients who did not trigger an Early Warning Score was challenging, though achieved by some clinical staff (three studies). Team decision-making facilitated clinical escalation (six studies). Early Warning Scores have clinical benefits but can sometimes impede escalation for patients not meeting the threshold. Staff use other factors (soft signals), not captured in Early Warning Scores, to escalate care. The literature supports strategies that improve the escalation process, such as good patient assessment skills. CRD42018104745.
Publisher: Springer Science and Business Media LLC
Date: 28-03-2017
Publisher: BMJ
Date: 03-2020
DOI: 10.1136/BMJOPEN-2019-034774
Abstract: The aim of this review is to summarise the latest evidence on efficacy and safety of treatments for new-onset atrial fibrillation (NOAF) in critical illness. Participants were critically ill adult patients who developed NOAF during admission. Primary outcomes were efficacy in achieving rate or rhythm control, as defined in each study. Secondary outcomes included mortality, stroke, bleeding and adverse events. We searched MEDLINE, EMBASE and Web of Knowledge on 11 March 2019 to identify randomised controlled trials (RCTs) and observational studies reporting treatment efficacy for NOAF in critically ill patients. Data were extracted, and quality assessment was performed using the Cochrane Risk of Bias Tool and an adapted Newcastle-Ottawa Scale. Of 1406 studies identified, 16 remained after full-text screening, including two RCTs. Study quality was generally low due to a lack of randomisation, absence of blinding and small cohorts. Amiodarone was the most commonly studied agent (10 studies), followed by beta-blockers (8), calcium channel blockers (6) and magnesium (3). Rates of successful rhythm control using amiodarone varied from 30.0% to 95.2%, beta-blockers from 31.8% to 92.3%, calcium channel blockers from 30.0% to 87.1% and magnesium from 55.2% to 77.8%. Adverse effects of treatment were rarely reported (five studies). The reported efficacy of beta-blockers, calcium channel blockers, magnesium and amiodarone for achieving rhythm control was highly varied. As there is currently significant variation in how NOAF is managed in critically ill patients, we recommend future research focus on comparing the efficacy and safety of amiodarone, beta-blockers and magnesium. Further research is needed to inform the decision surrounding anticoagulant use in this patient group.
Publisher: Springer Science and Business Media LLC
Date: 21-07-2021
DOI: 10.1186/S13054-021-03684-5
Abstract: New-onset atrial fibrillation (NOAF) in patients treated on an intensive care unit (ICU) is common and associated with significant morbidity and mortality. We undertook a systematic scoping review to summarise comparative evidence to inform NOAF management for patients admitted to ICU. We searched MEDLINE, EMBASE, CINAHL, Web of Science, OpenGrey, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, ISRCTN, ClinicalTrials.gov, EU Clinical Trials register, additional WHO ICTRP trial databases, and NIHR Clinical Trials Gateway in March 2019. We included studies evaluating treatment or prevention strategies for NOAF or acute anticoagulation in general medical, surgical or mixed adult ICUs. We extracted study details, population characteristics, intervention and comparator(s), methods addressing confounding, results, and recommendations for future research onto study-specific forms. Of 3,651 citations, 42 articles were eligible: 25 primary studies, 12 review articles and 5 surveys/opinion papers. Definitions of NOAF varied, ranging from NOAF lasting 30 s to NOAF lasting 24 h. Only one comparative study investigated effects of anticoagulation. Evidence from small RCTs suggests calcium channel blockers (CCBs) result in slower rhythm control than beta blockers (1 study), and more cardiovascular instability than amiodarone (1 study). Evidence from 4 non-randomised studies suggests beta blocker and amiodarone therapy may be equivalent with respect to rhythm control. Beta blockers may be associated with improved survival compared to amiodarone, CCBs, and digoxin, though supporting evidence is subject to confounding. Currently, the limited evidence does not support therapeutic anticoagulation during ICU admission. From the limited evidence available, beta blockers or amiodarone may be superior to CCBs as first-line therapy in undifferentiated patients in ICU. The limited evidence available does not support therapeutic anticoagulation for NOAF whilst patients are critically ill. Consensus definitions for NOAF, rate control and rhythm control are needed.
Publisher: Elsevier BV
Date: 02-2022
Publisher: National Institute for Health and Care Research
Date: 04-2022
Abstract: Data corrected and figures revised.
Publisher: BMJ
Date: 26-08-2021
DOI: 10.1136/BMJ.N1931
Abstract: To assess the association between covid-19 vaccines and risk of thrombocytopenia and thromboembolic events in England among adults. Self-controlled case series study using national data on covid-19 vaccination and hospital admissions. Patient level data were obtained for approximately 30 million people vaccinated in England between 1 December 2020 and 24 April 2021. Electronic health records were linked with death data from the Office for National Statistics, SARS-CoV-2 positive test data, and hospital admission data from the United Kingdom’s health service (NHS). 29 121 633 people were vaccinated with first doses (19 608 008 with Oxford-AstraZeneca (ChAdOx1 nCoV-19) and 9 513 625 with Pfizer-BioNTech (BNT162b2 mRNA)) and 1 758 095 people had a positive SARS-CoV-2 test. People aged ≥16 years who had first doses of the ChAdOx1 nCoV-19 or BNT162b2 mRNA vaccines and any outcome of interest were included in the study. The primary outcomes were hospital admission or death associated with thrombocytopenia, venous thromboembolism, and arterial thromboembolism within 28 days of three exposures: first dose of the ChAdOx1 nCoV-19 vaccine, first dose of the BNT162b2 mRNA vaccine, and a SARS-CoV-2 positive test. Secondary outcomes were subsets of the primary outcomes: cerebral venous sinus thrombosis (CVST), ischaemic stroke, myocardial infarction, and other rare arterial thrombotic events.
The study found increased risk of thrombocytopenia after ChAdOx1 nCoV-19 vaccination (incidence rate ratio 1.33, 95% confidence interval 1.19 to 1.47 at 8-14 days) and after a positive SARS-CoV-2 test (5.27, 4.34 to 6.40 at 8-14 days); increased risk of venous thromboembolism after ChAdOx1 nCoV-19 vaccination (1.10, 1.02 to 1.18 at 8-14 days) and after SARS-CoV-2 infection (13.86, 12.76 to 15.05 at 8-14 days); and increased risk of arterial thromboembolism after BNT162b2 mRNA vaccination (1.06, 1.01 to 1.10 at 15-21 days) and after SARS-CoV-2 infection (2.02, 1.82 to 2.24 at 15-21 days). Secondary analyses found increased risk of CVST after ChAdOx1 nCoV-19 vaccination (4.01, 2.08 to 7.71 at 8-14 days), after BNT162b2 mRNA vaccination (3.58, 1.39 to 9.27 at 15-21 days), and after a positive SARS-CoV-2 test; increased risk of ischaemic stroke after BNT162b2 mRNA vaccination (1.12, 1.04 to 1.20 at 15-21 days) and after a positive SARS-CoV-2 test; and increased risk of other rare arterial thrombotic events after ChAdOx1 nCoV-19 vaccination (1.21, 1.02 to 1.43 at 8-14 days) and after a positive SARS-CoV-2 test. Increased risks of haematological and vascular events that led to hospital admission or death were observed for short time intervals after first doses of the ChAdOx1 nCoV-19 and BNT162b2 mRNA vaccines. The risks of most of these events were substantially higher and more prolonged after SARS-CoV-2 infection than after vaccination in the same population.
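The incidence rate ratios above come from a self-controlled case series design, which compares each case's event rate in a post-exposure risk window with that same person's baseline rate. A minimal sketch of the underlying within-person comparison, with invented counts (the study itself fitted conditional Poisson regression models, not this crude ratio):

```python
# Illustrative sketch only: the within-person rate comparison at the heart of a
# self-controlled case series. Counts and window lengths below are invented.
def crude_incidence_rate_ratio(events_risk, days_risk,
                               events_baseline, days_baseline):
    """Event rate in the post-exposure risk window relative to the
    baseline (unexposed) window, pooled over the same cases."""
    rate_risk = events_risk / days_risk
    rate_baseline = events_baseline / days_baseline
    return rate_risk / rate_baseline

# Hypothetical counts: 40 events during a 7-day risk window vs 200 events
# across 140 baseline days for the same group of cases.
irr = crude_incidence_rate_ratio(40, 7, 200, 140)
print(f"crude IRR = {irr:.2f}")  # crude IRR = 4.00
```

Because each person acts as their own control, fixed confounders (sex, chronic disease) cancel out; the published models additionally adjust for calendar time.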
Publisher: Wiley
Date: 05-2023
DOI: 10.1111/NICC.12921
Publisher: Elsevier BV
Date: 06-2020
Publisher: Springer Science and Business Media LLC
Date: 05-2022
DOI: 10.1038/S41591-022-01772-9
Abstract: A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyze expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories. The final composition and wording of the guideline was determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of Delphi, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made of 28 subitems) and ten generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and replicability of their findings.
Publisher: Springer Science and Business Media LLC
Date: 15-05-2019
Publisher: Springer Science and Business Media LLC
Date: 12-08-2022
Publisher: Cold Spring Harbor Laboratory
Date: 08-11-2021
DOI: 10.1101/2021.11.01.21264875
Abstract: In the United Kingdom, hospital patients suffer preventable deaths ('failure to rescue') and delayed admission to the Intensive Care Unit because of poor illness recognition. This problem has consistently been identified in care reviews. Strategies to improve deteriorating ward patient care, such as early warning systems and specialist care teams (Critical Care Outreach or Rapid Response), have not reliably demonstrated reductions in patient deaths. Current research focuses on 'failure to rescue', but further reductions in patient deaths are possible by examining the care of unwell hospital patients who are rescued (successfully treated). Our primary objective is to develop a framework of care escalation success factors that can be developed into a complex intervention to reduce patient mortality and unnecessary admissions to the Intensive Care Unit (ICU). SUFFICE is a multicentre mixed-methods, exploratory sequential study examining rescue events in the acutely unwell ward patient in two National Health Service Trusts with Teaching Hospital status. The study will constitute four key phases. Firstly, we will observe ward care escalation events to generate a theoretical understanding of the process of rescue. Secondly, we will review care records from unwell ward patients in whom an ICU admission was avoided to identify care success factors. Thirdly, we will conduct staff interviews with expert doctors, nurses, and Allied Health Professionals to identify how rescue is achieved and further explore care escalation success factors identified in the first two study phases. The final phase involves integrating the study data to generate the theoretical basis for the framework of care escalation success factors. Ethical approval has been obtained through the Queen Square London Research and Ethics committee (REC Ref 20/HRA/3828 CAG-20CAG0106).
Study results will be of interest to critical care, nursing and medical professions and results will be disseminated at national and international conferences. ISRCTN 38850
Publisher: BMJ
Date: 09-2019
DOI: 10.1136/BMJOPEN-2019-032429
Abstract: Traditional early warning scores (EWSs) use vital sign derangements to detect clinical deterioration in patients treated on hospital wards. Combining vital signs with demographics and laboratory results improves EWS performance. We have developed the Hospital Alerting Via Electronic Noticeboard (HAVEN) system. HAVEN uses vital signs, as well as demographic, comorbidity and laboratory data from the electronic patient record, to quantify and rank the risk of unplanned admission to an intensive care unit (ICU) within 24 hours for all ward patients. The primary aim of this study is to find additional variables, potentially missed during development, which may improve HAVEN performance. These variables will be sought in the medical record of patients misclassified by the HAVEN risk score during testing. This will be a prospective, observational, cohort study conducted at the John Radcliffe Hospital, part of the Oxford University Hospitals NHS Foundation Trust in the UK. Each day during the study periods, we will document all highly ranked patients (ie, those with the highest risk for unplanned ICU admission) identified by the HAVEN system. After 48 hours, we will review the progress of the identified patients. Patients who were subsequently admitted to the ICU will be removed from the study (as they will have been correctly classified by HAVEN). Highly ranked patients not admitted to ICU will undergo a structured medical notes review. Additionally, at the end of the study periods, all patients who had an unplanned ICU admission but whom HAVEN failed to rank highly will have a structured medical notes review. The review will identify candidate variables, likely associated with unplanned ICU admission, not included in the HAVEN risk score. Approval has been granted for gathering the data used in this study from the South Central Oxford C Research Ethics Committee (16/SC/0264, 13 June 2016) and the Confidentiality Advisory Group (16/CAG/0066). 
Our study will use a clinical expert conducting a structured medical notes review to identify variables, associated with unplanned ICU admission, not included in the development of the HAVEN risk score. These variables will then be added to the risk score and evaluated for potential performance gain. To the best of our knowledge, this is the first study of this type. We anticipate that documenting the HAVEN development methods will assist other research groups developing similar technology. NCT12518261
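Traditional aggregate EWSs of the kind the HAVEN work builds on map each vital sign onto points via clinical bands and sum them. A minimal sketch with invented bands and weights (illustrative only; this is neither HAVEN's model nor any published score):

```python
# Illustrative sketch only: a generic aggregate early-warning-score lookup.
# The bands and point values below are invented for illustration.
def respiratory_rate_points(rr):
    if rr <= 8:  return 3   # severe bradypnoea
    if rr <= 11: return 1
    if rr <= 20: return 0   # normal range
    if rr <= 24: return 2
    return 3                # severe tachypnoea

def heart_rate_points(hr):
    if hr <= 40:  return 3  # severe bradycardia
    if hr <= 50:  return 1
    if hr <= 90:  return 0  # normal range
    if hr <= 110: return 1
    if hr <= 130: return 2
    return 3                # severe tachycardia

def aggregate_score(rr, hr):
    # Real systems also band blood pressure, temperature, SpO2,
    # consciousness level, etc.; higher totals trigger escalation.
    return respiratory_rate_points(rr) + heart_rate_points(hr)

print(aggregate_score(rr=22, hr=115))  # 4
```

HAVEN's contribution is to extend this vital-sign-only approach with demographic, comorbidity and laboratory data from the electronic patient record.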
Publisher: Wiley
Date: 15-09-2023
DOI: 10.1111/TME.12994
Publisher: Oxford University Press (OUP)
Date: 06-07-2022
Abstract: New-onset atrial fibrillation (NOAF) is common in patients treated on an intensive care unit (ICU), but the long-term impacts on patient outcomes are unclear. We compared national hospital and long-term outcomes of patients who developed NOAF in ICU with those who did not, before and after adjusting for comorbidities and ICU admission factors. Using the RISK-II database (Case Mix Programme national clinical audit of adult intensive care linked with Hospital Episode Statistics and mortality data), we conducted a retrospective cohort study of 4615 patients with NOAF and 27 690 matched controls admitted to 248 adult ICUs in England, from April 2009 to March 2016. We examined in-hospital mortality; hospital readmission with atrial fibrillation (AF), heart failure, and stroke up to 6 years post discharge; and mortality up to 8 years post discharge. Compared with controls, patients who developed NOAF in the ICU were at a higher risk of in-hospital mortality [unadjusted odds ratio (OR) 3.22, 95% confidence interval (CI) 3.02–3.44], only partially explained by patient demographics, comorbidities, and ICU admission factors (adjusted OR 1.50, 95% CI 1.38–1.63). They were also at a higher risk of subsequent hospitalization with AF [adjusted cause-specific hazard ratio (aCHR) 5.86, 95% CI 5.33–6.44], stroke (aCHR 1.47, 95% CI 1.12–1.93), and heart failure (aCHR 1.28, 95% CI 1.14–1.44) independent of pre-existing comorbidities. Patients who develop NOAF during an ICU admission are at a higher risk of in-hospital death and readmissions to hospital with AF, heart failure, and stroke than those who do not.
Publisher: Springer Science and Business Media LLC
Date: 05-09-2020
Publisher: Elsevier BV
Date: 04-2023
Publisher: National Institute for Health and Care Research
Date: 11-2021
DOI: 10.3310/HTA25710
Abstract: New-onset atrial fibrillation occurs in around 10% of adults treated in an intensive care unit. New-onset atrial fibrillation may lead to cardiovascular instability and thromboembolism, and has been independently associated with increased length of hospital stay and mortality. The long-term consequences are unclear. Current practice guidance is based on patients outside the intensive care unit; however, new-onset atrial fibrillation that develops while in an intensive care unit differs in its causes, and in the risks and clinical effectiveness of treatments. The lack of evidence on new-onset atrial fibrillation treatment or long-term outcomes in intensive care units means that practice varies. Identifying optimal treatment strategies and defining long-term outcomes are critical to improving care. In patients treated in an intensive care unit, the objectives were to (1) evaluate existing evidence for the clinical effectiveness and safety of pharmacological and non-pharmacological new-onset atrial fibrillation treatments, (2) compare the use and clinical effectiveness of pharmacological and non-pharmacological new-onset atrial fibrillation treatments, and (3) determine outcomes associated with new-onset atrial fibrillation. We undertook a scoping review that included studies of interventions for treatment or prevention of new-onset atrial fibrillation involving adults in general intensive care units. To investigate the long-term outcomes associated with new-onset atrial fibrillation, we carried out a retrospective cohort study using English national intensive care audit data linked to national hospital episode and outcome data. To analyse the clinical effectiveness of different new-onset atrial fibrillation treatments, we undertook a retrospective cohort study of two large intensive care unit databases in the USA and the UK.
Existing evidence was generally of low quality, with limited data suggesting that beta-blockers might be more effective than amiodarone for converting new-onset atrial fibrillation to sinus rhythm and for reducing mortality. Using linked audit data, we showed that patients developing new-onset atrial fibrillation have more comorbidities than those who do not. After controlling for these differences, patients with new-onset atrial fibrillation had substantially higher mortality in hospital and during the first 90 days after discharge (adjusted odds ratio 2.32, 95% confidence interval 2.16 to 2.48; adjusted hazard ratio 1.46, 95% confidence interval 1.26 to 1.70, respectively), and higher rates of subsequent hospitalisation with atrial fibrillation, stroke and heart failure (adjusted cause-specific hazard ratio 5.86, 95% confidence interval 5.33 to 6.44; adjusted cause-specific hazard ratio 1.47, 95% confidence interval 1.12 to 1.93; and adjusted cause-specific hazard ratio 1.28, 95% confidence interval 1.14 to 1.44, respectively), than patients who did not have new-onset atrial fibrillation. From intensive care unit data, we found that new-onset atrial fibrillation occurred in 952 out of 8367 (11.4%) UK and 1065 out of 18,559 (5.7%) US intensive care unit patients in our study. The median time to onset of new-onset atrial fibrillation in patients who received treatment was 40 hours, with a median duration of 14.4 hours. The clinical characteristics of patients developing new-onset atrial fibrillation were similar in both databases. New-onset atrial fibrillation was associated with significant average reductions in systolic blood pressure of 5 mmHg, despite significant increases in vasoactive medication (vasoactive-inotropic score increase of 2.3; p < 0.001).
After adjustment, intravenous beta-blockers were not more effective than amiodarone in achieving rate control (adjusted hazard ratio 1.14, 95% confidence interval 0.91 to 1.44) or rhythm control (adjusted hazard ratio 0.86, 95% confidence interval 0.67 to 1.11). Digoxin therapy was associated with a lower probability of achieving rate control (adjusted hazard ratio 0.52, 95% confidence interval 0.32 to 0.86) and calcium channel blocker therapy was associated with a lower probability of achieving rhythm control (adjusted hazard ratio 0.56, 95% confidence interval 0.39 to 0.79) than amiodarone. Findings were consistent across both the combined and the individual database analyses. Existing evidence for new-onset atrial fibrillation management in intensive care unit patients is limited. New-onset atrial fibrillation in these patients is common and is associated with significant short- and long-term complications. Beta-blockers and amiodarone appear to be similarly effective in achieving cardiovascular control, but digoxin and calcium channel blockers appear to be inferior. Our findings suggest that a randomised controlled trial of amiodarone and beta-blockers for management of new-onset atrial fibrillation in critically ill patients should be undertaken. Studies should also be undertaken to provide evidence for or against anticoagulation for patients who develop new-onset atrial fibrillation in intensive care units. Finally, given that readmission with heart failure and thromboembolism increases following an episode of new-onset atrial fibrillation while in an intensive care unit, a prospective cohort study to demonstrate the incidence of atrial fibrillation and/or left ventricular dysfunction at hospital discharge and at 3 months following the development of new-onset atrial fibrillation should be undertaken. Current Controlled Trials ISRCTN13252515.
This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment Vol. 25, No. 71. See the NIHR Journals Library website for further project information.
Publisher: Informa UK Limited
Date: 09-08-2018
DOI: 10.1080/10803548.2017.1336299
Abstract: Musculoskeletal symptoms related to using traditional computer workstations are common. Quantitative methods for measuring muscle stress and strain are needed to improve the ergonomics of workstations. We hypothesize that infrared thermography (IRT) is suited for this purpose. This hypothesis was evaluated by estimating muscle activity in upright and traditional working postures with IRT and surface electromyography (sEMG). IRT and sEMG measurements were conducted in 14 female participants in both working postures. First, measurements in the traditional posture were performed. Later, participants had 1 month to adjust to the upright working posture before repeating the measurements. IRT images were acquired before and after a full working day, with sEMG recordings being conducted throughout the measurement days. Participants evaluated their neck pain severity using neck disability index (NDI) questionnaires before the first and after the second measurement day. Spatial variation in upper back temperature was higher (p = 0.008) when working in the traditional posture, and the upright working posture reduced (p < 0.05) upper back muscle activity. The NDI was significantly lower (p = 0.003) after working in the upright posture. IRT was found suitable for evaluating muscle activity, and the upright working posture was found to reduce the NDI and muscle activity in the upper back.
Publisher: Elsevier BV
Date: 06-2019
Publisher: Elsevier BV
Date: 06-2019
Publisher: National Institute for Health and Care Research
Date: 02-2022
DOI: 10.3310/ZXHI9396
Abstract: In the UK, 10% of admissions to intensive care units receive continuous renal replacement therapy, with regional citrate anticoagulation replacing systemic heparin anticoagulation over the last decade. Regional citrate anticoagulation is now used in 50% of intensive care units, despite little evidence of safety or effectiveness. The aim of the Renal Replacement Anticoagulant Management study was to evaluate the clinical and health economic impacts of intensive care units moving from systemic heparin anticoagulation to regional citrate anticoagulation for continuous renal replacement therapy. This was an observational comparative effectiveness study. The setting was NHS adult general intensive care units in England and Wales. Participants were adults receiving continuous renal replacement therapy in an intensive care unit participating in the Intensive Care National Audit & Research Centre Case Mix Programme national clinical audit between 1 April 2009 and 31 March 2017. Exposure – continuous renal replacement therapy in an intensive care unit after completion of transition to regional citrate anticoagulation. Comparator – continuous renal replacement therapy in an intensive care unit before starting transition to regional citrate anticoagulation, or in a unit that had not transitioned. Primary effectiveness outcome – all-cause mortality at 90 days. Primary economic outcome – incremental net monetary benefit at 1 year. Secondary outcomes – mortality at hospital discharge, 30 days and 1 year; days of renal, cardiovascular and advanced respiratory support in the intensive care unit; length of stay in the intensive care unit and hospital; bleeding and thromboembolic events; prevalence of end-stage renal disease at 1 year; and estimated lifetime incremental net monetary benefit.
Individual patient data from the Intensive Care National Audit & Research Centre Case Mix Programme were linked with the UK Renal Registry, Hospital Episode Statistics (for England), Patient Episodes Data for Wales and Civil Registrations (Deaths) data sets, and combined with identified periods of systemic heparin anticoagulation and regional citrate anticoagulation (survey of intensive care units). Staff time and consumables were obtained from micro-costing. Continuous renal replacement therapy system failures were estimated from the Post-Intensive Care Risk-adjusted Alerting and Monitoring data set. EuroQol-3 Dimensions, three-level version, health-related quality of life was obtained from the Intensive Care Outcomes Network study. Out of the 188 (94.9%) units that responded to the survey, 182 (96.8%) use continuous renal replacement therapy. After linkage, data were available from 69,001 patients across 181 intensive care units (60,416 during periods of systemic heparin anticoagulation use and 8585 during regional citrate anticoagulation use). The change to regional citrate anticoagulation was not associated with a step change in 90-day mortality (odds ratio 0.98, 95% confidence interval 0.89 to 1.08). Secondary outcomes showed step increases in days of renal support (difference in means 0.53 days, 95% confidence interval 0.28 to 0.79 days), advanced cardiovascular support (difference in means 0.23 days, 95% confidence interval 0.09 to 0.38 days) and advanced respiratory support (difference in means 0.53 days, 95% CI 0.03 to 1.03 days), with a trend toward fewer bleeding episodes (odds ratio 0.90, 95% confidence interval 0.76 to 1.06) with transition to regional citrate anticoagulation. The micro-costing study indicated that regional citrate anticoagulation was more expensive and was associated with an estimated incremental net monetary loss (step change) of –£2376 (95% confidence interval –£3841 to –£911).
The estimated likelihood of cost-effectiveness at 1 year was less than 0.1%. Lack of patient-level treatment data means that the results represent average effects of changing to regional citrate anticoagulation in intensive care units. Administrative data are subject to variation in data quality over time, which may contribute to observed trends. The introduction of regional citrate anticoagulation has not improved outcomes for patients and is likely to have substantially increased costs. This study demonstrates the feasibility of evaluating effects of changes in practice using routinely collected data. Recommendations for future work: (1) prioritise other changes in clinical practice for evaluation; and (2) undertake methodological research to understand the potential implications of trends in data quality. This trial is registered as ClinicalTrials.gov NCT03545750. This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment Vol. 26, No. 13. See the NIHR Journals Library website for further project information.
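The economic endpoint reported above, incremental net monetary benefit, is conventionally computed as INMB = λ × ΔQALY − ΔCost, where λ is the willingness-to-pay threshold per quality-adjusted life-year. As a minimal sketch of that convention (the function and all figures below are hypothetical illustrations, not values or code from the study):

```python
def incremental_net_monetary_benefit(delta_qaly, delta_cost, wtp_threshold):
    """INMB = (willingness-to-pay threshold x QALY gain) - extra cost.

    A negative INMB means the new strategy is not cost-effective
    at the chosen threshold. All inputs are per patient.
    """
    return wtp_threshold * delta_qaly - delta_cost


# Hypothetical example: a strategy that adds 0.01 QALYs but costs
# 2,500 GBP more per patient, at a 20,000 GBP/QALY threshold.
inmb = incremental_net_monetary_benefit(delta_qaly=0.01,
                                        delta_cost=2500.0,
                                        wtp_threshold=20000.0)
print(f"INMB: {inmb:+.0f} GBP")  # negative value => net monetary loss
```

Under this formulation, a small health gain paired with a materially higher cost yields the kind of negative incremental net monetary benefit the study reports.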
Publisher: Elsevier BV
Date: 02-2018
Abstract: We aimed to quantify the degree to which students pre-gamed in New Zealand, using self-report and breathalysers. A total of 569 New Zealand undergraduate students were interviewed (men = 45.2%; first year = 81.4%) entering three university-run concerts. We asked participants to report how many drinks they had consumed, their self-reported intoxication and the duration of their pre-gaming session. We then recorded participants' Breath Alcohol Concentration (BrAC, µg/L) and the time they arrived at the event. The number of participants who reported consuming alcohol before the event was 504 (88.6%), and the number of standard drinks consumed was high (mean = 6.9; median = 6.0). A total of 237 (41.7%) participants could not have their BrAC recorded because they had consumed alcohol ≤10 minutes before the interview. The remaining 332 participants (57.3%) recorded a mean BrAC of 288.8 µg/L (median = 280.0 µg/L). Gender, off-campus accommodation, length of pre-gaming drinking session, and time of arrival at the event were all associated with increased pre-gaming. Conclusion and implications for public health: pre-gaming was the norm for students. Universities must take pre-gaming into account; policy implications include earlier start times of events and limiting students' access to alcohol prior to events.
Publisher: Wiley
Date: 18-03-2016
DOI: 10.1111/JAN.12959
Abstract: To identify the practical challenges encountered when using wearable monitors for patients discharged from the intensive care unit. Patients discharged from intensive care units are a high-risk group that might benefit from continuing observation using 'wearable' monitors to enable faster identification of physiological deterioration and facilitate timely clinical action. This area of technological innovation is of key interest to nurses who manage this group of patients. A prospective observational study. An observational study conducted in 2013-2014 used wearable monitors to record continuous observations for patients discharged from an intensive care unit to develop a predictive model of patients likely to deteriorate. Screening data for study eligibility and case report form data to assess monitor tolerance and comfort were collected daily and analysed using Microsoft Access. Patients (n = 2704) were discharged from an intensive care unit during the study; 208 consented to wearing the monitor. Of the 192 included in analysis, 130 (67·7%) removed the monitor before the trial finished. Reasons cited for removal included 'discomfort and irritation' 61 (31·8%) and 'feeling too unwell' 8 (4·2%). Five hundred seventeen patients were screened following adaptation of the wearable monitor. Despite design changes, 56 (10·8%) patients were unable to wear monitors for reasons related to their anatomy or condition. Of 124 patients approached, 65 (52·4%) refused participation. Work is needed to understand wireless monitor comfort and design for acutely unwell patients. Product design needs to develop further, so patients are catered for in flexibility of monitor placement and improved comfort for long-term wear.
Publisher: Elsevier BV
Date: 03-2022
Publisher: Springer Science and Business Media LLC
Date: 25-10-2021
DOI: 10.1038/S41591-021-01556-7
Abstract: Emerging reports of rare neurological complications associated with COVID-19 infection and vaccinations are leading to regulatory, clinical and public health concerns. We undertook a self-controlled case series study to investigate hospital admissions from neurological complications in the 28 days after a first dose of ChAdOx1nCoV-19 (n = 20,417,752) or BNT162b2 (n = 12,134,782), and after a SARS-CoV-2-positive test (n = 2,005,280). There was an increased risk of Guillain–Barré syndrome (incidence rate ratio (IRR) 2.90; 95% confidence interval (CI) 2.15–3.92 at 15–21 days after vaccination) and Bell’s palsy (IRR 1.29; 95% CI 1.08–1.56 at 15–21 days) with ChAdOx1nCoV-19. There was an increased risk of hemorrhagic stroke (IRR 1.38; 95% CI 1.12–1.71 at 15–21 days) with BNT162b2. An independent Scottish cohort provided further support for the association between ChAdOx1nCoV-19 and Guillain–Barré syndrome (IRR 2.32; 95% CI 1.08–5.02 at 1–28 days). There was a substantially higher risk of all neurological outcomes in the 28 days after a positive SARS-CoV-2 test, including Guillain–Barré syndrome (IRR 5.25; 95% CI 3.00–9.18). Overall, we estimated 38 excess cases of Guillain–Barré syndrome per 10 million people receiving ChAdOx1nCoV-19 and 145 excess cases per 10 million people after a positive SARS-CoV-2 test. In summary, although we find an increased risk of neurological complications in those who received COVID-19 vaccines, the risk of these complications is greater following a positive SARS-CoV-2 test.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Peter Watkinson.