ORCID Profile
0000-0001-7564-073X
Current Organisations: RACGP, Bond University, University of Queensland
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Neural, Evolutionary and Fuzzy Computation | Pattern Recognition and Data Mining | Information and Computing Sciences not elsewhere classified | Artificial Intelligence and Image Processing
Environmentally Sustainable Information and Communication Services not elsewhere classified | Social Ethics | Urban and Industrial Water Management
Publisher: BMJ
Date: 04-2014
Publisher: Springer Science and Business Media LLC
Date: 07-06-2016
Publisher: American Medical Association (AMA)
Date: 24-03-2010
Abstract: Theory and simulation suggest that randomized controlled trials (RCTs) stopped early for benefit (truncated RCTs) systematically overestimate treatment effects for the outcome that precipitated early stopping. To compare the treatment effect from truncated RCTs with that from meta-analyses of RCTs addressing the same question but not stopped early (nontruncated RCTs) and to explore factors associated with overestimates of effect. Search of MEDLINE, EMBASE, Current Contents, and full-text journal content databases to identify truncated RCTs up to January 2007; search of MEDLINE, Cochrane Database of Systematic Reviews, and Database of Abstracts of Reviews of Effects to identify systematic reviews from which individual RCTs were extracted up to January 2008. Selected studies were RCTs reported as having stopped early for benefit and matching nontruncated RCTs from systematic reviews. Independent reviewers with medical content expertise, working blinded to trial results, judged the eligibility of the nontruncated RCTs based on their similarity to the truncated RCTs. Reviewers with methodological expertise conducted data extraction independently. The analysis included 91 truncated RCTs asking 63 different questions and 424 matching nontruncated RCTs. The pooled ratio of relative risks in truncated RCTs vs matching nontruncated RCTs was 0.71 (95% confidence interval, 0.65-0.77). This difference was independent of the presence of a statistical stopping rule and the methodological quality of the studies as assessed by allocation concealment and blinding. Large differences in treatment effect size between truncated and nontruncated RCTs (ratio of relative risks <0.75) occurred with truncated RCTs having fewer than 500 events. In 39 of the 63 questions (62%), the pooled effects of the nontruncated RCTs failed to demonstrate significant benefit. Truncated RCTs were associated with greater effect sizes than RCTs not stopped early.
This difference was independent of the presence of statistical stopping rules and was greatest in smaller studies.
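The pooled estimate above is a ratio of relative risks (RRR) with its confidence interval formed on the log scale. A minimal sketch with hypothetical 2×2 counts (illustrative numbers, not the study's data) shows how a single truncated-vs-nontruncated comparison would be computed:

```python
import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    """Relative risk and the variance of its log (standard 2x2 formulas)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    var_log = (1 / events_tx - 1 / n_tx) + (1 / events_ctl - 1 / n_ctl)
    return rr, var_log

# Hypothetical counts for one truncated and one matching nontruncated trial
rr_trunc, v1 = relative_risk(20, 200, 40, 200)   # truncated trial: RR = 0.50
rr_full, v2 = relative_risk(35, 500, 50, 500)    # nontruncated trial: RR = 0.70

ratio = rr_trunc / rr_full                        # ratio of relative risks
se = math.sqrt(v1 + v2)                           # SE of the log-ratio
lo = math.exp(math.log(ratio) - 1.96 * se)
hi = math.exp(math.log(ratio) + 1.96 * se)
print(f"RRR = {ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RRR = 0.71
```

A ratio below 1 means the truncated trial shows a larger apparent benefit; the study's pooled RRR of 0.71 corresponds to truncated trials overestimating effects by roughly 29% on the relative-risk scale.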
Publisher: Public Library of Science (PLoS)
Date: 16-07-2020
Publisher: JMIR Publications Inc.
Date: 02-08-2020
Abstract: Timely and effective contact tracing is an essential public health measure for curbing the transmission of COVID-19. App-based contact tracing has the potential to optimize the resources of overstretched public health departments. However, its efficiency is dependent on widespread adoption. This study aimed to investigate the uptake of the Australian Government’s COVIDSafe app among Australians and examine the reasons why some Australians have not downloaded the app. An online national survey, with representative quotas for age and gender, was conducted between May 8 and May 11, 2020. Participants were excluded if they were a health care professional or had been tested for COVID-19. Of the 1802 potential participants contacted, 289 (16.0%) were excluded prior to completing the survey, 13 (0.7%) declined, and 1500 (83.2%) participated in the survey. Of the 1500 survey participants, 37.3% (n=560) had downloaded the COVIDSafe app, 18.7% (n=280) intended to do so, 27.7% (n=416) refused to do so, and 16.3% (n=244) were undecided. Equally proportioned reasons for not downloading the app included privacy (165/660, 25.0%) and technical concerns (159/660, 24.1%). Other reasons included the belief that social distancing was sufficient and the app was unnecessary (111/660, 16.8%), distrust in the government (73/660, 11.1%), and other miscellaneous responses (eg, apathy and following the decisions of others) (73/660, 11.1%). In addition, knowledge about COVIDSafe varied among participants, as some were confused about its purpose and capabilities. For the COVIDSafe app to be accepted by the public and used correctly, public health messages need to address the concerns of citizens, specifically privacy, data storage, and technical capabilities.
Understanding the specific barriers preventing the uptake of contact tracing apps provides the opportunity to design targeted communication strategies aimed at strengthening public health initiatives, such as downloading and correctly using contact tracing apps.
Publisher: CMA Joule Inc.
Date: 17-11-2015
DOI: 10.1503/CMAJ.140848
Publisher: Springer Science and Business Media LLC
Date: 09-05-2018
DOI: 10.1038/S41746-018-0021-9
Abstract: Mobile health apps aimed towards patients are an emerging field of mHealth. Their potential for improving self-management of chronic conditions is significant. Here, we propose a concept of “prescribable” mHealth apps, defined as apps that are currently available, proven effective, and preferably stand-alone, i.e., that do not require dedicated central servers and continuous monitoring by medical professionals. Our objectives were to conduct an overview of systematic reviews to identify such apps, assess the evidence of their effectiveness, and to determine the gaps and limitations in mHealth app research. We searched four databases from 2008 onwards and the Journal of Medical Internet Research for systematic reviews of randomized controlled trials (RCTs) of stand-alone health apps. We identified 6 systematic reviews including 23 RCTs evaluating 22 available apps that mostly addressed diabetes, mental health and obesity. Most trials were pilots with small sample sizes and of short duration. Risk of bias of the included reviews and trials was high. Eleven of the 23 trials showed a meaningful effect on health or surrogate outcomes attributable to apps. In conclusion, we identified only a small number of currently available stand-alone apps that have been evaluated in RCTs. The overall low quality of the evidence of effectiveness greatly limits the prescribability of health apps. mHealth apps need to be evaluated by more robust RCTs that report between-group differences before becoming prescribable. Systematic reviews should incorporate sensitivity analysis of trials with high risk of bias to better summarize the evidence, and should adhere to the relevant reporting guideline.
Publisher: Wiley
Date: 11-1990
Abstract: We present a technique, quality adjusted survival analysis, for the analysis of controlled trials where patients may experience several health states which differ in their quality of life. When the data are censored, a survival analysis of the quality adjusted life years achieved may involve informative censoring, and produce biased estimates. To overcome this, we partition the survival curve; the resulting areas, which represent the mean time in each state, are multiplied by utility weights to provide an unbiased estimate of (restricted) quality adjusted survival. If the appropriate weights are in doubt, the results are best presented as a threshold analysis over the utility weights, allowing individual recommendations to be read from a simple graph. The certainty of the conclusions can be presented as confidence bands on the threshold line. The techniques are illustrated with a re-analysis of a large three-arm trial of adjuvant chemoendocrine therapy for stage II breast cancer in postmenopausal women. This shows that if the value of time spent in toxicity is greater than the time spent in relapse, we can be 95 per cent confident that chemoendocrine therapy is the preferred option.
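The partitioned approach described above reduces to a weighted sum: mean time in each health state (the area under the corresponding segment of the survival curve) times that state's utility weight. A minimal sketch with hypothetical state durations and utilities (illustrative values, not the trial's):

```python
# Restricted quality-adjusted survival as a weighted sum of mean state durations.
# Hypothetical mean times (years, from partitioned survival curves) and utility
# weights on a 0-1 scale, where 1.0 = full health.
mean_time = {"toxicity": 0.5, "disease_free": 4.0, "relapse": 1.0}
utility = {"toxicity": 0.7, "disease_free": 1.0, "relapse": 0.5}

# Restricted quality-adjusted survival in quality-adjusted life years (QALYs)
qas = sum(mean_time[state] * utility[state] for state in mean_time)
print(f"Quality-adjusted survival: {qas:.2f} QALYs")  # 4.85 QALYs
```

A threshold analysis, as the abstract suggests, would repeat this sum over a range of utility weights and report the weight at which two treatment arms become equivalent.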
Publisher: Springer Science and Business Media LLC
Date: 28-04-2016
DOI: 10.1038/BJC.2016.90
Publisher: FapUNIFESP (SciELO)
Date: 19-12-2015
DOI: 10.1590/1516-3180.2013.8040011
Abstract: CONTEXT AND OBJECTIVE: The current paradigm of science is to accumulate as much research data as possible, with less thought given to navigation or synthesis of the resulting mass, which hampers locating and using the research. The aim here was to describe the number of randomized controlled trials (RCTs) and systematic reviews (SRs) focusing on exercise, and their journal sources, that have been indexed in PubMed over time. DESIGN AND SETTING: Descriptive study conducted at Bond University, Australia. METHOD: To find RCTs, a search was conducted in PubMed Clinical Queries, using the category "Therapy" and the Medical Subject Headings (MeSH) term "Exercise". To find SRs, a search was conducted in PubMed Clinical Queries, using the category "Therapy", the MeSH term "Exercise" and various methodological filters. RESULTS: Up until 2011, 9,354 RCTs about exercise were published in 1,250 journals and 1,262 SRs in 513 journals. Journals in the area of Sports Science published the greatest number of RCTs and journals categorized as belonging to the "Other health professions" area (for example, nursing or psychology) published the greatest number of SRs. The Cochrane Database of Systematic Reviews was the principal source for SRs, with 9.8% of the total, while the Journal of Strength and Conditioning Research and Medicine & Science in Sports & Exercise published 4.4% and 5.0% of the RCTs, respectively. CONCLUSIONS: The rapid growth and resulting scatter of RCTs and SRs on exercise presents challenges for locating and using this research. Solutions for this issue need to be considered.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 04-2009
Publisher: Elsevier BV
Date: 09-1995
DOI: 10.1016/S0002-9149(99)80133-7
Abstract: LIPID is a multicenter, double-blind, randomized, placebo-controlled trial comparing the effects of pravastatin, 40 mg/day, with placebo, given for > or = 5 years, in patients aged 31 to 75 years with a total cholesterol level at baseline of 4.0 to 7.0 mmol/L (155 to 270 mg/dl), and with a history of acute myocardial infarction (AMI) or hospitalization for unstable angina pectoris (UAP). Each group receives dietary advice according to National Heart Foundation guidelines. Individual care of each patient is otherwise left to the discretion of the patient's usual doctor. The study has a primary outcome of coronary mortality, and is designed to detect an 18% reduction with 80% power. From April 1990 to September 1992, 11,106 patients were registered, and following the run-in phase, 9,014 were randomized: 5,754 (64%) after a qualifying event of AMI and 3,260 (36%) after hospitalization for UAP. The randomized population includes relatively large numbers in subgroups not assessed reliably in earlier trials: 1,511 women, 3,516 patients aged > or = 65 years, 777 diabetics, and 3,829 patients with serum cholesterol < or = 5.5 mmol/L (213 mg/dl) at baseline. With a projected 700 fatal coronary events, the trial should be able to detect important reductions in coronary mortality and contribute substantially to prospective meta-analyses to detect effects on total mortality. The spectrum of patients being assessed will improve the reliability of evidence for the benefits and risks of cholesterol-lowering therapies in patients with lower cholesterol levels and in other important subgroups.
Publisher: Elsevier BV
Date: 11-2014
DOI: 10.1016/J.JCLINEPI.2014.07.004
Abstract: To develop a framework to identify and classify interactions within and among treatments and conditions and to test this framework with guidelines on chronic heart failure (CHF) and its frequent comorbidity. Text analysis of evidence-based clinical practice guidelines on CHF and 18 conditions co-occurring in ≥5% of CHF patients (2-4 guidelines per disease). We extracted data on interactions between CHF and comorbidity and key recommendations on diagnostic and therapeutic management. From a subset of data, we derived 13 subcategories within disease-disease (Di-Di-I), disease-drug (Di-D-I), drug-drug interactions (DDI) and synergistic treatments. We classified the interactions and tested the interrater reliability, refined the framework, and agreed on the matrix of interactions. We included 48 guidelines; two-thirds provided information about comorbidity. In total, we identified N = 247 interactions (on average, 14 per comorbidity): 68 were Di-Di-I, 115 were Di-D-I, 12 were DDI, and 52 were synergisms. All 18 comorbidities contributed at least one interaction. The interaction matrix provides a structure to present different types of interactions between an index disease and comorbidity. Guideline developers may consider the matrix to support clinical decision making in multimorbidity. Further research is needed to show its relevance to improve guidelines and health outcomes.
Publisher: American Medical Association (AMA)
Date: 27-02-2012
Publisher: Elsevier BV
Date: 06-2019
Publisher: BMJ
Date: 02-2005
DOI: 10.1136/EBM.10.1.4-A
Publisher: Oxford University Press (OUP)
Date: 1997
DOI: 10.1093/JNCIMONO/1997.22.73
Abstract: Using MEDLINE and the bibliographies of retrieved articles and reviews, we identified and systematically reviewed the quality and results of all randomized trials of mammographic screening that included women less than 50 years of age. Eight randomized trials were identified, 7 of which included women less than 50. Identified trials were assessed for the following design features: (a) method of randomization, (b) documented comparability of baseline data, (c) standardized criteria for breast cancer death, (d) blinded review of cause of death, (e) completeness of follow-up, and (f) use of an "intention to treat analysis." The quality of trials was generally high, with a total of almost 160,000 women randomized. In women aged 40-49 at entry, the overall, absolute risk difference between those invited and those not was 0.0004 (95% CI: 0 to 0.0009). Yet, what does this mean to a 40-year-old woman considering screening? If 10,000 women aged 40-49 years were screened regularly, then after a decade there would be about 4 fewer breast cancer deaths. Is that worthwhile? This is a difficult question, and it needs to be weighed against the problems arising from false positives and ductal carcinoma in situ. We recommend that women in this age group intending to be screened should be fully informed of these results in terms of absolute benefit.
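The abstract's translation of the risk difference into absolute terms is simple arithmetic: an absolute risk difference is the number of events averted per person, so scaling it by a population gives the expected number of deaths averted.

```python
# Absolute benefit from an absolute risk difference (figures from the abstract).
risk_difference = 0.0004      # reported absolute risk difference (95% CI: 0 to 0.0009)
women_screened = 10_000       # hypothetical cohort size used in the abstract

deaths_averted = risk_difference * women_screened  # over roughly a decade
print(f"About {deaths_averted:.0f} fewer breast cancer deaths "
      f"per {women_screened} women screened")  # About 4 fewer
```

The same scaling applied to the CI bounds (0 to 0.0009) gives a range of 0 to 9 deaths averted per 10,000 women, which is why the abstract frames the benefit as uncertain as well as small.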
Publisher: BMJ
Date: 30-07-2018
DOI: 10.1136/BMJ.K3229
Publisher: Cold Spring Harbor Laboratory
Date: 15-07-2020
DOI: 10.1101/2020.07.13.20153163
Abstract: Accurate seroprevalence estimates of SARS-CoV-2 in different populations could clarify the extent to which current testing strategies are identifying all active infection, and hence the true magnitude and spread of the infection. Our primary objective was to identify valid seroprevalence studies of SARS-CoV-2 infection and compare their estimates with the reported, and imputed, COVID-19 case rates within the same population at the same time point. We searched PubMed, Embase, the Cochrane COVID-19 trials, and Europe-PMC for published studies and pre-prints that reported anti-SARS-CoV-2 IgG, IgM and/or IgA antibodies for serosurveys of the general community from 1 Jan to 12 Aug 2020. Of the 2199 studies identified, 170 were assessed for full text and 17 studies representing 15 regions and 118,297 subjects were includable. The seroprevalence proportions in 8 studies ranged between 1% and 10%, with 5 studies under 1%, and 4 over 10%, from the notably hard-hit regions of Gangelt, Germany; Northwest Iran; Buenos Aires, Argentina; and Stockholm, Sweden. For seropositive cases who were not previously identified as COVID-19 cases, the majority had prior COVID-like symptoms. The estimated seroprevalences ranged from 0.56 to 717 times greater than the number of reported cumulative cases; half of the studies reported greater than 10 times more SARS-CoV-2 infections than the cumulative number of cases. The findings show SARS-CoV-2 seroprevalence is well below “herd immunity” in all countries studied. The estimated number of infections, however, was much greater than the number of reported cases and deaths in almost all locations. The majority of seropositive people reported prior COVID-like symptoms, suggesting that undertesting of symptomatic people may be causing a substantial under-ascertainment of SARS-CoV-2 infections. Systematic assessment of 17-country data shows SARS-CoV-2 seroprevalence is mostly less than 10%, levels well below “herd immunity”.
High symptom rates in seropositive cases suggest undertesting of symptomatic people and could explain gaps between seroprevalence rates and reported cases. The estimated number of infections for the majority of the studies ranged from 2 to 717 times greater than the number of reported cases in that region and up to 13 times greater than the cases imputed from the number of reported deaths.
Publisher: American Medical Association (AMA)
Date: 06-2022
Publisher: Wiley
Date: 19-11-2014
Publisher: Wiley
Date: 24-01-2007
Publisher: John Wiley & Sons, Ltd
Date: 21-01-2009
Publisher: John Wiley & Sons, Ltd
Date: 18-10-2004
Publisher: BMJ
Date: 14-11-1998
Publisher: Elsevier BV
Date: 05-2023
Publisher: SAGE Publications
Date: 08-1988
DOI: 10.1177/0272989X8800800311
Abstract: We describe Brucella sp. infection and associated lesions in a harbor porpoise (Phocoena phocoena) found on the coast of Belgium. The infection was diagnosed by immunohistochemistry, transmission electron microscopy, and bacteriology, and the organism was identified as B. ceti. The infection's location in the porpoise raises questions of abortion and zoonotic risks.
Publisher: BMJ
Date: 09-2004
Publisher: Elsevier BV
Date: 2014
Publisher: AMPCo
Date: 05-1995
Publisher: American College of Physicians
Date: 05-02-2019
DOI: 10.7326/M18-2645
Publisher: Elsevier BV
Date: 1995
DOI: 10.1016/0895-4356(94)00099-C
Abstract: Meta-analyses of diagnostic test accuracy are uncommon and often based on separate pooling of sensitivity and specificity, which can lead to biased estimates. Recently, several appropriate methods have been developed for meta-analysing diagnostic test data from primary studies. Primary studies usually only provide binary test data, for which Moses et al. have developed a method to estimate Summary Receiver Operating Characteristic Curves, thereby taking account of possible test threshold differences between studies. Several methods are also available for analysing multicategory and continuous test data. The usefulness of applying these methods is constrained by publication bias and the generally poor quality of primary studies of diagnostic test accuracy. Meta-analysts need to highlight important defects in quality and how they affect summary estimates to ensure that better primary studies are available for meta-analysis in the future.
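The Moses et al. method mentioned above fits a summary ROC (SROC) curve by regressing the difference of the logits of true- and false-positive rates on their sum, which absorbs between-study threshold differences. A minimal sketch with hypothetical study-level accuracy pairs (not drawn from any of the reviewed studies):

```python
import math

def logit(p):
    """Log-odds transform of a proportion."""
    return math.log(p / (1 - p))

# Hypothetical (sensitivity, 1 - specificity) pairs from primary studies
studies = [(0.90, 0.20), (0.80, 0.10), (0.70, 0.05), (0.85, 0.15)]

# Moses et al.: regress D = logit(TPR) - logit(FPR) on S = logit(TPR) + logit(FPR)
D = [logit(tpr) - logit(fpr) for tpr, fpr in studies]
S = [logit(tpr) + logit(fpr) for tpr, fpr in studies]

n = len(studies)
s_bar, d_bar = sum(S) / n, sum(D) / n
b = (sum((s - s_bar) * (d - d_bar) for s, d in zip(S, D))
     / sum((s - s_bar) ** 2 for s in S))  # slope: threshold effect
a = d_bar - b * s_bar                     # intercept: overall accuracy

def sroc_tpr(fpr):
    """Expected sensitivity at a given false-positive rate on the fitted SROC curve."""
    lt = a / (1 - b) + (1 + b) / (1 - b) * logit(fpr)
    return 1 / (1 + math.exp(-lt))

print(f"SROC sensitivity at FPR 0.10: {sroc_tpr(0.10):.2f}")
```

The back-transformation follows from solving the fitted line for logit(TPR) at a fixed FPR; in practice, unweighted least squares as here is the simplest variant, and hierarchical models are now generally preferred.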
Publisher: Royal College of General Practitioners
Date: 06-2009
Publisher: Public Library of Science (PLoS)
Date: 03-06-2015
Publisher: Elsevier BV
Date: 10-1997
DOI: 10.1016/S0197-2456(97)00011-1
Abstract: The Long-term Intervention with Pravastatin in Ischemic Heart Disease (LIPID) trial is a double-blind, randomized, placebo-controlled trial evaluating the long-term effect of pravastatin on coronary mortality in patients with a previous myocardial infarction or unstable angina-ischemic heart disease (IHD). It is planned to run for at least five years with 9014 patients from 85 centers in Australia and New Zealand. The trial will monitor cause-specific mortality and major clinical events associated with each treatment. Running in parallel with the main study is a prospective economic analysis, the objectives of which are (1) to estimate the effectiveness of pravastatin compared with placebo in terms of survival, quality of life (QOL), and quality-adjusted life-years (QALYs); (2) to estimate the resource usage associated with pravastatin compared with placebo; in particular, to study whether it alters resource usage through prevention of disease progression; and (3) to use this information for a cost-utility analysis with cost per quality-adjusted life-year as the unit of analysis. A novel aspect of the design is the use of a preliminary cost-effectiveness analysis, based on "best-guess" values, and a sensitivity analysis over plausible ranges to guide the choice of subsample size. Some data, such as mortality, days spent in hospital, major clinical events, and drug use, are being collected within the main LIPID trial. However, additional subsamples for the cost-effectiveness study will include information on quality of life, time off work, and resources used, such as time in hospital, procedures, and medications taken. The methods and sample sizes for these substudies have been a crucial issue in validity and feasibility.
Publisher: Royal College of General Practitioners
Date: 15-02-2021
Abstract: Autoinflation balloons are used to treat patients with otitis media with effusion (OME) to help avoid surgery. To compare the ability of party balloons with Otovent balloons to produce sufficient pressure for a Valsalva manoeuvre. Pressure testing was used to determine the number of times each balloon could produce pressures sufficient for a Valsalva manoeuvre. Subsequently, Otovent balloons were compared with spherical party balloons in a pilot clinical trial of 12 healthy adults. Each balloon was inflated 20 times and the maximum pressure was recorded. Three balloons of each type were tested to 50 inflations to assess pressures over persistent use. Otovent balloons’ mean inflation pressure was 93 mmHg (95% confidence interval [CI] = 89 to 97 mmHg) on first inflation, dropping to 83 mmHg (95% CI = 80 to 86 mmHg) after 20 inflations. Two types of spherical party balloon required mean inflation pressures of 84 mmHg (95% CI = 77 to 90 mmHg) and 108 mmHg (95% CI = 97 to 119 mmHg) on first inflation, dropping to 74 mmHg (95% CI = 68 to 81 mmHg) and 83 mmHg (95% CI = 77 to 88 mmHg) after 20 inflations. In the pilot trial, there was no difference between the ability of Otovent and spherical balloons (χ² = 0.24, P = 0.89) to produce the sensation of a Valsalva manoeuvre. Otovent balloons can be used more than the 20 times quoted by the manufacturer. The two spherical balloons produced similar pressures to Otovent balloons, indicating potentially the same clinical effect. The pilot study suggests a potential use of spherical party balloons instead of Otovent balloons as a cost-efficient treatment.
Publisher: American Medical Association (AMA)
Date: 09-03-2011
Publisher: BMJ
Date: 10-2020
DOI: 10.1136/BMJOPEN-2020-037392
Abstract: When health conditions are labelled it is often to classify and communicate a set of symptoms. While diagnostic labelling can provide explanation for an individual’s symptoms, it can also impact how individuals and others view those symptoms. Despite existing research regarding the effects of labelling health conditions, a synthesis of these effects has not occurred. We will conduct a systematic scoping review to synthesise the reported consequences and impact of being given a label for a health condition from an individual, societal and health practitioner perspective and explore in what context labelling of health conditions is considered important. The review will adhere to the Joanna Briggs Methodology for Scoping Reviews. Searches will be conducted in five electronic databases (PubMed, Embase, PsycINFO, Cochrane, CINAHL). Reference lists of included studies will be screened and forward and backward citation searching of included articles will be conducted. We will include reviews and original studies which describe the consequences for individuals labelled with a non-cancer health condition. We will exclude hypothetical research designs and studies focused on the consequences of labelling cancer conditions, intellectual disabilities and/or social attributes. We will conduct thematic analyses for qualitative data and descriptive or meta-analyses for quantitative data where appropriate. Ethical approval is not required for a scoping review. Results will be disseminated via publication in a peer-reviewed journal, conference presentations and lay-person summaries on various online platforms. Findings from this systematic scoping review will identify gaps in current understanding of how, when, why and for whom a diagnostic label is important and inform future research.
Publisher: Elsevier BV
Date: 06-2006
Publisher: Elsevier BV
Date: 10-2020
Publisher: Wiley
Date: 18-11-2014
DOI: 10.1002/IJC.29270
Publisher: American Medical Association (AMA)
Date: 02-2008
Abstract: To determine predictors of the development of asymptomatic middle ear effusion (MEE) in children with acute otitis media (AOM) and to assess the effect of antibiotic therapy in preventing the development of MEE in these children. A systematic literature search was performed using PubMed, EMBASE, the Cochrane databases, and the proceedings of international otitis media symposia. A trial was selected if the allocation of participants to treatment was randomized, children aged 0 to 12 years with AOM were included, the comparison was between antibiotic therapy and placebo or no (antibiotic) treatment, and MEE at 1 month was measured. Data from 5 randomized controlled trials were included in the meta-analysis of individual patient data (1328 children aged 6 months to 12 years). We identified independent predictors of the development of asymptomatic MEE and studied whether these children benefited more from antibiotic therapy than children with a lower risk. The primary outcome was MEE (defined as a type B tympanogram) at 1 month. The overall relative risk of antibiotic therapy in preventing the development of asymptomatic MEE after 1 month was 0.9 (95% confidence interval, 0.8-1.0; P = .19). Independent predictors of the development of asymptomatic MEE were age younger than 2 years and recurrent AOM. No statistically significant interaction effects with treatment were found. Because of a marginal effect of antibiotic therapy on the development of asymptomatic MEE and the known negative effects of prescribing antibiotics, including the development of antibiotic resistance and adverse effects, we do not recommend prescribing antibiotics to prevent MEE.
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.JCLINEPI.2019.02.003
Abstract: This article describes the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group's framework of moving from test accuracy to patient or population-important outcomes. We focus on the common scenario when studies directly evaluating the effect of diagnostic and other tests or strategies on health outcomes are not available or are not providing the best available evidence. Using practical examples, we explored how guideline developers and other decision makers can use information from test accuracy to develop a recommendation by linking evidence that addresses downstream consequences. Guideline panels should develop an analytic framework that summarizes the actions that follow from applying a test and the consequences. We describe GRADE's current thinking about the overall certainty of the evidence (also known as quality of the evidence or confidence in the estimates) arising from consideration of the often complex pathways that involve multiple tests and management options. Each link in the evidence can, and often does, lower the overall certainty of the evidence required to formulate recommendations and make decisions about tests. The frequency with which an outcome occurs and its importance will influence whether or not a particular step in the linked evidence is critical to decision-making. Overall certainty may be expressed by the weakest critical step in the linked evidence. The linked approach to addressing optimal testing will often require the use of decision analytic approaches. We present an example that involves decision modeling in a GRADE Evidence to Decision framework for cervical cancer screening. However, because resources and time of guideline developers may be limited, we describe alternative, pragmatic strategies for developing recommendations addressing test use.
Publisher: Elsevier BV
Date: 12-2022
DOI: 10.1016/J.JCLINEPI.2022.10.018
Abstract: To establish whether items included in instruments published in the last decade assessing risk of bias of randomized controlled trials (RCTs) are indeed addressing risk of bias. We searched Medline, Embase, Web of Science, and Scopus from 2010 to October 2021 for instruments assessing risk of bias of RCTs. By extracting items and summarizing their essential content, we generated an item list. Items that two reviewers agreed clearly did not address risk of bias were excluded. We included the remaining items in a survey in which 13 experts judged the issue each item is addressing: risk of bias, applicability, random error, reporting quality, or none of the above. Seventeen eligible instruments included 127 unique items. After excluding 61 items deemed as clearly not addressing risk of bias, the item classification survey included 66 items, of which the majority of respondents deemed 20 items (30.3%) as addressing risk of bias; the majority deemed 11 (16.7%) as not addressing risk of bias; and there was substantial disagreement for 35 (53.0%) items. Existing risk of bias instruments frequently include items that do not address risk of bias. For many items, experts disagree on whether or not they are addressing risk of bias.
Publisher: Wiley
Date: 21-06-2021
DOI: 10.1111/HEX.13286
Abstract: Current guidelines recommend that patients attending general practice should be screened for excess weight, and provided with weight management advice. This study sought to elicit the views of people with overweight and obesity about the role of GPs in initiating conversations about weight management. Participants with a body mass index ≥25 were recruited from a region in Australia to take part in a Community Jury. Over 2 days, participants (n = 11) deliberated on two interconnected questions: ‘Should GPs initiate discussions about weight management?’ And ‘if so, when: (a) opportunistically, (b) in the context of disease prevention, (c) in the context of disease management or (d) other?’ The jury deliberations were analysed qualitatively to elicit their views and recommendations. The jury concluded GPs should be discussing weight management, but within the broader context of general health. The jury was divided about the utility of screening. Jurors felt GPs should initiate the conversation if directly relevant for disease prevention or management; otherwise, GPs should provide opportunities for patients to consent to the issue being raised. The jury's verdict suggests informed people affected by overweight and obesity believe GPs should discuss weight management with their patients. GPs should feel reassured that discussions are likely to be welcomed by patients, particularly if embedded within a more holistic focus on person‐centred care. Members of the public took part in the conduct of this study as jurors, but were not involved in the design, analysis or write‐up.
Publisher: Wiley
Date: 08-2014
DOI: 10.1111/JEBM.12112
Abstract: To evaluate the quality of reporting of the risk of bias of Indonesian medical research. Publications from PubMed- and non-PubMed-indexed Indonesian medical journals between January 2008 and December 2010 were assessed for risk of bias based on a criterion combination from the Hedges criteria and the Oxford Centre for Evidence-Based Medicine. We assessed whether the publications addressed the risk of bias adequately (quality of reporting) and whether the risk of bias criterion was fulfilled (quality of methods). The quality (both of reporting and of methods) of a study was classified as "high" if at least two-thirds of the criteria were adequately reported and fulfilled. It was classified as "low" when only one-third of the criteria were reported and/or fulfilled. Of the 1753 publications, 29% (n = 507) were original medical research. For 21% (109/507) the quality of reporting was high; for 15% (77/507) the quality of methods was high. The proportion of high quality was significantly higher among PubMed- than non-PubMed-indexed journals (95% CI of difference: 3 to 23). A small proportion of Indonesian studies have high quality of reporting or methods. When international reporting guidelines are endorsed and followed, the quality of future studies may improve.
Publisher: Wiley
Date: 04-1991
DOI: 10.1111/J.1365-2559.1991.TB00855.X
Abstract: Survival for melanoma patients with thick primary tumours is notoriously short. A small number of patients with tumours greater than 5.5 mm thick do, however, have protracted survival intervals. Attempts were made to account for this phenomenon by means of histological, cytometric and HLA serotyping analyses. Patients with thick lesions surviving more than 10 years were matched (by sex, age, anatomical site of primary lesion, stage of disease and, whenever possible, initial surgical therapy) to patients dying of their disease within 5 years. This case-control study on 13 long-term survivors and 13 short-term survivors did not show that any of the following attributes of the primary lesion were useful in predicting survival: Clark's level of invasion, ulceration, mitotic rate, host inflammatory response, tumour regression, tumour necrosis, vascular invasion, satellitosis, radial or vertical growth phase, predominant cell type, histogenetic type, borders, DNA quantification and cytomorphometry. HLA serotyping of long-term survivors showed an excess of antigen DQw1 compared with the general population, although this excess was not statistically significant.
Publisher: American Medical Association (AMA)
Date: 10-03-2020
Publisher: Elsevier BV
Date: 04-2016
Publisher: Cold Spring Harbor Laboratory
Date: 10-12-2022
DOI: 10.1101/2022.12.08.519666
Abstract: Research institutions and researchers have become increasingly concerned about poor research reproducibility and replicability, and research waste more broadly. Research institutions play an important role, and understanding their intervention options is important. This review aims to identify and classify possible interventions to improve research quality, reduce waste, and improve reproducibility and replicability within research-performing institutions. Taxonomy development steps: 1) use of an exemplar paper of journal-level research quality improvement interventions; 2) a 2-stage search in PubMed using seed and exemplar articles, with forward and backward citation searching, to identify articles evaluating or describing research quality improvement; 3) feedback on a draft taxonomy elicited from researchers at an open-science conference workshop; and 4) cycles of revisions by the research team. The search identified 11 peer-reviewed articles on relevant interventions. Overall, 93 interventions were identified from the peer-reviewed literature and researcher reporting. Interventions covered the before-, during-, and after-study research stages and the whole of institution. Types of intervention included: Tools; Education & Training; Incentives; Modelling & Mentoring; Review & Feedback; Expert Involvement; and Policies & Procedures. Identified areas for research institutions to focus on to improve research quality, and for further research, include improving incentives to implement quality research practices, evaluating current interventions, encouraging no- or low-cost/high-benefit interventions, examining institutional research culture, and encouraging mentor-mentee relationships.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 11-2011
Publisher: Springer Science and Business Media LLC
Date: 12-2014
Publisher: CMA Joule Inc.
Date: 14-03-2016
DOI: 10.1503/CMAJ.150684
Publisher: Cold Spring Harbor Laboratory
Date: 11-12-2019
DOI: 10.1101/19014134
Abstract: The Cochrane Collaboration has been publishing systematic reviews in the Cochrane Database of Systematic Reviews (CDSR) since 1995, with the intention that these be updated periodically. To chart the long-term updating history of a cohort of Cochrane reviews and the impact on the number of included studies. The status of a cohort of Cochrane reviews updated in 2003 was assessed at three time points: 2003, 2011, and 2018. We assessed their subject scope, compiled their publication history using PubMed and the CDSR, and compared them to all Cochrane reviews available in 2002 and 2017/18. Of the 1,532 Cochrane reviews available in 2002, 11.3% were updated in 2003, with 16.6% not updated between 2003 and 2011. The reviews updated in 2003 were not markedly different to other reviews available in 2002, but more were retracted or declared stable by 2011 (13.3% versus 6.3%). The 2003 update led to a major change of the conclusions of 2.8% of updated reviews (n = 177). The cohort had a median time since publication of the first full version of the review of 18 years and a median of three updates by 2018 (range 1–11). The median time to update was three years (range 0–14 years). By the end of 2018, the median time since the last update was seven years (range 0–15). The median number of included studies rose from eight in the version of the review before the 2003 update, to 10 in that update and 14 in 2018 (range 0–347). Most Cochrane reviews get updated; however, they are becoming more out-of-date over time. Updates have resulted in an overall rise in the number of included studies, although they only rarely lead to major changes in conclusions.
Publisher: Elsevier BV
Date: 10-2015
Publisher: Wiley
Date: 23-02-2020
DOI: 10.1111/HEX.13036
Publisher: Royal College of General Practitioners
Date: 03-2009
Publisher: BMJ
Date: 28-11-2008
DOI: 10.1136/BMJ.A2530
Publisher: American Astronomical Society
Date: 26-06-2019
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 14-08-2018
DOI: 10.1161/CIRCULATIONAHA.117.029901
Abstract: D-dimer, a degradation product of cross-linked fibrin, is a marker for hypercoagulability and thrombotic events. Moderately elevated levels of D-dimer are associated with the risk of venous and arterial events in patients with vascular disease. We assessed the role of D-dimer levels in predicting long-term vascular outcomes, cause-specific mortality, and new cancers in the LIPID trial (Long-Term Intervention with Pravastatin in Ischaemic Disease) in the context of other risk factors. LIPID randomized patients to placebo or pravastatin 40 mg/d 5 to 38 months after myocardial infarction or unstable angina. D-dimer levels were measured at baseline and at 1 year. Median follow-up was 6.0 years during the trial and 16 years in total. Baseline D-dimer levels for 7863 patients were grouped by quartile (≤112, 112–173, 173–273, and >273 ng/mL). Higher levels were associated with older age, female sex, history of hypertension, poor renal function, and elevated levels of B-natriuretic peptide, high-sensitivity C-reactive protein, and sensitive troponin I (each P < .001). During the first 6 years, after adjustment for up to 30 additional risk factors, higher D-dimer was associated with a significantly increased risk of a major coronary event (quartile 4 versus 1: hazard ratio [HR], 1.45; 95% confidence interval, 1.21–1.74), major cardiovascular disease (CVD) event (HR, 1.45; 95% confidence interval, 1.23–1.71) and venous thromboembolism (HR, 4.03; 95% confidence interval, 2.31–7.03; each P < .001). During the 16 years overall, higher D-dimer was an independent predictor of all-cause mortality (HR, 1.59), CVD mortality (HR, 1.61), cancer mortality (HR, 1.54), and non-CVD noncancer mortality (HR, 1.57; each P < .001), remaining significant for deaths resulting from each cause occurring beyond 10 years of follow-up (each P ≤ 0.01).
Higher D-dimer also independently predicted an increase in cancer incidence (HR, 1.16; P = .02). The D-dimer level increased the net reclassification index for all-cause mortality by 4.0 and venous thromboembolism by 13.6. D-dimer levels predict long-term risk of arterial and venous events, CVD mortality, and non-CVD noncancer mortality independent of other risk factors. D-dimer is also a significant predictor of cancer incidence and mortality. These results support an association of D-dimer with fatal events across multiple diseases and demonstrate that this link extends beyond 10 years’ follow-up.
Publisher: Springer Science and Business Media LLC
Date: 09-02-2016
Publisher: Springer Science and Business Media LLC
Date: 06-09-2016
Publisher: American Medical Association (AMA)
Date: 02-06-1999
Publisher: AMPCo
Date: 02-2002
Publisher: Wiley
Date: 07-09-2011
Publisher: American College of Physicians
Date: 03-2005
Publisher: Royal College of General Practitioners
Date: 04-06-2021
Abstract: Antibiotics are overused for non-pneumonia acute respiratory tract infections (ARTIs). To establish the prevalence and explore associations of delayed and immediate antibiotic prescribing strategies of Australian early-career GPs (specialist GP vocational trainees, also known as GP registrars) for non-pneumonia ARTIs. Cross-sectional analysis of data collected between September 2016 and December 2017 from the Registrar Clinical Encounters in Training (ReCEnT) cohort study, an ongoing cohort study of GP registrars’ in-practice clinical experiences in four Australian states and territories. Multinomial logistic regression with the outcome antibiotic prescribing (no prescribing, immediate prescribing, and delayed prescribing). Of 7156 new ARTI diagnoses, no antibiotics were prescribed for 4892 (68%); antibiotics were prescribed for immediate use for 1614 diagnoses (23%); and delayed antibiotics were used for 650 diagnoses (9%). Delayed prescribing was used in 22% of otitis media, 16% of sinusitis, 13% of sore throat, 11% of acute bronchitis/bronchiolitis, and 5% of upper respiratory tract infection (URTI) diagnoses. Delayed prescribing was used for 29% of all prescriptions written. Delayed prescribing and immediate prescribing were associated with markers of clinical concern. Delayed prescribing was associated with longer duration of consultation and with fewer diagnoses/problems dealt with in the consultation. Australian early-career GPs use no prescribing for ARTIs substantially more than established GPs; however, except where URTIs are concerned, they still prescribe antibiotics in excess of validated benchmarks. Australian early-career GPs may use delayed prescribing more often than European established GPs, and may use it to manage diagnostic uncertainty and, possibly, conflicting influences on prescribing behaviour. The use of delayed prescribing may enable a transition to an environment of more-rational antibiotic prescribing for ARTIs.
Publisher: Springer Science and Business Media LLC
Date: 28-01-2020
DOI: 10.1186/S12961-019-0520-4
Abstract: Disproportionate regulation of health and medical research contributes to research waste. Better understanding of exemptions of research from ethics review in different jurisdictions may help to guide modification of review processes and reduce research waste. Our aim was to identify examples of low-risk human health and medical research exempt from ethics reviews in Australia, the United Kingdom, the United States and the Netherlands. We examined documents providing national guidance on research ethics in each country, including those authored by the National Health and Medical Research Council (Australia), National Health Service (United Kingdom), the Office for Human Research Protections (United States) and the Central Committee on Research Involving Humans (the Netherlands). Examples and types of research projects exempt from ethics reviews were identified, and similar examples and types were grouped together. Nine categories of research were exempt from ethics reviews across the four countries; these were existing data or specimens, questionnaire or survey, interview, post-marketing study, evaluation of public benefit or service programme, randomised controlled trials, research with staff in their professional role, audit and service evaluation, and other exemptions. Existing non-identifiable data and specimens were exempt in all countries. Four categories – evaluation of public benefit or service programme, randomised controlled trials, research with staff in their professional role, and audit and service evaluation – were exempted by one country each. The remaining categories were exempted by two or three countries. Examples and types of research exempt from research ethics reviews varied considerably. Given the considerable costs and burdens on researchers and ethics committees, it would be worthwhile to develop and provide clearer guidance on exemptions, illustrated with examples, with transparent underpinning rationales.
Publisher: Springer Science and Business Media LLC
Date: 03-04-2012
Publisher: John Wiley & Sons, Ltd
Date: 25-01-2006
Publisher: American Physical Society (APS)
Date: 08-10-2024
Publisher: American College of Physicians
Date: 05-2004
Publisher: Cold Spring Harbor Laboratory
Date: 21-01-2022
DOI: 10.1101/2022.01.17.22269450
Abstract: Recent observational studies have suggested that vaccines for the omicron variant of SARS-CoV-2 may have little or no effect in preventing infection. However, the observed effects may be confounded by patient factors and preventive behaviours or vaccine-related differences in testing behaviour. To assess the potential degree of confounding, we aimed to estimate differences in testing behaviour between unvaccinated and vaccinated populations. We recruited 1,526 Australian adults for an online randomised study about COVID testing between October and November 2021, and collected self-reported vaccination status and three measures of COVID-19 testing behaviour. We examined the association between testing intentions and vaccination status in the cross-sectional baseline data of this trial. Of the 1,526 participants (mean age 31 years): 22% had had a COVID-19 test in the past month and 61% ever; 17% were unvaccinated, 11% were partially vaccinated (1 dose), and 71% were fully vaccinated (2+ doses). Fully vaccinated participants were twice as likely (RR 2.2; 95% CI 1.8 to 2.8) to report positive COVID testing intentions as those who were unvaccinated (p < .001). Partially vaccinated participants had less positive intentions than those fully vaccinated (p < .001) but higher intentions than those who were unvaccinated (p = .002). For all three measures, vaccination predicted greater COVID testing intentions. If the unvaccinated tested at half the rate of the vaccinated, a true vaccine effectiveness of 30% could appear to be a “negative” observed vaccine effectiveness of -40%. Assessment of vaccine effectiveness should use methods that account for differential testing behaviours. Test-negative designs are currently the preferred option, but their assumptions should be more thoroughly examined.
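The abstract's worked example (a true vaccine effectiveness of 30% appearing as -40%) follows from simple arithmetic; a minimal sketch, assuming case detection scales linearly with the testing rate:

```python
def observed_ve(true_ve: float, testing_ratio: float) -> float:
    """Apparent vaccine effectiveness when unvaccinated people test at
    `testing_ratio` times the rate of vaccinated people.

    Assumes detected cases scale linearly with testing rate, so the
    detected-case relative risk is inflated by under-testing."""
    true_rr = 1 - true_ve                   # true infection RR, vaccinated vs unvaccinated
    observed_rr = true_rr / testing_ratio   # detected-case RR after differential testing
    return 1 - observed_rr

# The abstract's scenario: unvaccinated test at half the rate of the vaccinated.
print(f"{observed_ve(0.30, 0.5):.0%}")  # -40%
```

With equal testing (`testing_ratio = 1.0`) the observed and true effectiveness coincide, which is why the abstract recommends designs that account for differential testing behaviour.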
Publisher: BMJ
Date: 07-2003
DOI: 10.1136/EBM.8.4.102
Publisher: Springer Science and Business Media LLC
Date: 28-10-2019
DOI: 10.1186/S13063-019-3704-X
Abstract: Poor recruitment to, and retention in, clinical trials is a source of research waste that could be reduced by more informed choices about participation. Barriers to effective recruitment and retention can be wide-ranging but relevance of the questions being addressed by trials and the outcomes that they are assessing are key for potential participants. Decisions about trial participation should be informed by general and trial-specific information and by considering broader assessments of ‘informedness’ and how they impact on both recruitment and retention. We suggest that more informed decisions about trial participation should encourage personally appropriate decisions, increase recruitment and retention, and reduce research waste and increase its value.
Publisher: American Medical Association (AMA)
Date: 11-10-2004
DOI: 10.1001/ARCHINTE.164.18.1978
Abstract: The diagnosis of heart failure is difficult, with both overdiagnosis and underdiagnosis occurring commonly in practice. Natriuretic peptides have been proposed as a possible test for assisting diagnosis. We assessed the diagnostic accuracy of brain natriuretic peptide (BNP), including a comparison with atrial natriuretic peptide (ANP). Electronic searches were conducted of MEDLINE and EMBASE from January 1994 to December 2002 and handsearches of reference lists of included studies. We included studies that assessed the diagnostic accuracy of BNP against echocardiographic or clinical criteria or that compared the diagnostic accuracy of BNP with ANP. Two reviewers assessed studies for inclusion and quality and extracted the relevant data. A meta-analysis was performed by pooling the diagnostic odds ratios for studies that used a common reference standard. Twenty studies were included. For the 8 studies (n = 4086) that measured BNP against the criterion of left ventricular ejection fraction of 40% or less (or equivalent), the pooled diagnostic odds ratio was 11.6 (95% confidence interval, 8.4-16.1). The pooled diagnostic odds ratio was greater, 30.9 (95% confidence interval, 27.0-35.4), in the 7 studies (n = 2374) that measured BNP against clinical criteria (generally a consensus view using all other clinical information). The diagnostic odds ratio was similar in studies conducted in general practice and in hospital settings. Three studies compared BNP with N-terminal-ANP, a precursor form of ANP, and pooling of the results of these studies showed BNP to be a more accurate marker of heart failure than NT-ANP. Brain natriuretic peptide is an accurate marker of heart failure. Use of a cutoff value of 15 pmol/L achieves high sensitivity, and BNP values below this exclude heart failure in patients in whom disease is suspected. 
As the diagnostic odds ratio for BNP is greater when assessed against clinical criteria than against left ventricular ejection fraction alone, BNP may also be detecting patients with "diastolic" heart failure.
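The statistic pooled in this meta-analysis is the diagnostic odds ratio, which for a single study's 2×2 accuracy table is (true positives × true negatives) / (false positives × false negatives). A minimal sketch with hypothetical counts (not taken from the review):

```python
def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """Diagnostic odds ratio from a 2x2 accuracy table: the odds of a
    positive test in the diseased divided by the odds of a positive
    test in the non-diseased."""
    return (tp * tn) / (fp * fn)

# Hypothetical study: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives.
print(diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80))  # 36.0
```

A higher diagnostic odds ratio indicates better discrimination, which is how the review compares BNP assessed against clinical criteria versus against ejection fraction alone.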
Publisher: Springer Science and Business Media LLC
Date: 07-04-2009
Publisher: Elsevier BV
Date: 06-2018
DOI: 10.1016/J.JCLINEPI.2018.01.011
Abstract: The aim of this study was to evaluate how often the European Medicines Agency (EMA) has authorized drugs based on nonrandomized studies and whether there is an association between treatment effects and EMA preference for further testing in randomized clinical trials (RCTs). We reviewed all initial marketing authorizations in the EMA database on human medicines between 1995 and 2015 and included authorizations granted without randomized data. We extracted data on treatment effects and EMA preference for further testing in RCTs. Of 723 drugs, 51 were authorized based on nonrandomized data. These 51 drugs were licensed for 71 indications. In the 51 drug-indication pairs with no preference for further RCT testing, effect estimates were large [odds ratio (OR): 12.0 (95% confidence interval {CI}: 8.1-17.9)] compared to effect estimates in the 20 drug-indication pairs for which future RCTs were preferred [OR: 4.3 (95% CI 2.8-6.6)], with a significant difference between effects (P = 0.0005). Nonrandomized data were used for 7% of EMA drug approvals. Larger effect sizes were associated with greater likelihood of approval based on nonrandomized data alone. We did not find a clear treatment effect threshold for drug approval without RCT evidence.
Publisher: Springer Science and Business Media LLC
Date: 13-06-2017
Publisher: Elsevier BV
Date: 07-2017
Publisher: Elsevier BV
Date: 03-2020
Publisher: AMPCo
Date: 03-2001
Publisher: Public Library of Science (PLoS)
Date: 24-07-2017
Publisher: BMJ
Date: 22-06-2017
DOI: 10.1136/BMJ.J2782
Publisher: BMJ
Date: 17-08-2017
DOI: 10.1136/BMJ.J3751
Publisher: Elsevier BV
Date: 12-2017
Publisher: BMJ
Date: 2012
Publisher: Elsevier BV
Date: 08-1999
Publisher: Springer Science and Business Media LLC
Date: 25-01-2013
Publisher: Georg Thieme Verlag KG
Date: 29-01-2016
Abstract: Only when intervention descriptions are published in full can clinicians and patients reliably implement interventions shown to be useful, and can other researchers replicate or build on study findings. The quality of intervention descriptions in scientific publications is remarkably poor. To improve the completeness of reporting, and thereby the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist. The process involved a literature review of relevant checklists and research, a Delphi survey of international experts to guide item selection, and an expert meeting. The resulting 12-item TIDieR checklist (brief name, why, what (materials), what (procedures), who provided, how, where, when and how much, tailoring, modifications, how well (planned fidelity assessment), how well (actual delivery)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the focus of the checklist is on clinical trials, the extended guidance can be applied across all evaluative study designs. This article presents the TIDieR checklist and guide, with an explanation and elaboration of each item and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and help authors structure the descriptions of their interventions, reviewers and editors assess those descriptions, and readers use the information.
Publisher: BMJ
Date: 03-2004
DOI: 10.1136/EBM.9.2.36
Publisher: BMJ
Date: 05-2001
DOI: 10.1136/EBM.6.3.82
Publisher: BMJ
Date: 09-2021
DOI: 10.1136/BMJOPEN-2020-048191
Abstract: Clinically complex patients often require multiple medications. Polypharmacy is associated with inappropriate prescriptions, which may lead to negative outcomes. Few effective tools are available to help physicians optimise patient medication. This study assesses whether an electronic medication management support system (eMMa) reduces hospitalisation and mortality and improves prescription quality/safety in patients with polypharmacy. Planned design: pragmatic, parallel cluster-randomised controlled trial; general practices as randomisation unit, patients as analysis unit. As practice recruitment was poor, we included additional data in our primary endpoint analysis for practices and quarters from October 2017 to March 2021. Since randomisation was performed in waves, the final study design corresponds to a stepped-wedge design with an open cohort and a step length of one quarter. Scope: general practices, Westphalia-Lippe (Germany), caring for BARMER health fund-covered patients. Population: patients (≥18 years) with polypharmacy (≥5 prescriptions). Sample size: initially, 32 patients from each of 539 practices were required for each study arm (17 200 patients/arm), but only 688 practices were randomised after 2 years of recruitment. A design change ensures that 80% power is nonetheless achieved. Intervention: complex intervention eMMa. Follow-up: at least five quarters/cluster (practice). Recruitment: practices recruited/randomised at different times; after follow-up, control-group practices may access eMMa. Outcomes: the primary endpoint is all-cause mortality and hospitalisation; secondary endpoints are the number of potentially inappropriate medications, cause-specific hospitalisation preceded by high-risk prescribing, and medication underuse. Statistical analysis: primary and secondary outcomes are measured quarterly at patient level. A generalised linear mixed-effect model and repeated patient measurements are used to account for patient clusters within practices.
Time and intervention group are considered fixed factors; variation between practices and patients is fitted as random effects. The intention-to-treat principle is used to analyse primary and key secondary endpoints. The trial was approved by the Ethics Commission of the North-Rhine Medical Association. Results will be disseminated through workshops, peer-reviewed publications, and local and international conferences. NCT03430336. ClinicalTrials.gov ( t2/show/NCT03430336 ).
Publisher: BMJ
Date: 06-02-1993
Abstract: To study the effect of vitamin A supplementation on morbidity and mortality from infectious disease. A meta-analysis aimed at identifying and combining mortality and morbidity data from all randomised controlled trials of vitamin A. Of 20 controlled trials identified, 12 trials were randomised and provided "intention to treat" data: six community trials in developing countries, three in children admitted to hospital with measles, and three in very low birth weight infants. Combined results for community studies suggest a reduction of 30% (95% confidence interval 21% to 38%; two-tailed p < 0.0000001) in all-cause mortality. Analysis of cause-specific mortality showed a reduction in deaths from diarrhoeal disease (in community studies) by 39% (24% to 50%; two-tailed p < 0.00001); from respiratory disease (in measles studies) by 70% (15% to 90%; two-tailed p = 0.02); and from other causes of death (in community studies) by 34% (15% to 48%; two-tailed p = 0.001). Reductions in morbidity were consistent with the findings for mortality, but fewer data were available. An adequate supply of vitamin A, either through supplementation or adequate diet, has a major role in preventing morbidity and mortality in children in developing countries. In developed countries vitamin A may also have a role in those with life-threatening infections such as measles and in those who may have a relative deficiency, such as premature infants.
Publisher: SAGE Publications
Date: 06-1995
DOI: 10.1177/0310057X9502300309
Abstract: This study examines the feasibility of using Quality-Adjusted Life Years (QALYs) to assess patient outcome and the economic justification of treatment in an Intensive Care Unit (ICU). 248 patients were followed for three years after admission. Survival and quality of life for each patient were evaluated. Outcome for each patient was quantified in discounted Quality-Adjusted Life Years (dQALYs). The economic justification of treatment was evaluated by comparing the total and marginal cost per dQALY for this patient group with the published cost per QALY for other medical interventions. 150 patients were alive after three years. Quality of life for most long-term survivors was good. Patient outcome (QALYs) was greatest for asthma and trauma patients, and least for cardiogenic pulmonary oedema. The tentative estimated cost-effectiveness of treatment varied from AUD $297 per QALY for asthma to AUD $2323 per QALY for patients with pulmonary oedema. This compares favourably with many preventative and non-acute medical treatments. Although the methodology is developmental, the measurement of patient outcome using QALYs appears to be feasible in a general hospital ICU.
Publisher: Georg Thieme Verlag KG
Date: 04-2011
Publisher: Wiley
Date: 25-03-2014
Publisher: Springer Science and Business Media LLC
Date: 09-2001
Abstract: Patients with Type II (non-insulin-dependent) diabetes mellitus are at increased risk of macrovascular and microvascular disease, both of which are reduced by controlling raised blood pressure in hypertensive patients. Intensive glycaemic control has also been shown to reduce microvascular disease, but the effects on macrovascular disease remain uncertain. This study will examine the hypotheses that lowering blood pressure with an ACE inhibitor-diuretic combination and intensively controlling glycaemia with a sulphonylurea-based regimen in high-risk patients with Type II diabetes (both hypertensive and non-hypertensive) reduces the incidence of macrovascular and microvascular disease. The study is a 2 x 2 factorial randomised controlled trial that will include 10000 adults with Type II diabetes at high risk of vascular disease. Following 6 weeks on open-label perindopril-indapamide combination, eligible patients are randomised to continued perindopril-indapamide or matching placebo, and to an intensive gliclazide MR-based glucose control regimen or usual guidelines-based therapy. Primary outcomes are, first, the composite of non-fatal stroke, non-fatal myocardial infarction or cardiovascular death and, second, the composite of new or worsening nephropathy or diabetic eye disease. The scheduled average duration of treatment and follow-up is 4.5 years. The study will be conducted in approximately 200 centres in Australasia, Asia, Europe and North America. ADVANCE is designed to provide reliable evidence on the balance of benefits and risks conferred by blood pressure lowering therapy and intensive glucose control therapy in high-risk diabetic patients, regardless of initial blood pressure or glucose concentrations.
Publisher: Wiley
Date: 03-2007
Publisher: Cambridge University Press
Date: 05-10-2014
Abstract: Decision making in health care involves consideration of a complex set of diagnostic, therapeutic and prognostic uncertainties. Medical therapies have side effects, surgical interventions may lead to complications, and diagnostic tests can produce misleading results. Furthermore, patient values and service costs must be considered. Decisions in clinical and health policy require careful weighing of risks and benefits and are commonly a trade-off of competing objectives: maximizing quality of life vs maximizing life expectancy vs minimizing the resources required. This text takes a proactive, systematic and rational approach to medical decision making. It covers decision trees, Bayesian revision, receiver operating characteristic curves, and cost-effectiveness analysis, as well as advanced topics such as Markov models, microsimulation, probabilistic sensitivity analysis and value of information analysis. It provides an essential resource for trainees and researchers involved in medical decision modelling, evidence-based medicine, clinical epidemiology, comparative effectiveness, public health, health economics, and health technology assessment.
Publisher: BMJ
Date: 02-2006
Publisher: F1000 Research Ltd
Date: 03-03-2016
DOI: 10.12688/F1000RESEARCH.8229.1
Abstract: Vitek Tracz and Rebecca Lawrence declare the current journal publishing system to be broken beyond repair. They propose that it should be replaced by immediate publication followed by transparent peer review as the starting place for more open and efficient reporting of science. While supporting this general objective, we suggest that research is needed both to understand why biomedical scientists have been slow to take up preprint options, as well as to assess the relative merits of this and other alternatives to journal publishing.
Publisher: BMJ
Date: 12-11-2018
DOI: 10.1136/BMJ.K4645
Publisher: Oxford University Press (OUP)
Date: 11-1995
DOI: 10.1093/GERONA/50A.6.M298
Abstract: The purpose of the study was to develop a classification tool predicting a requirement for nursing home care in a population of nursing home applicants. In long-term care services, the objectives of classification mechanisms include the prevention of inappropriate nursing home admission. We studied 295 nursing home applicants residing in the Lower North Shore area, a high socioeconomic-status area of Sydney, Australia. The predictor variables examined included demographic data, social work assessment data, the presence of dementia and incontinence, the Barthel Index of Activities of Daily Living, and the Mini-Mental State Examination. Classification analysis using the C4.5 program resulted in several classification trees for a decision for nursing home care with sensitivities greater than 70%. The best classification tree was one which combined the scores of the Barthel Index and the Mini-Mental State Examination. Classification trees, in their simplicity of design and application, have advantages over other analytical methods of classification. Classification analysis and the trees examined in this study may have useful future application in decision making for long-term care.
Publisher: BMJ
Date: 03-2020
DOI: 10.1136/BMJOPEN-2019-034962
Abstract: Patients do better in research-intense environments. The importance of research is reflected in the accreditation requirements of Australian clinical specialist colleges. The nature of college-mandated research training has not been systematically explored. We examined the intended research curricula of Australian trainee doctors described by specialist colleges, their constructive alignment, and the nature of scholarly project requirements. We undertook content analysis of publicly available documents to characterise college research training curricula. We reviewed all publicly accessible information from the websites of Australian specialist colleges and their subspecialty divisions. We retrieved curricula, handbooks and assessment-related documents. Fifty-eight Australian specialist colleges and their subspecialty divisions. Two reviewers extracted and coded research-related activities as learning outcomes, activities or assessments, by research stage (using, participating in or leading research) and competency based on Bloom’s taxonomy (remembering, understanding, applying, analysing, evaluating, creating). We coded learning and assessment activities by type (eg, formal research training, publication) and whether they were linked to a scholarly project. Requirements related to project supervisors’ research experience were noted. Fifty-five of 58 Australian college subspecialty divisions had a scholarly project requirement. Only 11 required formal research training; two required an experienced research supervisor. Colleges emphasised a role for trainees in leading research in their learning outcomes and assessments, but not in learning activities. Less emphasis was placed on using research, and almost no emphasis on participation. Most learning activities and assessments mapped to the ‘creating’ domain of Bloom’s taxonomy, whereas most learning outcomes mapped to the ‘evaluating’ domain.
Overall, most research learning and assessment activities were related to leading a scholarly project. Australian specialist college research curricula appear to emphasise a role for trainees in leading research and producing research deliverables, but do not mandate formal research training and supervision by experienced researchers.
Publisher: Elsevier BV
Date: 05-2005
DOI: 10.1016/J.JCLINEPI.2004.09.011
Abstract: Methods to identify studies for systematic reviews of diagnostic accuracy are less well developed than for reviews of intervention studies. This study assessed (1) the sensitivity and precision of five published search strategies and (2) the reliability and accuracy of reviewers screening the results of the search strategy. We compared the results of the search filters with the studies included in two systematic reviews, and assessed the interobserver reliability of two reviewers screening the list of articles generated by a search strategy. In the first review, the search strategy published by van der Weijden had the greatest sensitivity, and in the second, four search strategies had 100% sensitivity. There was "substantial" agreement between two reviewers, but in the first review each reviewer working on their own would have missed one paper eligible for inclusion in the review. Ascertainment intersection techniques indicate that it is unlikely that further papers have been missed in the screening process. Published search strategies may miss papers for reviews of diagnostic test accuracy. Papers are not easily identified as studies of diagnostic test accuracy, and the lack of information in the abstract makes it difficult to assess the eligibility for inclusion in a systematic review.
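The sensitivity and precision used to compare the search filters are simple set ratios over retrieved and eligible records. A hedged sketch with hypothetical record IDs, not the reviews' actual data:

```python
# Sensitivity and precision of a search filter, expressed as set ratios.
# Record IDs below are hypothetical, not the systematic reviews' data.

def filter_performance(retrieved: set, relevant: set) -> tuple:
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant)  # share of eligible papers found
    precision = len(hits) / len(retrieved)   # share of retrieved papers eligible
    return sensitivity, precision

retrieved = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}  # records returned by the filter
relevant = {2, 5, 9, 11}                     # papers eligible for the review
print(filter_performance(retrieved, relevant))  # (0.75, 0.3): one paper missed
```

A filter with 100% sensitivity, as four strategies achieved in the second review, would require `retrieved` to contain every member of `relevant`, usually at the cost of lower precision.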
Publisher: Wiley
Date: 02-02-2017
DOI: 10.1111/JHN.12456
Abstract: Despite the significance placed on lifestyle interventions for obesity management, most weight loss is followed by weight regain. Psychological concepts of habitual behaviour and automaticity have been suggested as plausible explanations for this overwhelming lack of long-term weight loss success. Interventions that focus on changing an individual's behaviour are not usually successful at changing an individual's habits because they do not incorporate the strategies required to break unhealthy habits and/or form new healthy habits. A narrative review was conducted and describes the theory behind habit formation in relation to weight regain. The review evaluated the effectiveness of using habits as tools to maintain weight loss. Three specific habit-based weight loss programmes are described: '10 Top Tips', 'Do Something Different' and 'Transforming Your Life'. Participants in these interventions achieved significant weight loss compared to a control group or other conventional interventions. Habit-based interventions show promising results in sustaining behaviour change. Weight loss maintenance may benefit from incorporating habit-focused strategies and should be investigated further.
Publisher: American College of Physicians
Date: 21-02-2012
Publisher: Elsevier BV
Date: 2003
DOI: 10.1016/S0009-9120(02)00443-5
Abstract: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in the study and to evaluate its generalisability. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organizations shortened this list during a two-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of the clinicians, researchers, reviewers, journals, and the public.
Publisher: AMPCo
Date: 04-1995
Publisher: Springer Science and Business Media LLC
Date: 12-11-2010
Publisher: BMJ
Date: 11-2003
DOI: 10.1136/EBM.8.6.190
Publisher: BMJ
Date: 11-08-2022
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2019
DOI: 10.1161/HYPERTENSIONAHA.118.12060
Abstract: Discontinuation of angiotensin-converting enzyme (ACE) inhibitor is recommended if patients experience ≥30% acute increase in serum creatinine after starting this therapy. However, the long-term effects of its continuation or discontinuation on major clinical outcomes after increases in serum creatinine are unclear. In the ADVANCE trial (Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation), 11 140 diabetes mellitus patients were randomly assigned to perindopril-indapamide or placebo after a 6-week active run-in period. The current study included 11 066 participants with 2 serum creatinine measurements recorded before and during the active run-in period (3 weeks apart). Acute increase in creatinine was determined using these 2 measurements and classified into 4 groups: increases in serum creatinine of <10%, 10% to 19%, 20% to 29%, and ≥30%. The primary study outcome was the composite of major macrovascular events, new or worsening nephropathy, and all-cause mortality. An acute increase in serum creatinine was associated with an elevated risk of the primary outcome (P for trend <0.001). The hazard ratios were 1.11 (95% CI, 0.97–1.28) for those with an increase of 10% to 19%, 1.34 (1.07–1.66) for 20% to 29%, and 1.44 (1.15–1.81) for ≥30%, compared with <10%. However, there was no evidence of heterogeneity in the benefit of randomized treatment effects on the outcome across subgroups defined by acute serum creatinine increase (P for heterogeneity=0.94). Acute increases in serum creatinine after starting perindopril-indapamide were associated with greater risks of subsequent major clinical outcomes. However, the continuation of angiotensin-converting enzyme inhibitor-based therapy reduced the long-term risk of major clinical outcomes, irrespective of acute increase in creatinine. URL: www.clinicaltrials.gov. Unique identifier: NCT00145925.
Publisher: Wiley
Date: 12-02-2010
Publisher: University Library System, University of Pittsburgh
Date: 04-2007
Publisher: BMJ
Date: 03-2019
DOI: 10.1136/BMJOPEN-2018-022457
Abstract: To quantify the risk of overdiagnosis associated with prostate cancer screening in Australia using a novel lifetime risk approach. Modelling and validation of the lifetime risk method using publicly available population data. Opportunistic screening for prostate cancer in the Australian population. Australian male population (1982–2012). Prostate-specific antigen testing for prostate cancer screening. Primary: lifetime risk of overdiagnosis in 2012 (excess lifetime cancer risk adjusted for changing competing mortality). Secondary: lifetime risk of prostate cancer diagnosis (unadjusted and adjusted for competing mortality) and excess lifetime risk of prostate cancer diagnosis (for all years subsequent to 1982). The lifetime risk of being diagnosed with prostate cancer increased from 6.1% in 1982 (1 in 17) to 19.6% in 2012 (1 in 5). Using 2012 competing mortality rates, the lifetime risk in 1982 was 11.5% (95% CI 11.0% to 12.0%). The excess lifetime risk of prostate cancer in 2012 (adjusted for changing competing mortality) was 8.2% (95% CI 7.6% to 8.7%) (1 in 13). This corresponds to 41% of prostate cancers being overdiagnosed. Our estimated rate of overdiagnosis is in agreement with estimates using other methods. This method may be used without the need to adjust for lead times. If annual (cross-sectional) data are used, then it may give valid estimates of overdiagnosis once screening has been established long enough for the benefits from the early detection of non-overdiagnosed cancer at a younger age to be realised in older age groups.
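The headline overdiagnosis figure follows directly from the two reported risks: the excess lifetime risk expressed as a share of the 2012 lifetime risk. A quick check of that arithmetic:

```python
# Checking the headline arithmetic: the excess lifetime risk as a share of
# the 2012 lifetime risk gives the proportion of diagnoses that are
# overdiagnoses (figures taken from the abstract).
lifetime_risk_2012 = 0.196  # 19.6%, i.e., 1 in 5
excess_risk_2012 = 0.082    # 8.2%, adjusted for changing competing mortality

overdiagnosed_share = excess_risk_2012 / lifetime_risk_2012
print(f"{overdiagnosed_share:.1%}")  # 41.8%, consistent with the reported 41%
```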
Publisher: Elsevier BV
Date: 10-2007
DOI: 10.1016/J.JCLINEPI.2007.01.018
Abstract: To determine whether individual patient data meta-analyses (IPDMA) are used to perform subgroup analyses and to study whether the analytical methods regarding subgroup analyses differ between IPDMA and conventional meta-analyses (CMA). IPDMA were identified with a comprehensive literature search; subsequently, CMA on similar research questions were traced. Methods for studying subgroups were compared for IPDMA and CMA that were matched with respect to domain, type of treatment, and outcome measure. Of all 171 identified IPDMA and 102 CMA, 80% and 45% presented subgroup analyses, respectively. For 35 IPDMA and 37 "matched" CMA, subgroup analytic methods could be compared. The number of performed subgroup analyses did not differ between IPDMA and CMA. Both IPDMA and CMA often do not report adequate information on methods of analyses. Interaction tests were often not performed in IPDMA (69%) and individual patient data were often not directly modelled (74%). Many IPDMA performed subgroup analyses, but overall treatment effects were more emphasized than subgroup effects. To study subgroups, a wide variety of analytical methods was used in both IPDMA and CMA. In general, the use and reporting of appropriate methods for subgroup analyses should be promoted. Recommendations for improvement of methods of analyses are provided.
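The interaction test the review found was often omitted compares two subgroup effect estimates on the log scale. A minimal sketch; the function name `interaction_p` and the subgroup values are illustrative assumptions, not data from the review:

```python
# Test of interaction between two subgroup effect estimates on the log scale
# (e.g., log odds ratios), the analysis often omitted in the IPDMA reviewed
# here. Function name and numbers are illustrative assumptions.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def interaction_p(b1: float, se1: float, b2: float, se2: float) -> float:
    """Two-sided p-value for the difference between two estimates."""
    z = abs(b1 - b2) / sqrt(se1**2 + se2**2)
    return 2.0 * (1.0 - phi(z))

# Hypothetical subgroup log odds ratios with their standard errors:
p = interaction_p(-0.5, 0.15, -0.1, 0.20)
print(round(p, 3))  # ~0.11: little evidence the subgroup effects differ
```

A subgroup claim supported only by one significant and one non-significant subgroup, without such a test, is exactly the pattern the authors caution against.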
Publisher: The Royal Australian College of General Practitioners
Date: 02-2022
Publisher: Cold Spring Harbor Laboratory
Date: 29-07-2020
DOI: 10.1101/2020.07.27.20163204
Abstract: Public cooperation to practice preventive health behaviours is essential to manage the transmission of infectious diseases such as COVID-19. We aimed to investigate beliefs about COVID-19 diagnosis, transmission and prevention that have the potential to impact the uptake of recommended public health strategies. An online cross-sectional survey conducted May 8 to May 11 2020. A national sample of 1500 Australian adults with representative quotas for age and gender provided by an online panel provider. Proportion of participants with correct/incorrect knowledge of COVID-19 preventive behaviours and reasons for misconceptions. Of the 1802 potential participants contacted, 289 were excluded, 13 declined, and 1500 participated in the survey (response rate 83%). Most participants correctly identified “washing your hands regularly with soap and water” (92%) and “staying at least 1.5m away from others” (90%) could help prevent COVID-19. Over 40% (incorrectly) considered wearing gloves outside of the home would prevent them contracting COVID-19. Views about face masks were divided. Only 66% of participants correctly identified that “regular use of antibiotics” would not prevent COVID-19. Most participants (90%) identified “fever, fatigue and cough” as indicators of COVID-19. However, 42% of participants thought that being unable to “hold your breath for 10 seconds without coughing” was an indicator of having the virus. The most frequently reported sources of COVID-19 information were commercial television channels (56%), the Australian Broadcasting Corporation (43%), and the Australian Government COVID-19 information app (31%). Public messaging about hand hygiene and physical distancing to prevent transmission appears to have been effective. However, there are clear, identified barriers for many individuals that have the potential to impede uptake or maintenance of these behaviours in the long-term. 
Currently these non-drug interventions are our only effective strategy to combat this pandemic. Ensuring ongoing adherence to them is critical. The current strategies to prevent the transmission of COVID-19 are behavioural (hand hygiene, physical distancing, quarantining and testing if symptomatic) and rely on the public knowledge and subsequent practice of these strategies. Previous research has demonstrated a good level of public knowledge of COVID-19 symptoms and preventive behaviours but a wide variation in practicing the recommended behaviours. Although knowledge can facilitate behaviour change, knowledge alone is insufficient to reliably change behaviour to the widespread extent required to combat health crises. Participants reveal confusion about whether wearing masks will reduce transmission, apprehension about attending health services, and perceptions that antibiotics and alternative remedies (such as essential oils) prevent transmission. Analysis of why participants hold these beliefs revealed two dominant themes: an incomplete or inaccurate understanding of how COVID-19 is transmitted, and the belief that the behaviours were unnecessary. This study underlines the necessity not only to target public messaging at effective preventive behaviours, but also to enhance behaviour change by clearly explaining why each behaviour is important.
Publisher: BMJ
Date: 13-08-2012
DOI: 10.1136/BMJ.E5047
Publisher: AMPCo
Date: 10-2018
DOI: 10.5694/MJA17.01138
Publisher: Wiley
Date: 27-07-2018
Publisher: Springer Science and Business Media LLC
Date: 04-08-2011
DOI: 10.1038/JHH.2011.72
Abstract: Blood pressure (BP) screening is important to identify those at risk of cardiovascular disease, but there has been little data on the appropriate interval of screening. We aimed to evaluate the optimal interval and the best measure for BP re-screening by estimating the long-term, true change variance ('signal') and short-term, within-person variance ('noise'). Study design was a cohort study from 2005 to 2008. Target population was Japanese healthy adults not taking antihypertensive medication at baseline, in a teaching hospital. We measured annually the systolic BP (SBP) and the diastolic BP (DBP), and calculated the pulse pressure (PP) and the mean arterial pressure (MAP). A total of 15,055 individuals (51% male) with a mean age of 49 years had annual check-ups. Short-term coefficient of variation was lowest for MAP at 5.2%, followed by SBP (5.7%) and DBP (5.8%), and highest for PP (12%). After 3 years, the 'signal' of true BP changes of only SBP and MAP equaled the 'noise' of BP measurement; however, it was larger for those with higher initial BPs. SBP or MAP appears to be a better screening measure. The optimal interval should be 3 years or more for those with SBP <130 mm Hg and 2 years for those with SBP ≥130 mm Hg.
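The short-term 'noise' reported here is a within-person coefficient of variation (standard deviation divided by the mean). A small illustration with hypothetical repeated systolic readings for one person, not data from the study:

```python
# Within-person 'noise' summarised as a coefficient of variation
# (CV = SD / mean). The repeated systolic BP readings (mm Hg) below are
# hypothetical, for illustration only.
from statistics import mean, stdev

readings = [124, 131, 127, 122, 130]
cv = stdev(readings) / mean(readings)
print(f"{cv:.1%}")  # 3.0% for this illustrative series
```

Re-screening is only informative once the 'signal' of true change exceeds this measurement noise, which is the basis for the 3-year interval suggested above.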
Publisher: Springer Science and Business Media LLC
Date: 05-2011
Publisher: BMJ
Date: 04-10-1997
Publisher: Wiley
Date: 12-2009
DOI: 10.1111/J.1365-2753.2009.01243.X
Abstract: Rationale and aim: The rapidly changing knowledge base of clinical practice highlights the need to keep abreast of knowledge changes that are most relevant for the practitioner. We aimed to develop a model for reflection on clinical practice that identified the key elements of medical knowledge needed for good medical practice. Method: The dual theory of cognition, an integration of intuitive and analytic processes, provided the framework for the study. The design looked at the congruence between the clinical thinking process and the dual theory. A one-year study was conducted in general practice clinics in Oxfordshire, UK. Thirty-five general practitioners participated in 20-minute interviews to discuss how they worked through recently seen clinical cases. Over a one-year period 72 cases were recorded from 35 interviews. These were categorized according to emerging themes, which were manually coded and substantiated with verbatim quotations. Results: There was a close fit between the dual theory and participants' clinical thinking processes. This included instant problem framing, consistent with automatic intuitive thinking, focusing on the risk and urgency of the case. Salient features accounting for these choices were recognizable. There was a second reflective phase, leading to the review of initial judgements. Conclusions: The proposed model highlights the critical steps in decision making. This allows regular recalibration of knowledge that is most critical at each of these steps. In line with good practice, the model also links the crucial knowledge used in decision making, to value judgments made in relation to the patient.
Publisher: Walter de Gruyter GmbH
Date: 27-01-2003
Publisher: Annals of Family Medicine
Date: 11-2019
DOI: 10.1370/AFM.2445
Publisher: Informa UK Limited
Date: 26-02-2014
DOI: 10.3109/02770903.2014.887728
Abstract: To evaluate the effectiveness of clinical pathways (CPs) for paediatric asthma on length of hospital stay, additional visits due to asthma exacerbations, hospital cost, and the manpower and workload required for implementing CPs. Studies were eligible if they met the following criteria: children (≦18 years) with asthma, hospital or emergency department based, and study designs were (1) randomised controlled trial, (2) controlled clinical trial or (3) controlled before and after study. Two reviewers independently screened references, extracted data and assessed the risk of bias. We resolved disagreement by discussion between authors. Due to an insufficient number of studies and the heterogeneity of interventions and outcomes, we conducted a narrative systematic review with forest plots but did not pool results. About 3155 relevant articles were identified through a literature search; 628 were removed as duplicates, 2037 were excluded based on review of titles and abstracts, and 117 were excluded because they did not meet inclusion criteria. Seven studies involving 2600 participants met the inclusion criteria. Using asthma CPs may decrease the length of hospital stay; however, CPs did not appear to reduce additional visits due to asthma exacerbations or reduce hospital costs. No eligible studies were found that quantified the manpower and workload for implementing CPs. Current studies suggest CPs may reduce the length of hospital stay, but insufficient evidence is available on total costs or readmissions to justify extensive uptake of asthma CPs in paediatric inpatient care. Higher quality, large randomised controlled trials are required that measure costs and a wider range of outcomes.
Publisher: Elsevier BV
Date: 10-2018
DOI: 10.1016/J.JCLINEPI.2018.06.014
Abstract: To study the statistical power of randomized clinical trials and examine developments over time. We analyzed the statistical power in 136,212 clinical trials between 1975 and 2014 extracted from meta-analyses from the Cochrane database of systematic reviews. We determined study power to detect standardized effect sizes, where power was based on the meta-analyzed effect size. Average power, effect size, and temporal patterns were examined for all meta-analyses and a subset of significant meta-analyses. The number of trials with power ≥80% was low (7%) but increased over time: from 5% in 1975-1979 to 9% in 2010-2014. In significant meta-analyses, the proportion of trials with sufficient power increased from 9% to 15% in these years (median power increased from 16% to 23%). This increase was mainly due to increasing sample sizes, while effect sizes remained stable with a median Cohen's h of 0.09 (interquartile range 0.04-0.22) and a median Cohen's d of 0.20 (0.11-0.40). This study demonstrates that sufficient power in clinical trials is still problematic, although the situation is slowly improving. Our data encourage further efforts to increase statistical power in clinical trials to guarantee rigorous and reproducible evidence-based medicine.
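Power for a standardized effect size can be approximated with the normal model for a two-arm trial. A sketch under that assumption (not the authors' exact method, which computed power against meta-analyzed effects):

```python
# Approximate power of a two-arm trial to detect a standardized effect size
# d (Cohen's d) with n participants per arm, two-sided alpha = 0.05, using
# the normal approximation. Illustrative sketch only.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d: float, n_per_arm: int) -> float:
    z_crit = 1.959964  # critical value for two-sided alpha = 0.05
    return phi(abs(d) * sqrt(n_per_arm / 2.0) - z_crit)

# The median effect size reported here (d ~ 0.20) is badly underpowered at
# common sample sizes but adequately powered near 400 per arm:
print(round(power_two_sample(0.20, 100), 2))  # ~0.29
print(round(power_two_sample(0.20, 400), 2))  # ~0.81
```

This makes the abstract's finding concrete: with stable effect sizes around d = 0.20, only substantially larger samples move trials past the 80% power threshold.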
Publisher: Royal College of General Practitioners
Date: 09-12-2021
Abstract: Non-bullous impetigo is typically treated with antibiotics. However, the duration of symptoms without their use has not been established, which hinders informed decision making about antibiotic use. To determine the natural history of non-bullous impetigo. Systematic review. The authors searched PubMed up to January 2020, as well as reference lists of articles identified in the search. Eligible studies involved participants with impetigo in either the placebo group of randomised trials, or in single-group prognostic studies that did not use antibiotics and measured time to resolution or improvement. A modified version of a risk of bias assessment for prognostic studies was used. Outcomes were percentage of participants who had either symptom resolution, symptom improvement, or failed to improve at any timepoint. Adverse event data were also extracted. Seven randomised trials (557 placebo group participants) were identified. At about 7 days, the percentage of participants classified as resolved ranged from 13% to 74% across the studies, whereas the percentage classified as ‘failure to improve’ ranged from 16% to 41%. The rate of adverse effects was low. Incomplete reporting of some details limited assessment of risk of bias. Although some uncertainty around the natural history of non-bullous impetigo remains, symptoms resolve in some patients by about 7 days without using antibiotics, with about one-quarter of patients not improving. Immediate antibiotic use may not be mandatory, and discussions with patients should include the expected course of untreated impetigo and careful consideration of the benefits and harms of antibiotic use.
Publisher: Wiley
Date: 17-10-1996
Publisher: Elsevier BV
Date: 06-2003
DOI: 10.1016/S1076-6332(03)80086-7
Abstract: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, and analysis of such studies. The authors sought to develop guidelines for improving the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers better to assess the validity and generalizability of study results. The Standards for Reporting of Diagnostic Accuracy group steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and to extract potential guidelines for authors and editors. An extensive list of items was prepared. Members of the steering committee then met for 2 days with other researchers, editors, methodologists, statisticians, and members of professional organizations to develop a checklist and a prototypical flowchart to guide authors and editors of studies of diagnostic accuracy. The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which the group produced an initial list of 75 items. This list was honed to 25 key items by group consensus and on the basis of published research on bias. A prototypical flowchart was developed as a tool for conveying information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference test, or both. Potential users reviewed the conference version of the checklist and flowchart and provided additional suggestions, which were then incorporated. Use of these carefully developed, consensus-based guidelines should enable clearer and more complete reporting of studies of diagnostic accuracy, as well as better reader understanding of the validity and generalizability of study results.
Publisher: American College of Physicians
Date: 02-12-2008
DOI: 10.7326/0003-4819-149-11-200812020-00009
Abstract: The evaluation of claims that a new diagnostic test is better than the current gold standard test is hindered by the lack of a perfect reference judge. However, this problem may be sidestepped by focusing on the clinical consequences of the decision rather than on estimation of accuracy. Consequences can be assessed by use of a "fair umpire" test that is not perfect yet can discriminate between disease and nondisease cases and is not biased in favor of 1 test. This article discusses 3 principles to aid judgments about the value of new tests. First, the consequences are best examined in cases with disagreement between the current and new tests. Second, resolving these disagreements requires a fair, but not necessarily perfect, umpire test. Finally, umpire tests include consequences, such as prognosis and response to treatment, as well as causal exposures and other test results.
Publisher: BMJ
Date: 21-12-2022
DOI: 10.1136/BMJEBM-2021-111767
Abstract: To investigate the decisional impact of an age-based chart of kidney function decline to support general practitioners (GPs) to appropriately interpret estimated glomerular filtration rate (eGFR) and identify patients with a clinically relevant kidney problem. Randomised vignette study of 372 Australian GPs, conducted from August 2018 to November 2018. GPs were given two patient case scenarios: (1) an older woman with reduced but stable renal function and (2) a younger Aboriginal man with declining kidney function still in the normal range. One group was given an age-based chart of kidney function to assist their assessment of the patient (initial chart group); the second group was asked to assess the patients without the chart, and then again using the chart (delayed chart group). GPs’ assessment of the likelihood—on a Likert scale—that the patients had chronic kidney disease (CKD) according to the usual definition or a clinical problem with their kidneys. Prior to viewing the age-based chart, GPs were evenly distributed as to whether they thought case 1—the older woman—had CKD or a clinically relevant kidney problem. GPs who had initial access to the chart were less likely to think that the older woman had CKD, and less likely to think she had a clinically relevant problem with her kidneys, than GPs who had not viewed the chart. After subsequently viewing the chart, 14% of GPs in the delayed chart group changed their opinion, to indicate she was unlikely to have a clinically relevant problem with her kidneys. Prior to viewing the chart, the majority of GPs (66%) thought case 2—the younger man—did not have CKD, and were evenly distributed as to whether they thought he had a clinically relevant kidney problem. In contrast, GPs who had initial access to the chart were more likely to think he had CKD, and the majority (72%) thought he had a clinically relevant kidney problem. 
After subsequently viewing the chart, 37% of GPs in the delayed chart group changed their opinion to indicate he likely had a clinically relevant problem with his kidneys. Use of the chart changed GPs’ interpretation of eGFR, with increased recognition of the younger male patient’s clinically relevant kidney problem, and increased numbers classifying the older female patient’s kidney function as normal for her age. This study has shown the potential of an age-based kidney function chart to reduce both overdiagnosis and underdiagnosis.
Publisher: Elsevier BV
Date: 10-2021
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.JCLINEPI.2019.03.007
Abstract: The objective of this study was to present ways to graphically represent a number needed to treat (NNT) in (network) meta-analysis (NMA). A barrier to using NNT in NMA when an odds ratio (OR) or risk ratio (RR) is used is the determination of a single control event rate (CER). We discuss approaches to calculate a CER, and illustrate six graphical methods for NNT from NMA. We illustrate the graphical approaches using an NMA of cognitive enhancers for Alzheimer's dementia. The NNT calculation using a relative effect measure, such as OR and RR, requires a CER value, but different CERs, including mean CER across studies, pooled CER in meta-analysis, and expert opinion-based CER may result in different NNTs. An NNT from NMA can be presented in a bar plot, Cates plot, or forest plot for a single outcome, and a bubble plot, scatterplot, or rank-heat plot for ≥2 outcomes. Each plot is associated with different properties and can serve different needs. Caution is needed in NNT interpretation, as considerations such as selection of effect size and CER, and CER assumption across multiple comparisons, may impact NNT and decision-making. The proposed graphs are helpful to interpret NNTs calculated from (network) meta-analyses.
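The dependence of NNT on the assumed control event rate (CER) can be made concrete: NNT = 1 / (CER − EER), where the experimental event rate (EER) is derived from the RR or OR. A sketch with illustrative values; the function names and event rates are assumptions, not figures from the study:

```python
# NNT from a relative effect plus an assumed control event rate (CER),
# the calculation whose CER-dependence the abstract discusses.
# Function names and event rates are illustrative.

def nnt_from_rr(rr: float, cer: float) -> float:
    """NNT = 1 / ARR, with the experimental event rate EER = RR * CER."""
    return 1.0 / (cer - rr * cer)

def nnt_from_or(odds_ratio: float, cer: float) -> float:
    """Convert the OR to an EER on the odds scale, then take 1 / ARR."""
    odds = cer / (1.0 - cer)
    eer = odds_ratio * odds / (1.0 + odds_ratio * odds)
    return 1.0 / (cer - eer)

# The same OR yields very different NNTs at different CERs, which is why
# the choice of a single CER matters in network meta-analysis:
print(round(nnt_from_or(0.6, 0.30)))  # 10
print(round(nnt_from_or(0.6, 0.10)))  # 27
```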
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 02-2018
Publisher: Elsevier BV
Date: 02-2008
Publisher: AMPCo
Date: 03-2012
DOI: 10.5694/MJA11.10364
Abstract: Effective clinical practice is predicated on valid and relevant clinical science - a commodity in increasingly short supply. The pre-eminent place of clinical research has become tainted by methodological shortcomings, commercial influences and neglect of the needs of patients and clinicians. Researchers need to be more proactive in evaluating clinical interventions in terms of patient-important benefit, wide applicability and comparative effectiveness, and in adopting study designs and reporting standards that ensure accurate and transparent research outputs. Funders of research need to be more supportive of applied clinical research that rigorously evaluates effectiveness of new treatments and synthesises existing knowledge into clinically useful systematic reviews. Several strategies for improving the state of the science are possible but their implementation requires the collective action of all those undertaking and reporting clinical research.
Publisher: AMPCo
Date: 10-2012
DOI: 10.5694/MJA11.10365
Abstract: Published research evidence does not automatically diffuse into clinical practice but requires active processes of translation that start with clinicians' awareness of the science and end with patient adherence to the recommended care. Many barriers thwart the uptake of valid and clinically important research into practice, with cognitive, motivational and sociological factors on the part of health professionals being among the most important. Encouraging clinicians to question the level of scientific certainty underpinning clinical practice and to actively seek evidence that may better inform clinical decisions is a priority for improving health care effectiveness. Although there are effective strategies for improving translation of research into practice, implementing them requires agreement between and buy-in from professional and managerial stakeholders.
Publisher: SAGE Publications
Date: 31-10-2011
Abstract: A challenge of health technology assessment is integrating the information from different disciplines. This talk focuses on the evidence-based medicine perspective and challenges 3 assumptions of health technology assessment: assumptions about effectiveness, assumptions about coverage by health technology assessment, and assumptions about costs being immutable. Challenging these assumptions has several implications. First is the need for better evidence on effects: both low-volume, high-cost technologies and low-cost, high-volume technologies that are ineffective drains on health care systems’ resources. Second, cheap but effective technologies should be better promoted, as they can displace high-cost technologies. Finally, for effective but expensive technologies, we should work to lower the price and/or costs.
Publisher: American Physical Society (APS)
Date: 11-07-2019
Publisher: BMJ
Date: 2004
DOI: 10.1136/EBM.9.1.8
Publisher: Springer Science and Business Media LLC
Date: 25-11-2017
Publisher: BMJ
Date: 17-11-2015
Publisher: Springer Science and Business Media LLC
Date: 12-2017
Publisher: Elsevier BV
Date: 12-2011
DOI: 10.1016/J.JCLINEPI.2011.03.017
Abstract: This article deals with inconsistency of relative (rather than absolute) treatment effects in binary/dichotomous outcomes. A body of evidence is not rated up in quality if studies yield consistent results, but may be rated down in quality if inconsistent. Criteria for evaluating consistency include similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity and I(2). To explore heterogeneity, systematic review authors should generate and test a small number of a priori hypotheses related to patients, interventions, outcomes, and methodology. When inconsistency is large and unexplained, rating down quality for inconsistency is appropriate, particularly if some studies suggest substantial benefit, and others no effect or harm (rather than only large vs. small effects). Apparent subgroup effects may be spurious. Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction; subgroup comparisons come from within rather than between studies; tests of interaction generate low P-values; and the subgroup effects have a biological rationale.
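The I(2) statistic mentioned above is derived from Cochran's Q: I(2) = (Q − df) / Q, truncated at zero. A minimal worked sketch with hypothetical effect estimates, not data from the article:

```python
# Cochran's Q and the I^2 heterogeneity statistic referred to in the
# consistency criteria: I^2 = (Q - df) / Q, truncated at zero.
# Effect estimates (log relative risks) and standard errors are hypothetical.

def i_squared(effects, std_errors):
    weights = [1.0 / se**2 for se in std_errors]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    if q == 0.0:
        return 0.0  # no observed heterogeneity at all
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0  # percent of variation beyond chance

print(round(i_squared([-0.4, -0.1, 0.2], [0.1, 0.1, 0.1]), 1))  # 88.9
```

Here three precisely estimated but widely spread effects yield a high I(2), the pattern that would trigger rating down for inconsistency if unexplained.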
Publisher: BMJ
Date: 06-2006
DOI: 10.1136/EBM.11.3.69
Publisher: Royal College of General Practitioners
Date: 31-10-2019
Publisher: Cold Spring Harbor Laboratory
Date: 15-05-2020
DOI: 10.1101/2020.05.10.20097543
Abstract: The prevalence of true asymptomatic COVID-19 cases is critical to policy makers considering the effectiveness of mitigation measures against the SARS-CoV-2 pandemic. We aimed to synthesize all available research on the asymptomatic rates and transmission rates where possible. We searched PubMed, Embase, Cochrane COVID-19 trials, and Europe PMC (which covers pre-print platforms such as MedRxiv). We included primary studies reporting on asymptomatic prevalence where: (a) the sample frame includes an at-risk population, and (b) there was sufficiently long follow up to identify pre-symptomatic cases. Meta-analysis used fixed effect and random effects models. We assessed risk of bias by a combination of questions adapted from risk of bias tools for prevalence and diagnostic accuracy studies. We screened 2,454 articles and included 13 low risk-of-bias studies from seven countries that tested 21,708 at-risk people, of which 663 were positive and 111 were asymptomatic. Diagnosis in all studies was confirmed using an RT-PCR test. The proportion of asymptomatic cases ranged from 4% to 41%. Meta-analysis (fixed effect) found that the proportion of asymptomatic cases was 17% (95% CI: 14%-20%) overall; higher in aged care, 20% (14%-27%), and lower in non-aged care, 16% (13%-20%). Five studies provided direct evidence of forward transmission of the infection by asymptomatic cases. Overall, there was a 42% lower relative risk of asymptomatic transmission compared to symptomatic transmission (combined relative risk 0.58, 95% CI 0.335-0.994, p=0.047). Our estimates of the prevalence of asymptomatic COVID-19 cases and asymptomatic transmission rates are lower than many highly publicized studies, but still sufficient to warrant policy attention. Further robust epidemiological evidence is urgently needed, including in sub-populations such as children, to better understand the importance of asymptomatic cases for driving spread of the pandemic.
OB is supported by NHMRC Grant APP1106452. PG is supported by NHMRC Australian Fellowship grant 1080042. KB is supported by NHMRC Investigator grant 1174523. All authors had full access to all data and agreed to the final manuscript being submitted for publication. There was no funding source for this study.
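The fixed-effect meta-analysis described above pools study-level proportions by inverse-variance weighting, conventionally on the logit scale. A minimal sketch of that pooling; the counts below are hypothetical illustrations, not the review's data:

```python
import math

def pooled_proportion_fixed(counts):
    """Inverse-variance fixed-effect pooling of proportions on the logit scale.

    counts: list of (events, n) pairs, one per study.
    Returns (pooled proportion, 95% CI lower, 95% CI upper).
    """
    num = den = 0.0
    for events, n in counts:
        p = events / n
        logit = math.log(p / (1 - p))
        var = 1 / events + 1 / (n - events)  # approximate variance of the logit
        w = 1 / var                          # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    inv = lambda x: 1 / (1 + math.exp(-x))   # back-transform logit -> proportion
    return inv(pooled), inv(pooled - 1.96 * se), inv(pooled + 1.96 * se)

# Hypothetical studies as (asymptomatic cases, people tested) pairs.
est, lo, hi = pooled_proportion_fixed([(20, 120), (15, 90), (40, 250)])
```

A random-effects model would widen the interval by adding a between-study variance component to each weight; the fixed-effect version assumes one common underlying proportion.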
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 10-05-2016
DOI: 10.1161/CIRCULATIONAHA.115.018580
Abstract: We aimed to assess the long-term effects of treatment with statin therapy on all-cause mortality, cause-specific mortality, and cancer incidence from extended follow-up of the Long-term Intervention with Pravastatin in Ischemic Disease (LIPID) trial. LIPID initially compared pravastatin and placebo over 6 years in 9014 patients with previous coronary heart disease. After the double-blind period, all patients were offered open-label statin therapy. Data were obtained over a further 10 years from 7721 patients, by direct contact for 2 years, by questionnaires thereafter, and from mortality and cancer registries. During extended follow-up, 85% assigned pravastatin and 84% assigned placebo took statin therapy. Patients assigned pravastatin maintained a significantly lower risk of death from coronary heart disease (relative risk [RR], 0.89; 95% confidence interval [CI], 0.81−0.97; P=0.009), from cardiovascular disease (RR, 0.88; 95% CI, 0.81−0.95; P=0.002), and from any cause (RR, 0.91; 95% CI, 0.85−0.97; absolute risk reduction, 2.6%; P=0.003). Cancer incidence was similar by original treatment group during the double-blind period (RR, 0.94; 95% CI, 0.82–1.08; P=0.41), later follow-up (RR, 1.02; 95% CI, 0.91–1.14; P=0.74), and overall (RR, 0.99; 95% CI, 0.91–1.08; P=0.83). There were no significant differences in cancer mortality, or in the incidence of organ-specific cancers. Cancer findings were confirmed in a meta-analysis with other large statin trials with extended follow-up. In LIPID, the absolute survival benefit from 6 years of pravastatin treatment appeared to be maintained for the next 10 years, with a similar risk of death among survivors in both groups after the initial period. Treatment with statins does not influence cancer or death from noncardiovascular causes during long-term follow-up.
Publisher: Elsevier BV
Date: 12-2003
DOI: 10.1016/S0965-2299(03)00122-5
Abstract: To investigate the effectiveness of valerian for the management of chronic insomnia in general practice. Valerian versus placebo in a series of n-of-1 trials, in Queensland, Australia. Of 42 enrolled patients, 24 (57%) had sufficient data for inclusion into the n-of-1 analysis. Response to valerian was fair for 23 (96%) participants evaluating their "energy level in the previous day" but poor or modest for all 24 (100%) participants' response to "total sleep time" and for 23 (96%) participants' response to "number of night awakenings" and "morning refreshment". As a group, the proportion of treatment successes ranged from 0.35 (95% CI 0.23, 0.47) to 0.55 (95% CI 0.43, 0.67) for the six elicited outcome sleep variables. There was no significant difference in the number (P=0.06), distribution (P=1.00) or severity (P=0.46) of side effects between valerian and placebo treatments. Valerian was not shown to be appreciably better than placebo in promoting sleep or sleep-related factors for any individual patient or for all patients as a group.
Publisher: Wiley
Date: 02-1999
DOI: 10.1046/J.1440-1754.1999.T01-1-00344.X
Abstract: The research literature provides surprisingly little evidence of benefit for initially treating acute otitis media in children with antibiotics. We show how to calculate the amount of benefit and harm from the evidence, and how this might be applied to change management practice.
Publisher: BMJ
Date: 21-01-2011
DOI: 10.1136/BMJ.D12
Abstract: To estimate the accuracy of monitoring cholesterol concentration for detecting non-adherence to lipid lowering treatment. Secondary analysis of data on cholesterol concentration in the LIPID (long term intervention with pravastatin in ischaemic disease) study by using three measures of non-adherence: discontinuation of treatment, allocation to placebo arm, less than 80% of pills taken. Randomised placebo controlled trial in Australia and New Zealand. 9014 patients with previous coronary heart disease. Pravastatin 40 mg or placebo daily. Sensitivity, specificity, area under the receiver operating characteristics (ROC) curve, post-test probability. Monitoring of cholesterol concentration had modest ability for detecting complete non-adherence. One year after the start of treatment, half (1957/3937) of the non-adherent patients and 6% (253/3944) of adherent patients had a rise in concentration of low density lipoprotein cholesterol. Accuracy was reasonable (area under the curve 0.89). Cholesterol monitoring, however, had weak ability for detecting partial non-adherence. One year after the start of treatment, 16% (34/213) of partially adherent and 4% (155/3585) of fully adherent patients had a rise in concentration of low density lipoprotein cholesterol. Accuracy was poor (area under the curve 0.65). For typical pre-test probabilities of non-adherence ranging from low (25%) to high (75%), the post-test probabilities indicate continuing uncertainty after lipid testing. A patient with no change in low density lipoprotein cholesterol concentration has a post-test probability of being completely non-adherent of between 67% and 95% and a post-test probability of being partially non-adherent of between 48% and 89%. A patient with a decrease in concentration of 1.0 mmol/L has a post-test probability of being completely non-adherent of between 7% and 40% and a post-test probability of being partially non-adherent of between 21% and 71%. 
Monitoring concentration of low density lipoprotein (or total) cholesterol has modest ability to detect complete non-adherence or non-persistence with pravastatin treatment and weak ability to detect partial non-adherence. Results of monitoring should be considered as no more than an adjunct to careful discussion with patients about adherence.
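The post-test probabilities reported above follow from Bayes' theorem applied on the odds scale: post-test odds = pre-test odds × likelihood ratio, with the likelihood ratios derived from sensitivity and specificity. A minimal sketch; the sensitivity and specificity values in the example are illustrative assumptions, not the study's exact figures:

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Post-test probability of a condition (here, non-adherence) given a
    test result, via likelihood ratios on the odds scale."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    if positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative: pre-test probability 25%, a test that flags half of the
# non-adherent (sensitivity 0.50) and 6% of the adherent (specificity 0.94).
p_after_positive = post_test_probability(0.25, 0.50, 0.94)                  # rises well above 0.25
p_after_negative = post_test_probability(0.25, 0.50, 0.94, positive=False)  # falls below 0.25
```

The same arithmetic explains why a modestly accurate test leaves "continuing uncertainty": a moderate likelihood ratio shifts the odds, but not decisively.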
Publisher: BMJ
Date: 23-03-2006
Publisher: JMIR Publications Inc.
Date: 14-06-2002
DOI: 10.2196/49942
Publisher: The Royal Australian College of General Practitioners
Date: 02-2021
Publisher: CSIRO Publishing
Date: 2019
DOI: 10.1071/PYV25N3ABS
Publisher: Wiley
Date: 05-10-2016
DOI: 10.1111/HEX.12493
Publisher: Springer Science and Business Media LLC
Date: 04-11-2011
DOI: 10.1007/S00125-010-1951-1
Abstract: Fenofibrate caused an acute, sustained plasma creatinine increase in the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) and Action to Control Cardiovascular Risk in Diabetes (ACCORD) studies. We assessed fenofibrate's renal effects overall and in a FIELD washout sub-study. Type 2 diabetic patients (n = 9,795) aged 50 to 75 years were randomly assigned to fenofibrate (n = 4,895) or placebo (n = 4,900) for 5 years, after 6 weeks fenofibrate run-in. Albuminuria (urinary albumin/creatinine ratio measured at baseline, year 2 and close-out) and estimated GFR, measured four to six monthly according to the Modification of Diet in Renal Disease Study, were pre-specified endpoints. Plasma creatinine was re-measured 8 weeks after treatment cessation at close-out (washout sub-study, n = 661). Analysis was by intention-to-treat. During fenofibrate run-in, plasma creatinine increased by 10.0 μmol/l (p < 0.001), but quickly reversed on placebo assignment. It remained higher on fenofibrate than on placebo, but the chronic rise was slower (1.62 vs 1.89 μmol/l annually, p = 0.01), with less estimated GFR loss (1.19 vs 2.03 ml min(-1) 1.73 m(-2) annually, p < 0.001). After washout, estimated GFR had fallen less from baseline on fenofibrate (1.9 ml min(-1) 1.73 m(-2), p = 0.065) than on placebo (6.9 ml min(-1) 1.73 m(-2), p < 0.001), sparing 5.0 ml min(-1) 1.73 m(-2) (95% CI 2.3-7.7, p < 0.001). Greater preservation of estimated GFR with fenofibrate was observed with baseline hypertriacylglycerolaemia (n = 169 vs 491 without) alone, or combined with low HDL-cholesterol (n = 140 vs 520 without) and reductions of ≥ 0.48 mmol/l in triacylglycerol over the active run-in period (pre-randomisation) (n = 356 vs 303 without). 
Fenofibrate reduced urine albumin concentrations and hence the albumin/creatinine ratio by 24% vs 11% (mean difference 14% [95% CI 9-18]; p < 0.001), with 14% less progression and 18% more albuminuria regression (p < 0.001) than in participants on placebo. End-stage renal event frequency was similar (n = 21 vs 26, p = 0.48). Fenofibrate reduced albuminuria and slowed estimated GFR loss over 5 years, despite initially and reversibly increasing plasma creatinine. Fenofibrate may delay albuminuria and GFR impairment in type 2 diabetes patients. Confirmatory studies are merited. ISRCTN64783481.
Publisher: American Physical Society (APS)
Date: 04-09-2019
Publisher: No publisher found
Date: 2012
Publisher: JMIR Publications Inc.
Date: 28-07-2021
Abstract: Mental disorders are a leading cause of distress and disability worldwide. To meet patient demand, there is a need for increased access to high-quality, evidence-based mental health care. Telehealth has become well established in the treatment of illnesses, including mental health conditions. This study aims to conduct a robust evidence synthesis to assess whether there is evidence of differences between telehealth and face-to-face care for the management of less common mental and physical health conditions requiring psychotherapy. In this systematic review, we included randomized controlled trials comparing telehealth (telephone, video, or both) versus the face-to-face delivery of psychotherapy for less common mental health conditions and physical health conditions requiring psychotherapy. The psychotherapy delivered had to be comparable between the telehealth and face-to-face groups, and it had to be delivered by general practitioners, primary care nurses, or allied health staff (such as psychologists and counselors). Patient (symptom severity, overall improvement in psychological symptoms, and function), process (working alliance and client satisfaction), and financial (cost) outcomes were included. A total of 12 randomized controlled trials were included, with 931 patients in aggregate. Therapies included cognitive behavioral and family therapies delivered in populations encompassing addiction disorders, eating disorders, childhood mental health problems, and chronic conditions. Telehealth was delivered by video in 7 trials, by telephone in 3 trials, and by both in 1 trial, and the delivery mode was unclear in 1 trial. The risk of bias for the 12 trials was low or unclear for most domains, except for the lack of the blinding of participants, owing to the nature of the comparison.
There were no significant differences in symptom severity between telehealth and face-to-face therapy immediately after treatment (standardized mean difference [SMD] 0.05, 95% CI −0.17 to 0.27) or at any other follow-up time point. Similarly, there were no significant differences immediately after treatment between telehealth and face-to-face care delivery on any of the other outcomes meta-analyzed, including overall improvement (SMD 0.00, 95% CI −0.40 to 0.39), function (SMD 0.13, 95% CI −0.16 to 0.42), working alliance client (SMD 0.11, 95% CI −0.34 to 0.57), working alliance therapist (SMD −0.16, 95% CI −0.91 to 0.59), and client satisfaction (SMD 0.12, 95% CI −0.30 to 0.53), or at any other time point (3, 6, and 12 months). With regard to effectively treating less common mental health conditions and physical conditions requiring psychological support, there is insufficient evidence of a difference between psychotherapy delivered via telehealth and the same therapy delivered face-to-face. However, there was no includable evidence in this review for some serious mental health conditions, such as schizophrenia and bipolar disorders, and further high-quality research is needed to determine whether telehealth is a viable, equivalent treatment option for these conditions.
Publisher: Wiley
Date: 20-09-2019
Publisher: F1000 Research Ltd
Date: 11-11-2019
DOI: 10.12688/F1000RESEARCH.21145.1
Abstract: Background: The impact of school holidays on influenza rates has been sparsely documented in Australia. In 2019, the early winter influenza season coincided with mid-year school breaks, affording us the unusual opportunity to examine how influenza incidence changed during school closure dates. Methods: The weekly influenza data from five Australian state and one territory health departments for the period of week 19 (mid-May) to week 35 (early September) 2019 were compared to each state’s public school closure dates. We used segmented regression to model the weekly counts and a negative binomial distribution to account for overdispersion due to autocorrelation. The models’ goodness-of-fit was assessed by plots of observed versus expected counts, plots of residuals versus predicted values, and Pearson’s Chi-square test. The main exposure was the July two-week school vacation period, using a lag of one week. The effect is estimated as a percent change in incidence level, and in slope. We also dichotomized the change in weekly counts into decreases versus increases (or no change). The proportion of decreases were then compared for each of three periods (pre-vacation, vacation, post-vacation) using Fisher’s exact test. Results: School holidays were associated with significant declines in influenza incidence. The models showed acceptable goodness-of-fit. The numbers and percentages of decreases in weekly influenza counts from the previous week for all states combined were: 19 (33%) pre-vacation; 11 (92%) during the vacation; and 19 (59%) post-vacation (P=0.0002). The first decline during school holidays is seen in the school aged (5-19 years) population, with the declines in the adult and infant populations being smaller and following a week later. Conclusions: Given the significant and rapid reductions in incidence, these results have important public health implications.
Closure or extension of holiday periods could be an emergency option for state governments.
Publisher: Royal College of General Practitioners
Date: 12-2007
Publisher: BMJ
Date: 2012
Publisher: Wiley
Date: 20-09-2019
Publisher: AMPCo
Date: 09-1986
Publisher: Elsevier BV
Date: 06-2020
Publisher: Elsevier BV
Date: 06-2020
Publisher: BMJ
Date: 13-05-2017
DOI: 10.1136/HEARTJNL-2017-311244
Abstract: To systematically review current evidence regarding the minimum acceptable risk reduction of a cardiovascular event that patients feel would justify daily intake of a preventive medication. We used the Web of Science to track the forward and backward citations of a set of five key articles until 15 November 2016. Studies were eligible if they quantitatively assessed the minimum acceptable benefit, in absolute values, of a cardiovascular disease preventive medication among a sample of the general population and required participants to choose if they would consider taking the medication. Of 341 studies screened, we included 22, involving a total of 17 751 participants: 6 studied prolongation of life (POL), 12 studied absolute risk reduction (ARR) and 14 studied number needed to treat (NNT) as measures of risk reduction communicated to the patients. In studies framed using POL, 39%-54% (average: 48%) of participants would consider taking a medication if it prolonged life by <8 months and 56%-73% (average: 64%) if it prolonged life by ≥8 months. In studies framed using ARR, 42%-72% (average: 54%) of participants would consider taking a medication that reduces their 5-year cardiovascular disease (CVD) risk by 30 and 46%-87% (average: 71%) with an NNT of ≤30. Many patients require a substantial risk reduction before they consider taking a daily medication worthwhile, even when the medication is described as being side effect free and costless.
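The two benefit framings in this review, absolute risk reduction and number needed to treat, are algebraically linked: the NNT is the reciprocal of the ARR. A small illustration with invented risk figures:

```python
def arr_from_risks(baseline_risk, treated_risk):
    """Absolute risk reduction: difference between event risks without and with treatment."""
    return baseline_risk - treated_risk

def nnt_from_arr(arr):
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1 / arr

# Invented example: 5-year CVD risk falls from 10% to 7% on treatment,
# an ARR of 3 percentage points, i.e. roughly 33 people treated over
# 5 years per event prevented.
arr = arr_from_risks(0.10, 0.07)
nnt = nnt_from_arr(arr)
```

This equivalence is why an NNT threshold of ≤30 corresponds to an ARR of at least about 3.3 percentage points over the same time horizon.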
Publisher: Wiley
Date: 26-02-2021
DOI: 10.1002/ASE.2049
Publisher: SAGE Publications
Date: 06-08-2015
Abstract: Background. Cardiovascular disease (CVD) prevention guidelines are generally based on the absolute risk of a CVD event, but there is increasing interest in using ‘heart age’ to motivate lifestyle change when absolute risk is low. Previous studies have not compared heart age to 5-year absolute risk, or investigated the impact of younger heart age, graphical format, and numeracy. Objective. Compare heart age versus 5-year absolute risk on psychological and behavioral outcomes. Design. 2 (heart age, absolute risk) × 3 (text only, bar graph, line graph) experiment. Setting. Online. Participants. 570 Australians aged 45–64 years, not taking CVD-related medication. Intervention. CVD risk assessment. Measurements. Intention to change lifestyle, recall, risk perception, emotional response, perceived credibility, and lifestyle behaviors after 2 weeks. Results. Most participants had lifestyle risk factors (95%) but low 5-year absolute risk (94%). Heart age did not improve lifestyle intentions and behaviors compared to absolute risk, was more often interpreted as a higher-risk category by low-risk participants (47% vs 23%), and decreased perceived credibility and positive emotional response. Overall, correct recall dropped from 65% to 24% after 2 weeks, with heart age recalled better than absolute risk at 2 weeks (32% vs 16%). These results were found across younger and older heart age results, graphical format, and numeracy. Limitations. Communicating CVD risk in a consultation rather than online may produce different results. Conclusions. There is no evidence that heart age motivates lifestyle change more than 5-year absolute risk in individuals with low CVD risk. Five-year absolute risk may be a better way to explain CVD risk, because it is more credible, does not inflate risk perception, and is consistent with clinical guidelines that base lifestyle and medication recommendations on absolute risk.
Publisher: AMPCo
Date: 10-2013
DOI: 10.5694/MJA13.10133
Abstract: To identify factors that influence the extent to which general practitioners use absolute risk (AR) assessment in cardiovascular disease (CVD) risk assessment. Semi-structured interviews with 25 currently practising GPs from eight Divisions of General Practice in New South Wales, Australia, between October 2011 and May 2012. Data were analysed using framework analysis. The study identified five strategies that GPs use with patients in different situations, defined in terms of the extent to which AR was used and the reasons given for this: the AR-focused strategy, used when AR assessment was considered useful for the patient; the AR-adjusted strategy, used to account for additional risk factors such as family history; the clinical judgement strategy, used when GPs considered that their judgement took multiple risk factors into account as effectively as AR; the passive disregard strategy, used when GPs lacked sufficient time, access or experience to use AR; and the active disregard strategy, used when AR was considered to be inappropriate for the patient. The strategies were linked with different opportunity, capability and motivation barriers to the use of AR. This study provides an in-depth insight into the factors that influence GPs' use of AR in CVD risk assessment. The results suggest that GPs use a range of strategies in different situations, so different approaches may be required to improve the use of AR guidelines in practice.
Publisher: Oxford University Press (OUP)
Date: 26-10-2011
DOI: 10.1093/IJE/DYR031
Publisher: Public Library of Science (PLoS)
Date: 27-05-2014
Publisher: Wiley
Date: 24-11-2021
DOI: 10.1002/NAU.24839
Abstract: Biological rationale suggests that parasympathomimetics (cholinergic receptor stimulating agents) could be beneficial for patients with underactive bladder. However, no systematic review with meta‐analysis addressing potential benefits or adverse effects exists. The aim of this review was to assess the effectiveness, both benefits and harms, of using parasympathomimetics for the treatment of underactive bladder. The protocol was registered in PROSPERO, and searches undertaken in PubMed, Embase, and CENTRAL, including randomized and non‐randomized controlled trials of patients with underactive bladder, comparing parasympathomimetic to placebo, no treatment, or other pharmaceuticals. Risk ratios, odds ratios, and mean differences were calculated. Twelve trials with 3024 participants were included. There was a significant difference between parasympathomimetics and comparators (favoring parasympathomimetics) in the number of patients with urinary retention (risk ratio 0.55, 95% confidence interval [CI] 0.3–0.98, p = 0.04, low quality of evidence). There was no difference in mean postvoid volume overall (MD −41.4 ml, 95% CI −92.0 to 9.1, p = 0.11, low quality of evidence). There was a significant difference at up to 1 week post‐intervention, favoring parasympathomimetics (MD −77.5 ml, 95% CI −90.9 to −64.1, p < 0.001, low quality of evidence), but no difference at 1 month post‐intervention. There was no difference in adverse events (odds ratio 1.19, 95% CI 0.62–2.28, p = 0.6, moderate quality of evidence). The evidence supporting the use of parasympathomimetics is of low quality, with relatively short follow‐up durations. Overall, it is not possible to draw clear evidence‐based conclusions from the current literature, presenting the use of parasympathomimetics for treating underactive bladder as a key area that requires future well‐controlled clinical trials.
Publisher: BMJ
Date: 10-05-2007
Publisher: AMPCo
Date: 1995
Publisher: Center for Open Science
Date: 23-01-2023
Abstract: The primary objective of this trial is to estimate the effectiveness of audit and feedback for reducing requests for 10 commonly overused combinations of pathology tests by high-requesting Australian general practitioners (GPs) compared with no intervention control. This includes requests for any combination of 2 or 3 pathology tests for Iron Studies, Thyroid Stimulating Hormone, Thyroid Function Tests, Vitamin D and Vitamin B12. A secondary objective is to evaluate which forms of audit and feedback are most effective in reducing overuse of the pathology test combinations. This 2x2x2 factorial cluster randomised controlled trial allocated clusters of general practices based on geographical location with at least one GP who was in the top 10% of requesters for 10 targeted combinations of pathology tests and for at least 2 of the individual pathology test combinations between 1 July 2019 and 30 June 2021. Only high-requesting GPs within participating practices were included. The trial will be conducted between 12 May 2022 and 11 May 2023, with final follow-up on 11 August 2023. Eligible clusters were simultaneously randomised on 12 May 2022 to 1 of 8 different individualised written audit and feedback interventions that varied factorially by (1) invitation to participate in CPD-accredited education (yes vs no), (2) provision of cost information on pathology test combinations (yes vs no), and (3) format of feedback (pamphlet vs letter) or to a no intervention control. Participants were not blinded to allocation. The primary outcome is the overall rate of requesting of any of the displayed combinations of pathology tests by each GP per 1,000 category 1 consultations over 6 months using routinely collected Medicare Benefits Schedule data. Primary analyses will include all randomised GPs who have at least one category 1 consultation during the 12-month study period and will be conducted by statisticians blinded to group allocation.
Publisher: Elsevier BV
Date: 1991
Publisher: JMIR Publications Inc.
Date: 04-10-2019
Abstract: Evidence of the effectiveness of mobile health (mHealth) apps, as well as their usability as non-drug interventions in primary care, is emerging around the globe. This study aimed to explore the feasibility of mHealth app prescription by general practitioners (GPs) and to evaluate the effectiveness of an implementation intervention to increase app prescription. A single-group, before-and-after study was conducted in Australian general practice. GPs were given prescription pads for 6 mHealth apps and reported the number of prescriptions dispensed for 4 months. After the reporting of month 2, a 2-minute video of one of the apps was randomly selected and sent to each GP. Data were collected through a prestudy questionnaire, monthly electronic reporting, and end-of-study interviews. The primary outcome was the number of app prescriptions (total, monthly, per GP, and per GP per fortnight). Secondary outcomes included confidence in prescribing apps (0-5 scale), the impact of the intervention video on subsequent prescription numbers, and acceptability of the interventions. Of 40 GPs recruited, 39 commenced, and 36 completed the study. In total, 1324 app prescriptions were dispensed over 4 months. The median number of apps prescribed per GP was 30 (range 6-111 apps). The median number of apps prescribed per GP per fortnight increased from the pre-study level of 1.7 to 4.1. Confidence about prescribing apps doubled from a mean of 2 (not so confident) to 4 (very confident). App videos did not affect subsequent prescription rates substantially. Post-study interviews revealed that the intervention was highly acceptable. Health app prescription in general practice is feasible, and our implementation intervention was effective in increasing app prescription. GPs need more tailored education and training on the value of mHealth apps and knowledge of prescribable apps to be able to successfully change their prescribing habits to include apps.
The future of sustainable and scalable app prescription requires a trustworthy electronic app repository of prescribable mHealth apps for GPs.
Publisher: Wiley
Date: 17-10-2012
Publisher: American College of Physicians
Date: 11-2000
Publisher: American Psychological Association (APA)
Date: 03-2015
DOI: 10.1037/HEA0000122
Abstract: Although current guidelines around the world recommend using absolute risk (AR) thresholds to decide whether cardiovascular disease (CVD) risk should be managed with lifestyle or medication, the use of AR in clinical practice is limited. The aim of this study was to explore the factors that influence general practitioner (GP) and patient decision making about CVD risk management, including the role of risk perception. Qualitative descriptive study involving semi-structured interviews with 25 GPs and 38 patients in Australia in 2011-2012. Transcribed audio-recordings were thematically coded and a Framework Analysis method was used. GPs rarely mentioned AR thresholds but were influenced by their subjective perception of the patient's risk and motivation, and their own attitudes toward prevention, including concerns about medication side effects and the efficacy of lifestyle change. Patients were influenced by individual risk factors, their own motivation to change lifestyle, and attitudes toward medication: initially negative, but this improved if medication was more effective than lifestyle. High perceived risk led to medication being recommended by GPs and accepted by patients, but this was not necessarily based on AR. Patient perceptions of high risk also increased motivation to change lifestyle, particularly if they were resistant to the idea of taking medication. Perceived risk, motivation, and attitudes appeared to be more important than AR thresholds in this study. CVD risk management guidelines could be more useful if they include strategies to help GPs consider patients' risk perception, motivation, and attitudes as well as evidence-based recommendations.
Publisher: Wiley
Date: 03-2001
DOI: 10.1046/J.1365-2125.2001.00347.X
Abstract: To evaluate whether a year long clinical pharmacy program involving development of professional relationships, nurse education on medication issues, and individualized medication reviews could change drug use, mortality and morbidity in nursing home residents. A cluster randomised controlled trial, where an intervention home was matched to three control homes, was used to examine the effect of the clinical pharmacy intervention on resident outcomes. The study involved 905 residents in 13 intervention nursing homes and 2325 residents in 39 control nursing homes in south-east Queensland and north-east New South Wales, Australia. The outcome measures were: continuous drug use data from government prescription subsidy claims, cross-sectional drug use data on prescribed and administered medications, deaths and morbidity indices (hospitalization rates, adverse events and disability indices). This intervention resulted in a reduction in drug use with no change in morbidity indices or survival. Differences in nursing home characteristics, as defined by cluster analysis with SUDAAN, negated intervention-related apparent significant improvements in survival. The use of benzodiazepines, nonsteroidal anti-inflammatory drugs, laxatives, histamine H2-receptor antagonists and antacids was significantly reduced in the intervention group, whereas the use of digoxin and diuretics remained similar to controls. Overall, drug use in the intervention group was reduced by 14.8% relative to the controls, equivalent to an annual prescription saving of A$64 per resident (approximately £25). This intervention improved nursing home resident outcomes related to changes in drug use and drug-related expenditure.
The continuing divergence in both drug use and survival at the end of the study suggests that the difference would have been more significant in a larger and longer study, and even more so using additional instruments specific for measuring outcomes related to changes in drug use.
Publisher: American College of Physicians
Date: 2007
Publisher: Springer Science and Business Media LLC
Date: 06-07-2009
Abstract: Randomized clinical trials (RCTs) stopped early for benefit often receive great attention and affect clinical practice, but pose interpretational challenges for clinicians, researchers, and policy makers. Because the decision to stop the trial may arise from catching the treatment effect at a random high, truncated RCTs (tRCTs) may overestimate the true treatment effect. The Study Of Trial Policy Of Interim Truncation (STOPIT-1), which systematically reviewed the epidemiology and reporting quality of tRCTs, found that such trials are becoming more common, but that reporting of stopping rules and decisions was often deficient. Most importantly, treatment effects were often implausibly large and inversely related to the number of events accrued. The aim of STOPIT-2 is to determine the magnitude and determinants of possible bias introduced by stopping RCTs early for benefit. We will use sensitive strategies to search for systematic reviews addressing the same clinical question as each of the tRCTs identified in STOPIT-1 and in a subsequent literature search. We will check all RCTs included in each systematic review to determine their similarity to the index tRCT in terms of participants, interventions, and outcome definition, and conduct new meta-analyses addressing the outcome that led to early termination of the tRCT. For each pair of tRCT and systematic review of corresponding non-tRCTs we will estimate the ratio of relative risks, and hence estimate the degree of bias. We will use hierarchical multivariable regression to determine the factors associated with the magnitude of this ratio. Factors explored will include the presence and quality of a stopping rule, the methodological quality of the trials, and the number of total events that had occurred at the time of truncation. Finally, we will evaluate whether Bayesian methods using conservative informative priors to "regress to the mean" overoptimistic tRCTs can correct observed biases.
A better understanding of the extent to which tRCTs exaggerate treatment effects and of the factors associated with the magnitude of this bias can optimize trial design and data monitoring charters, and may aid in the interpretation of the results from trials stopped early for benefit.
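The core STOPIT-2 comparison, a ratio of relative risks between each truncated trial and its matching meta-analysis, is computed on the log scale. The sketch below is illustrative only: the function name and all numbers are invented for demonstration (not taken from the study), and the two estimates are assumed independent.

```python
import math

def ratio_of_relative_risks(rr_trunc, ci_trunc, rr_meta, ci_meta):
    """Ratio of relative risks (tRCT vs matching meta-analysis), computed on
    the log scale with a 95% CI, assuming independent estimates."""
    log_ratio = math.log(rr_trunc) - math.log(rr_meta)
    # Back out log-scale standard errors from the 95% CIs (CI width = 2 * 1.96 * SE)
    se_trunc = (math.log(ci_trunc[1]) - math.log(ci_trunc[0])) / (2 * 1.96)
    se_meta = (math.log(ci_meta[1]) - math.log(ci_meta[0])) / (2 * 1.96)
    se = math.sqrt(se_trunc**2 + se_meta**2)
    return (math.exp(log_ratio),
            (math.exp(log_ratio - 1.96 * se), math.exp(log_ratio + 1.96 * se)))

# Illustrative numbers only (not from the study):
ratio, ci = ratio_of_relative_risks(0.50, (0.35, 0.72), 0.75, (0.65, 0.87))
```

A ratio below 1 indicates that the truncated trial estimated a larger treatment effect than the corresponding meta-analysis of non-truncated trials.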
Publisher: Elsevier BV
Date: 11-2020
Publisher: BMJ
Date: 12-2022
DOI: 10.1136/BMJOPEN-2022-066564
Abstract: Reporting guidelines can improve dissemination and application of findings and help avoid research waste. Recent studies reveal opportunities to improve primary care (PC) reporting. Despite increasing numbers of guidelines, none exists for PC research. This study aims to prioritise candidate reporting items to inform a reporting guideline for PC research. Delphi study conducted by the Consensus Reporting Items for Studies in Primary Care (CRISP) Working Group. International online survey. Interdisciplinary PC researchers and research users. We drew potential reporting items from literature review and a series of international, interdisciplinary surveys. Using an anonymous, online survey, we asked participants to vote on whether each candidate item should be included, required or recommended in a PC research reporting guideline. Items advanced to the next Delphi round if they received a sufficient percentage of votes to include. Analysis used descriptive statistics plus synthesis of free-text responses. 98/116 respondents completed round 1 (84% response rate) and 89/98 completed round 2 (91%). Respondents included a variety of healthcare professions, research roles, levels of experience and all five world regions. Round 1 presented 29 potential items, and 25 moved into round 2 after rewording and combining items and adding 2 new items. A majority of round 2 respondents voted to include 23 items (90%–100% for 11 items, 80%–89% for 3 items, 70%–79% for 3 items, 60%–69% for 3 items and 50%–59% for 3 items). Our Delphi study identified items to guide the reporting of PC research that have broad endorsement from the community of producers and users of PC research. We will now use these results to inform the final development of the CRISP guidance for reporting PC research.
Publisher: BMJ
Date: 15-02-2007
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 05-2002
DOI: 10.1097/00045391-200205000-00006
Abstract: Pain is a common problem, but unfortunately, it is one that is still notoriously neglected and poorly managed. Although it usually is not rated highly in public health statistics, it forms a substantial proportion of the everyday work of health care professionals, and thus remains a major public health burden. The first challenge in successful pain management is overcoming the ineffective learning processes most health care practitioners use to update their procedures and therapies in response to the latest research. The ready availability of over-the-counter analgesics means that much of the pain in the community is now self-medicated, and it is vital that these patients also have ready access to the latest evidence-based recommendations. Second, better methods are needed to tailor treatment to individual patients, because differences in comorbidities, drug metabolism, or the nature and severity of disease processes lead to different responses from individual patients. Such tailoring should also account for differences in side-effect profiles of the various treatment options available. Finally, even if health practitioners are aware of the latest in clinical evidence and recommended practices, they may not be able to implement the most appropriate treatment because of legal or financial barriers. This article will review these three challenges to the management of pain and discuss practical ways in which they may be handled to help reduce the burden of pain care in society.
Publisher: John Wiley & Sons, Ltd
Date: 18-10-2006
Publisher: SAGE Publications
Date: 21-08-2012
Publisher: American Astronomical Society
Date: 07-04-2017
Publisher: Elsevier BV
Date: 03-2000
Publisher: BMJ
Date: 20-07-2017
DOI: 10.1136/BMJ.J2998
Publisher: BMJ
Date: 10-2020
DOI: 10.1136/BMJOPEN-2020-039747
Abstract: Polypharmacy interventions are resource-intensive and should be targeted to those at risk of negative health outcomes. Our aim was to develop and internally validate prognostic models to predict health-related quality of life (HRQoL) and the combined outcome of falls, hospitalisation, institutionalisation and nursing care needs, in older patients with multimorbidity and polypharmacy in general practices. Design: two independent data sets, one comprising health insurance claims data (n=592 456), the other data from the PRIoritising MUltimedication in Multimorbidity (PRIMUM) cluster randomised controlled trial (n=502). Population: ≥60 years, ≥5 drugs, ≥3 chronic diseases, excluding dementia. Outcomes: combined outcome of falls, hospitalisation, institutionalisation and nursing care needs (after 6, 9 and 24 months) (claims data) and HRQoL (after 6 and 9 months) (trial data). Predictor variables in both data sets: age, sex, morbidity-related variables (disease count), medication-related variables (European Union-Potentially Inappropriate Medication list (EU-PIM list)) and health service utilisation. Predictor variables exclusively in trial data: additional socio-demographics, morbidity-related variables (Cumulative Illness Rating Scale, depression), Medication Appropriateness Index (MAI), lifestyle, functional status and HRQoL (EuroQol EQ-5D-3L). Analysis: mixed regression models, combined with stepwise variable selection, 10-fold cross validation and sensitivity analyses. The most important predictors of EQ-5D-3L at 6 months in the best model (Nagelkerke’s R² 0.507) were depressive symptoms (−2.73 (95% CI: −3.56 to −1.91)), MAI (−0.39 (95% CI: −0.7 to −0.08)) and baseline EQ-5D-3L (0.55 (95% CI: 0.47 to 0.64)). Models based on claims data and those predicting long-term outcomes based on both data sets produced low R² values.
In the claims data-based model with the highest explanatory power (R²=0.16), previous falls/fall-related injuries, previous hospitalisations, age, number of involved physicians and disease count were the most important predictor variables. The best trial data-based model predicted HRQoL after 6 months well and included parameters of well-being not found in claims data. Performance of claims data-based models and of models predicting long-term outcomes was relatively weak. For generalisability, future studies should refit the models by considering parameters representing well-being and functional status.
Publisher: American Medical Association (AMA)
Date: 11-2022
Publisher: American Medical Association (AMA)
Date: 22-06-2018
DOI: 10.1001/JAMANETWORKOPEN.2018.0281
Abstract: Evidence-based practice (EBP) is necessary for improving the quality of health care as well as patient outcomes. Evidence-based practice is commonly integrated into the curricula of undergraduate, postgraduate, and continuing professional development health programs. There is, however, inconsistency in the curriculum content of EBP teaching and learning programs. A standardized set of minimum core competencies in EBP that health professionals should meet has the potential to standardize and improve education in EBP. To develop a consensus set of core competencies for health professionals in EBP. For this modified Delphi survey study, a set of EBP core competencies that should be covered in EBP teaching and learning programs was developed in 4 stages: (1) generation of an initial set of relevant EBP competencies derived from a systematic review of EBP education studies for health professionals; (2) a 2-round, web-based Delphi survey of health professionals, selected using purposive sampling, to prioritize and gain consensus on the most essential EBP core competencies; (3) consensus meetings, both face-to-face and via video conference, to finalize the consensus on the most essential core competencies; and (4) feedback and endorsement from EBP experts. From an earlier systematic review of 83 EBP educational intervention studies, 86 unique EBP competencies were identified. In a Delphi survey of 234 participants representing a range of health professionals (physicians, nurses, and allied health professionals) who registered interest (88 [61.1%] women; mean [SD] age, 45.2 [10.2] years), 184 (78.6%) participated in round 1 and 144 (61.5%) in round 2. Consensus was reached on 68 EBP core competencies. The final set of EBP core competencies was grouped into the main EBP domains. For each key competency, a description of the level of detail or delivery was identified.
A consensus-based, contemporary set of EBP core competencies has been identified that may inform curriculum development of entry-level EBP teaching and learning programs for health professionals and benchmark standards for EBP teaching.
Publisher: SAGE Publications
Date: 11-2014
DOI: 10.1111/IJS.12374_25
Publisher: Public Library of Science (PLoS)
Date: 13-08-2013
Publisher: BMJ
Date: 17-05-2012
DOI: 10.1136/BMJ.E3223
Publisher: Elsevier BV
Date: 07-2009
Publisher: American Astronomical Society
Date: 10-2020
Abstract: We present a search for continuous gravitational waves from five radio pulsars, comprising three recycled pulsars (PSR J0437−4715, PSR J0711−6830, and PSR J0737−3039A) and two young pulsars: the Crab pulsar (J0534+2200) and the Vela pulsar (J0835−4510). We use data from the third observing run of Advanced LIGO and Virgo combined with data from their first and second observing runs. For the first time, we are able to match (for PSR J0437−4715) or surpass (for PSR J0711−6830) the indirect limits on gravitational-wave emission from recycled pulsars inferred from their observed spin-downs, and constrain their equatorial ellipticities to be less than 10⁻⁸. For each of the five pulsars, we perform targeted searches that assume a tight coupling between the gravitational-wave and electromagnetic signal phase evolution. We also present constraints on PSR J0711−6830, the Crab pulsar, and the Vela pulsar from a search that relaxes this assumption, allowing the gravitational-wave signal to vary from the electromagnetic expectation within a narrow band of frequencies and frequency derivatives.
Publisher: Oxford University Press (OUP)
Date: 29-06-2011
Abstract: Research evidence is insufficient to change physicians' behaviour. In 1996, Pathman developed a four step model: that physicians need to be aware of, agree with, adopt, and adhere to guidelines. To review evidence in different settings on the patterns of ‘leakage’ in the utilisation of clinical guidelines using Pathman’s awareness-to-adherence model. A systematic review was conducted in June 2010. Primary studies were included if they reported on rates of awareness and agreement and adoption and/or adherence. 11 primary studies were identified, reporting on 29 recommendations. Descriptive analyses of patterns and causes of leakage were tabulated and graphed. Leakage was progressive across all four steps. Median adherence from all recommendations was 34%, suggesting that potential benefits for patients from health research may be lost. There was considerable variation across different types of guidelines. Recommendations for drug interventions, vaccination and health promotion activities showed high rates of awareness. Leakage was most pronounced between adoption and adherence for drug recommendations and between awareness and agreement for medical management recommendations. Barriers were reported differentially for all steps of the model. Leakage from research publication to guideline utilisation occurs in a wide variety of clinical settings and at all steps of the awareness-to-adherence pathway. This review confirms that clinical guidelines are insufficient to implement research and suggests there may be different factors influencing clinicians at each step of this pathway. Recommendations to improve guideline adherence need to be tailored to each step.
Publisher: Wiley
Date: 03-11-2014
DOI: 10.1111/ANS.12902
Abstract: The volume of orthopaedic literature is increasing exponentially, becoming more widely scattered among journals. The rate of increase in orthopaedics is greater than other specialties. We aimed to identify the number of different journals an orthopaedic surgeon would need to read to stay up-to-date with current evidence. We searched PubMed for all orthopaedic-related systematic reviews (SR) and randomized controlled trials (RCT) published in 2011 using MESH (Medical Subject Headings) terms. The search was based on the Australian Orthopaedic Association syllabus of March 2011. The results of the search were exported to EndNote, then Microsoft Excel. We then calculated the least number of journals needed to read 25%, 50% and 100% of the articles. This was done separately for SRs and RCTs. We found 1400 orthopaedic RCTs spread over 392 journals. Ten journals contained 25% of the articles, 36 journals contained 50% and 114 journals contained 75%. Three hundred journals contained three or fewer RCTs. We found 354 orthopaedic-relevant SRs spread over 152 journals. Six journals contained 25% of the articles, 23 journals contained 50% and 63 journals contained 75%. Ninety-three journals contained only one SR. Our results demonstrate the vast scatter of orthopaedic research. Four orthopaedic RCTs are published every day. To read even 25% of the new RCTs and SRs published in orthopaedics, a surgeon would require a subscription to 13 different journals monthly, a costly and time-consuming endeavour.
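The "fewest journals needed to cover X% of articles" figures in the abstract above follow from a simple greedy calculation: sort journals by article count and accumulate from the highest-yield journal down. A minimal sketch (the function name and the toy distribution are hypothetical, not the study's data):

```python
def journals_needed(counts, fraction):
    """Fewest journals whose articles cover at least `fraction` of the total,
    reading from the highest-yield journal down."""
    counts = sorted(counts, reverse=True)
    target = fraction * sum(counts)
    covered = 0
    for n_journals, c in enumerate(counts, start=1):
        covered += c
        if covered >= target:
            return n_journals
    return len(counts)

# Toy distribution: a few high-yield journals plus a long tail of
# journals contributing one article each (100 articles over 39 journals)
toy = [30, 20, 10, 5] + [1] * 35
journals_needed(toy, 0.25)  # the top journal alone covers 30% of articles
```

The same long-tail shape drives the study's finding: a handful of journals covers a quarter of the literature, but full coverage requires subscribing to the entire tail.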
Publisher: American College of Physicians
Date: 19-05-2009
Publisher: BMJ
Date: 09-08-2016
DOI: 10.1136/BMJ.I4098
Abstract: To systematically review studies quantifying the associations of long term (clinic), mid-term (home), and short term (ambulatory) variability in blood pressure, independent of mean blood pressure, with cardiovascular disease events and mortality. Medline, Embase, Cinahl, and Web of Science, searched to 15 February 2016 for full text articles in English. Prospective cohort studies or clinical trials in adults, except those in patients receiving haemodialysis, where the condition may directly impact blood pressure variability. Standardised hazard ratios were extracted and, if there was little risk of confounding, combined using random effects meta-analysis in main analyses. Outcomes included all cause and cardiovascular disease mortality and cardiovascular disease events. Measures of variability included standard deviation, coefficient of variation, variation independent of mean, and average real variability, but not night dipping or day-night variation. 41 papers representing 19 observational cohort studies and 17 clinical trial cohorts, comprising 46 separate analyses were identified. Long term variability in blood pressure was studied in 24 papers, mid-term in four, and short-term in 15 (two studied both long term and short term variability). Results from 23 analyses were excluded from main analyses owing to high risks of confounding. Increased long term variability in systolic blood pressure was associated with risk of all cause mortality (hazard ratio 1.15, 95% confidence interval 1.09 to 1.22), cardiovascular disease mortality (1.18, 1.09 to 1.28), cardiovascular disease events (1.18, 1.07 to 1.30), coronary heart disease (1.10, 1.04 to 1.16), and stroke (1.15, 1.04 to 1.27). Increased mid-term and short term variability in daytime systolic blood pressure were also associated with all cause mortality (1.15, 1.06 to 1.26 and 1.10, 1.04 to 1.16, respectively). 
Long term variability in blood pressure is associated with cardiovascular and mortality outcomes, over and above the effect of mean blood pressure. Associations are similar in magnitude to those of cholesterol measures with cardiovascular disease. Limited data for mid-term and short term variability showed similar associations. Future work should focus on the clinical implications of assessment of variability in blood pressure and avoid the common confounding pitfalls observed to date. PROSPERO CRD42014015695.
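Combining standardised hazard ratios with random-effects meta-analysis, as in the main analyses described above, is commonly done with the DerSimonian–Laird estimator on the log scale. A self-contained sketch with invented illustrative numbers (not the review's data; the review's exact pooling software and settings are not specified here):

```python
import math

def dersimonian_laird(log_effects, ses):
    """Random-effects pooling (DerSimonian-Laird) of log-scale effect
    estimates, e.g. log hazard ratios; returns pooled effect and 95% CI."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_effects)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_effects))
    df = len(log_effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)))

# Illustrative hazard ratios and log-scale standard errors (invented):
hrs = [1.10, 1.20, 1.15]
ses = [0.05, 0.08, 0.06]
pooled_hr, ci = dersimonian_laird([math.log(h) for h in hrs], ses)
```

When the heterogeneity statistic q does not exceed its degrees of freedom, tau² is truncated at zero and the result reduces to a fixed-effect inverse-variance pool.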
Publisher: Public Library of Science (PLoS)
Date: 31-03-2016
Publisher: Oxford University Press (OUP)
Date: 30-11-2019
Abstract: Antibiotic prescribing for acute self-limiting respiratory tract infections (ARTIs) in Australia is higher than international benchmarks. Antibiotics have little or no efficacy in these conditions, and unnecessary use contributes to antibiotic resistance. Delayed prescribing has been shown to reduce antibiotic use. GP registrars are at a career-stage when long-term prescribing patterns are being established. To explore experiences, perceptions and attitudes of GP registrars and supervisors to delayed antibiotic prescribing for ARTIs. A qualitative study of Australian GP registrars and supervisors using a thematic analysis approach. GP registrars and supervisors were recruited across three Australian states/territories, using maximum variation sampling. Telephone interviews explored participants’ experience and perceptions of delayed prescribing of antibiotics in ARTIs. Data collection and analysis were concurrent and iterative. A total of 12 registrars and 10 supervisors were interviewed. Key themes included the use of delayed prescribing as a safety-net in cases of diagnostic uncertainty or when clinical review was logistically difficult. Delayed prescribing was viewed as a method of educating and empowering patients, and building trust and the doctor–patient relationship. Conversely, it was also seen as a loss of control over management decisions. Supervisors, more so than registrars, appreciated the psychosocial complexity of ARTI consultations and the importance of delayed antibiotic prescribing in this context. Better awareness and understanding by GP registrars of the evidence for delayed antibiotic prescription may be a means of reducing antibiotic prescribing. Understanding both registrar and supervisor usage, uncertainties and attitudes should inform educational approaches on this topic.
Publisher: BMJ
Date: 02-10-2015
DOI: 10.1136/BMJ.H5145
Publisher: BMJ
Date: 05-2007
Publisher: SAGE Publications
Date: 11-11-2021
Publisher: Oxford University Press (OUP)
Date: 05-2004
Publisher: No publisher found
Date: 2011
Publisher: BMJ
Date: 12-2008
Publisher: No publisher found
Date: 2016
DOI: 10.1002/JBMR.2847
Abstract: We aimed to compare the clinical validity and the detectability of response of short-term changes in bone mineral density (BMD; hip and spine) and bone turnover markers (serum PINP and CTX) through secondary analysis of trial data. We analyzed data on 7765 women with osteoporosis randomized to 5-mg once-yearly infusions of zoledronic acid or placebo in the Health Outcomes and Reduced Incidence with Zoledronic Acid Once Yearly Pivotal Fracture Trial (HORIZON-PFT; trial ran from 2002 to 2006) and the first extension trial (trial ran from 2006 to 2009). We assessed the clinical validity and detectability of response for 1-year measurements of the following monitoring tests: total hip and lumbar spine BMD, serum N-terminal propeptide of type I collagen (sPINP), and serum C-telopeptide of type I collagen (sCTX; 6-month measurement used). Clinical validity was assessed by examining prediction of clinical fracture in Cox models; detectability of response to treatment was assessed by the ratio of signal to noise, estimated from the distributions of change in the zoledronic acid and placebo groups. Baseline measurements were available for 7683 women with hip BMD, 558 with spine BMD, 1246 with sPINP, and 517 women with sCTX. Hip BMD and sPINP ranked highly for prediction of clinical fracture, whereas sPINP and sCTX ranked highly for detectability of response to treatment. Serum PINP had the highest overall ranking. In conclusion, serum PINP is potentially useful in monitoring response to zoledronic acid. Further research is needed to evaluate the effects of monitoring PINP on treatment decisions and other clinically relevant outcomes. © 2016 American Society for Bone and Mineral Research.
Publisher: Elsevier BV
Date: 02-2008
Publisher: Wiley
Date: 06-1998
Publisher: Elsevier BV
Date: 11-2010
Publisher: Springer Science and Business Media LLC
Date: 24-07-2014
Publisher: BMJ
Date: 29-04-2009
DOI: 10.1136/BMJ.B1765
Publisher: BMJ
Date: 06-08-2009
DOI: 10.1136/BMJ.B2976
Publisher: BMJ
Date: 17-05-2011
Abstract: There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources: electronic databases including Medline, Embase, Cochrane Register and reference lists up to December 2009. Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had a placebo washout from previous drugs. Office BP measurements at a minimum of two weekly intervals for a minimum of 4 weeks. An asymptotic approach model of BP response was assumed and non-linear mixed effects modelling used to calculate model parameters. Eighteen trials that recruited 4168 patients met inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve fit of the data. Incorporating the presence of a titration schedule improved model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs. 0.7 weeks; diastolic 1.4 vs. 0.7 weeks). Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding when a newly started antihypertensive agent is likely to be effective or not at controlling BP.
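The "asymptotic approach model" in the abstract above can be illustrated with one common parameterisation, an exponential approach to a maximum effect. The exact model form, the Emax value, and the function name below are assumptions for illustration, not the paper's specification; only the half-effect time (about 0.91 weeks for systolic BP) comes from the abstract.

```python
import math

def bp_response(t_weeks, e_max, t50):
    """Asymptotic approach to the maximum BP-lowering effect:
    E(t) = Emax * (1 - exp(-k*t)) with k = ln(2) / t50, so that
    half the maximum effect is reached at t = t50."""
    k = math.log(2) / t50
    return e_max * (1 - math.exp(-k * t_weeks))

# With t50 = 0.91 weeks (the pooled systolic estimate) and an
# illustrative Emax of -10 mm Hg, most of the effect is in place
# within a month of starting therapy:
effect_4w = bp_response(4, e_max=-10.0, t50=0.91)
```

Under this form, over 90% of the maximum effect is reached by four weeks, which is consistent with the abstract's conclusion that efficacy can be judged early after starting therapy.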
Publisher: Springer Science and Business Media LLC
Date: 12-2017
Publisher: Springer Science and Business Media LLC
Date: 31-07-2014
Publisher: BMJ
Date: 11-2000
DOI: 10.1136/EBM.5.6.164
Publisher: F1000 Research Ltd
Date: 26-05-2021
DOI: 10.12688/F1000RESEARCH.21145.3
Abstract: Background: The impact of school holidays on influenza rates has been sparsely documented in Australia. In 2019, the early winter influenza season coincided with mid-year school breaks, giving us an unusual opportunity to examine how influenza incidence changed during school holiday closure dates. Methods: The weekly influenza data from five Australian state and one territory health departments for the period of week 19 (mid-May) to week 39 (early October) 2019 were compared to each state’s public-school holiday closure dates. We used segmented regression to model the weekly counts and a negative binomial distribution to account for overdispersion due to autocorrelation. The models’ goodness-of-fit was assessed by plots of observed versus expected counts, plots of residuals versus predicted values, and Pearson’s Chi-square test. The main exposure was the July two-week school holiday period, using a lag of one week. The effect is estimated as a percent change in incidence level, and in slope. Results: School holidays were associated with significant declines in influenza incidence in three states and one territory, by between 41% and 65%. Two states did not show evidence of declines, although one of those states had already passed its peak by the time of the school holidays. The models showed acceptable goodness-of-fit. The first decline during school holidays is seen in the school-aged (5-19 years) population, with the declines in the adult and infant populations being smaller and following a week later. Conclusions: Given the significant and rapid reductions in incidence, these results have important public health implications. Closure or extension of holiday periods could be an emergency option for state governments.
Publisher: Elsevier BV
Date: 04-2011
DOI: 10.1016/J.JCLINEPI.2010.09.012
Abstract: GRADE requires a clear specification of the relevant setting, population, intervention, and comparator. It also requires specification of all important outcomes--whether evidence from research studies is, or is not, available. For a particular management question, the population, intervention, and outcome should be sufficiently similar across studies that a similar magnitude of effect is plausible. Guideline developers should specify the relative importance of the outcomes before gathering the evidence and again when evidence summaries are complete. In considering the importance of a surrogate outcome, authors should rate the importance of the patient-important outcome for which the surrogate is a substitute and subsequently rate down the quality of evidence for indirectness of outcome.
Publisher: F1000 Research Ltd
Date: 08-10-2020
DOI: 10.12688/F1000RESEARCH.21145.2
Abstract: Background: The impact of school holidays on influenza rates has been sparsely documented in Australia. In 2019, the early winter influenza season coincided with mid-year school breaks, giving us an unusual opportunity to examine how influenza incidence changed during school holiday closure dates. Methods: The weekly influenza data from five Australian state and one territory health departments for the period of week 19 (mid-May) to week 39 (early October) 2019 were compared to each state’s public-school holiday closure dates. We used segmented regression to model the weekly counts and a negative binomial distribution to account for overdispersion due to autocorrelation. The models’ goodness-of-fit was assessed by plots of observed versus expected counts, plots of residuals versus predicted values, and Pearson’s Chi-square test. The main exposure was the July two-week school holiday period, using a lag of one week. The effect is estimated as a percent change in incidence level, and in slope. Results: School holidays were associated with significant declines in influenza incidence in three states and one territory, by between 41% and 65%. Two states did not show evidence of declines, although one of those states had already passed its peak by the time of the school holidays. The models showed acceptable goodness-of-fit. The first decline during school holidays is seen in the school-aged (5-19 years) population, with the declines in the adult and infant populations being smaller and following a week later. Conclusions: Given the significant and rapid reductions in incidence, these results have important public health implications. Closure or extension of holiday periods could be an emergency option for state governments.
Publisher: Annals of Family Medicine
Date: 2022
DOI: 10.1370/AFM.2755
Publisher: American College of Physicians
Date: 07-01-2003
DOI: 10.7326/0003-4819-138-1-200301070-00010
Abstract: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in the study and to evaluate its generalizability. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, methodologists and statisticians, and members of professional organizations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. The consensus meeting shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of the clinicians, researchers, reviewers, journals, and the public.
Publisher: BMJ
Date: 20-04-2009
DOI: 10.1136/BMJ.B946
Publisher: AMPCo
Date: 10-2015
DOI: 10.5694/MJA15.00164
Abstract: To elicit the views of well informed community members on the ethical obligations of general practitioners regarding prostate-specific antigen (PSA) testing, and what should be required before a man undergoes a PSA test. Three community juries held at the University of Sydney over 6 months in 2014. Forty participants from New South Wales, of diverse social and cultural backgrounds and with no experience of prostate cancer, recruited through public advertising: two juries of mixed gender and ages; one all-male jury of PSA screening age. In contrast to Royal Australian College of General Practitioners guidelines, the three juries concluded that GPs should initiate discussions about PSA testing with asymptomatic men over 50 years of age. The mixed juries voted for GPs offering detailed information about all potential consequent benefits and harms before PSA testing, and favoured a cooling-off period before undertaking the test. The all-male jury recommended a staggered approach to providing information. They recommended that written information be available to those who wanted it, but eight of the 12 jurors thought that doctors should discuss the benefits and harms of biopsy and treatment only after a man had received an elevated PSA test result. Informed jury participants preferred that GPs actively supported individual men in making decisions about PSA testing, and that they allowed a cooling-off period before testing. However, men of screening age argued that uncertain and detailed information should be communicated only after receiving an elevated PSA test result.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 10-2010
DOI: 10.1161/HYPERTENSIONAHA.110.153817
Abstract: After starting antihypertensives, blood pressure is monitored for several reasons, including assessment of adherence. We aimed to estimate the accuracy of blood pressure monitoring for detecting early nonadherence. We conducted a secondary analysis of the Perindopril Protection Against Recurrent Stroke Study (PROGRESS), a large randomized trial of blood pressure lowering to reduce the risk of recurrent stroke. We compared change in blood pressure 3 months after randomization in people who had discontinued treatment (nonadherent) with those who stayed on treatment (adherent). We also used an indirect method, assessing whether change in blood pressure discriminated between active (adherent) and placebo (nonadherent) groups. Both methods gave similar results. For the 3433 subjects, the mean (SD) of the change in systolic blood pressure was −15.8 mm Hg (SD 18.7 mm Hg) in the adherent group and −4.2 mm Hg (SD 18.1 mm Hg) in the nonadherent group. After recalibration of the mean change in the nonadherent group to 0 mm Hg and in the adherent group to −11.6 mm Hg, the absence of a fall in systolic blood pressure at 3 months had a sensitivity of 50% and a specificity of 80% for detecting nonadherence (50% of nonadherent patients and 20% of adherent patients had a rise in blood pressure). Discriminatory power was modest over the range of cutoffs (area under the receiver–operator curve 0.67). Monitoring blood pressure is poor at detecting nonadherence to blood pressure–lowering treatment. Further research should look at other methods of assessing adherence.
Publisher: AMPCo
Date: 09-2001
Publisher: American Physical Society (APS)
Date: 04-12-2019
Publisher: AMPCo
Date: 02-2001
Publisher: SAGE Publications
Date: 26-10-2021
Publisher: American College of Physicians
Date: 03-2004
Publisher: Springer Science and Business Media LLC
Date: 17-03-2020
DOI: 10.1186/S13643-020-01296-8
Abstract: Unwanted anticholinergic effects are both underestimated and frequently overlooked. Failure to identify adverse drug reactions (ADRs) can lead to prescribing cascades and the unnecessary use of over-the-counter products. The objective of this systematic review and meta-analysis is to explore and quantify the frequency and severity of ADRs associated with amitriptyline vs. placebo in randomized controlled trials (RCTs) involving adults with any indication, as well as healthy individuals. A systematic search in six electronic databases, forward/backward searches, manual searches, and searches for Food and Drug Administration (FDA) and European Medicines Agency (EMA) approval studies, will be performed. Placebo-controlled RCTs evaluating amitriptyline in any dosage, regardless of indication and without restrictions on the time and language of publication, will be included, as will healthy individuals. Studies of topical amitriptyline, combination therapies, or those including fewer than 100 participants will be excluded. Two investigators will screen the studies independently, assess methodological quality, and extract data on design, population, intervention, and outcomes ((non-)anticholinergic ADRs, e.g., symptoms, test results, and adverse drug events (ADEs) such as falls). The primary outcome will be the frequency of anticholinergic ADRs as a binary outcome (absolute number of patients with/without anticholinergic ADRs) in amitriptyline vs. placebo groups. Anticholinergic ADRs will be defined by an experienced clinical pharmacologist, based on literature and data from Martindale: The Complete Drug Reference. Secondary outcomes will be frequency and severity of (non-)anticholinergic ADRs and ADEs. The information will be synthesized in meta-analyses and narratives. We intend to assess heterogeneity using meta-regression (for indication, outcome, and time points) and I² statistics.
Binary outcomes will be expressed as odds ratios, and continuous outcomes as standardized mean differences. Effect measures will be provided with 95% confidence intervals. We plan sensitivity analyses to assess methodological quality, outcome reporting, etc., and subgroup analyses on age, dosage, and duration of treatment. We will quantify the frequency of anticholinergic and other ADRs/ADEs in adults taking amitriptyline for any indication by comparing rates for amitriptyline vs. placebo, hence preventing bias from disease symptoms and nocebo effects. As no standardized instrument exists to measure it, our overall estimate of anticholinergic ADRs may have limitations. Submitted to PROSPERO; assignment is in progress.
Publisher: BMJ
Date: 13-08-2010
DOI: 10.1136/BMJ.C3852
Publisher: Informa UK Limited
Date: 04-03-2023
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 11-2008
DOI: 10.1161/CIRCOUTCOMES.108.796185
Abstract: Background— To date, there has been no systematic examination of the relationship between international normalized ratio (INR) control measurements and the prediction of adverse events in patients with atrial fibrillation on oral anticoagulation. Methods and Results— We searched MEDLINE, EMBASE, and Cochrane through January 2008 for studies of atrial fibrillation patients receiving vitamin-K antagonists that reported INR control measures (percentage of time in therapeutic range [TTR] and percentage of INRs in range) and major hemorrhage and thromboembolic events. In total, 47 studies were included from 38 published articles. TTR ranged from 29% to 75%; percentage of INRs in range ranged from 34% to 84%. From studies reporting both measures, TTR significantly correlated with percentage of INRs in range (P < .001). Randomized controlled trials had better INR control than retrospective studies (64.9% versus 56.4%; P = 0.01). TTR negatively correlated with major hemorrhage (r = −0.59; P = 0.002) and thromboembolic rates (r = −0.59; P = 0.01). This effect was significant in retrospective studies (major hemorrhage, r = −0.78; P = 0.006; thromboembolic rate, r = −0.88; P = 0.03) but not in randomized controlled trials (major hemorrhage, r = 0.18; P = 0.33; thromboembolic rate, r = −0.61; P = 0.07). For retrospective studies, a 6.9% improvement in the TTR significantly reduced major hemorrhage by 1 event per 100 patient-years of treatment (95% CI, 0.29 to 1.71 events). Conclusions— In atrial fibrillation patients receiving orally administered anticoagulation treatment, TTR and percentage of INRs in range effectively predict INR control. Data from retrospective studies support the use of TTR to accurately predict reductions in adverse events.
Publisher: Radiological Society of North America (RSNA)
Date: 2003
DOI: 10.1148/RADIOL.2261021292
Abstract: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in the study and to evaluate its generalisability. The Standards for Reporting of Diagnostic Accuracy (STARD) steering group searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, using evidence, whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of the clinicians, researchers, reviewers, journals, and the public.
Publisher: Elsevier BV
Date: 02-2013
DOI: 10.1016/J.JCLINEPI.2012.01.006
Abstract: GRADE requires guideline developers to make an overall rating of confidence in estimates of effect (quality of evidence-high, moderate, low, or very low) for each important or critical outcome. GRADE suggests, for each outcome, the initial separate consideration of five domains of reasons for rating down the confidence in effect estimates, thereby allowing systematic review authors and guideline developers to arrive at an outcome-specific rating of confidence. Although this rating system represents discrete steps on an ordinal scale, it is helpful to view confidence in estimates as a continuum, and the final rating of confidence may differ from that suggested by separate consideration of each domain. An overall rating of confidence in estimates of effect is only relevant in settings when recommendations are being made. In general, it is based on the critical outcome that provides the lowest confidence.
Publisher: AMPCo
Date: 05-2002
Publisher: American Astronomical Society
Date: 20-04-2020
Publisher: Informa UK Limited
Date: 11-03-2015
DOI: 10.3109/02688697.2014.997670
Abstract: Deep brain stimulation (DBS) can provide dramatic essential tremor (ET) relief; however, no Class I evidence exists. Analysis methods: (I) traditional cohort analysis; (II) N-of-1 single-patient randomised controlled trial; and (III) signal-to-noise (S/N) analysis. Twenty DBS electrodes in ET patients were switched on and off for 3-minute periods, with six pairs of on and off periods in each case and the pair order determined randomly. Tremor severity was quantified with a tremor evaluator, and patients were blinded to stimulation. Patients also stated whether they perceived the stimulation to be on after each trial. (I) Mean end-of-trial tremor severity was 0.84 out of 10 with stimulation on and 6.62 with stimulation off (t = −13.218). Tremor reduction of at least 80% occurred in 99/114 'On' trials (87%) and 3/114 'Off' trials (3%). The S/N ratio for 80% improvement with DBS versus spontaneous improvement was 487,757-to-1. The DBS treatment effect on ET is too large for bias to be a plausible explanation. A formal N-of-1 trial design, and the S/N ratio method for presenting results, allows this to be demonstrated convincingly where conventional randomised controlled trials are not possible. This study is the first to provide Class I evidence for the efficacy of DBS for ET.
Publisher: Springer Science and Business Media LLC
Date: 08-10-2021
DOI: 10.1186/S13643-021-01822-2
Abstract: The purpose of this letter was to explore trends in methodological flaws of systematic reviews and meta-analyses (SRMAs) based on retraction notes over the past decades, and the categories of reasons for the retractions. Content analysis with descriptive statistics, the Cochran Q test, and multinomial logistic regression were used. Based on 187 records of retracted SRMAs, retraction announcements can be categorized into academic ethical violation, methodological flaw, and writing or reporting problem. The number of academic ethical violations was significantly higher than the number of methodological flaws (z = 3.51, p < 0.01) or writing problems (z = 8.58, p < 0.001). The number of methodological flaws was also higher than the number of writing problems (z = 6.47, p < 0.001). Moreover, an increased proportion of methodological flaws was observed since 2006, and retraction year was significantly associated with an increased proportion of methodological flaws, with academic ethical violation as the reference group.
Publisher: Public Library of Science (PLoS)
Date: 13-03-2018
Publisher: Springer Science and Business Media LLC
Date: 18-03-2010
Publisher: AMPCo
Date: 06-2014
DOI: 10.5694/MJA13.00130
Publisher: Public Library of Science (PLoS)
Date: 23-02-2010
Publisher: AMPCo
Date: 07-2001
Publisher: National Institute for Health and Care Research
Date: 08-2021
DOI: 10.3310/PGFAR09100
Abstract: Long-term monitoring is important in chronic condition management. Despite considerable costs of monitoring, there is no or poor evidence on how, what and when to monitor. The aim of this study was to improve understanding, methods, evidence base and practice of clinical monitoring in primary care, focusing on two areas: chronic kidney disease and chronic heart failure. The research questions were as follows: does the choice of test affect better care while being affordable to the NHS? Can the number of tests used to manage individuals with early-stage kidney disease, and hence the costs, be reduced? Is it possible to monitor heart failure using a simple blood test? Can this be done using a rapid test in a general practitioner consultation? Would changes in the management of these conditions be acceptable to patients and carers? Various study designs were employed, including cohort, feasibility study, Clinical Practice Research Datalink analysis, seven systematic reviews, two qualitative studies, one cost-effectiveness analysis and one cost recommendation. This study was set in UK primary care. Data were collected from study participants and sourced from UK general practice and hospital electronic health records, and worldwide literature. The participants were NHS patients (Clinical Practice Research Datalink: 4.5 million patients), chronic kidney disease and chronic heart failure patients managed in primary care (including 750 participants in the cohort study) and primary care health professionals. The interventions were monitoring with blood and urine tests (for chronic kidney disease) and monitoring with blood tests and weight measurement (for chronic heart failure). The main outcomes were the frequency, accuracy, utility, acceptability, costs and cost-effectiveness of monitoring. Chronic kidney disease: serum creatinine testing has increased steadily since 1997, with most results being normal (83% in 2013). 
Increases in tests of creatinine and proteinuria correspond to their introduction as indicators in the Quality and Outcomes Framework. The Chronic Kidney Disease Epidemiology Collaboration equation had 2.7% greater accuracy (95% confidence interval 1.6% to 3.8%) than the Modification of Diet in Renal Disease equation for estimating glomerular filtration rate. Estimated annual transition rates to the next chronic kidney disease stage are ≈ 2% for people with normal urine albumin, 3–5% for people with microalbuminuria (3–30 mg/mmol) and 3–12% for people with macroalbuminuria (> 30 mg/mmol). Variability in estimated glomerular filtration rate-creatinine leads to misclassification of chronic kidney disease stage in 12–15% of tests in primary care. Glycaemic-control and lipid-modifying drugs are associated with a 6% (95% confidence interval 2% to 10%) and 4% (95% confidence interval 0% to 8%) improvement in renal function, respectively. Neither estimated glomerular filtration rate-creatinine nor estimated glomerular filtration rate-cystatin C has utility in predicting rate of kidney function change. Patients viewed phrases such as ‘kidney damage’ or ‘kidney failure’ as frightening, and the term ‘chronic’ was misinterpreted as serious. Diagnosis of asymptomatic conditions (chronic kidney disease) was difficult to understand, and primary care professionals often did not use ‘chronic kidney disease’ when managing patients at early stages. General practitioners relied on Clinical Commissioning Group or Quality and Outcomes Framework alerts rather than National Institute for Health and Care Excellence guidance for information. 
Cost-effectiveness modelling did not demonstrate a tangible benefit of monitoring kidney function to guide preventative treatments, except for individuals with an estimated glomerular filtration rate of 60–90 ml/minute/1.73 m 2 , aged 70 years and without cardiovascular disease, where monitoring every 3–4 years to guide cardiovascular prevention may be cost-effective. Chronic heart failure: natriuretic peptide-guided treatment could reduce all-cause mortality by 13% and heart failure admission by 20%. Implementing natriuretic peptide-guided treatment is likely to require predefined protocols, stringent natriuretic peptide targets, relative targets and being located in a specialist heart failure setting. Remote monitoring can reduce all-cause mortality and heart failure hospitalisation, and could improve quality of life. Diagnostic accuracy of point-of-care N-terminal prohormone of B-type natriuretic peptide (sensitivity 0.99, specificity 0.60) was better than point-of-care B-type natriuretic peptide (sensitivity 0.95, specificity 0.57). Within-person variation estimates for B-type natriuretic peptide and weight were coefficients of variation of 46% and 1.2%, respectively. Point-of-care N-terminal prohormone of B-type natriuretic peptide within-person variability over 12 months was 881 pg/ml (95% confidence interval 380 to 1382 pg/ml), whereas between-person variability was 1972 pg/ml (95% confidence interval 1525 to 2791 pg/ml). For individuals, monitoring provided reassurance; future changes, such as increased testing, would be acceptable. Point-of-care testing in general practice surgeries was perceived positively, reducing waiting time and anxiety. Community heart failure nurses had greater knowledge of National Institute for Health and Care Excellence guidance than general practitioners and practice nurses. 
Health-care professionals believed that the cost of natriuretic peptide tests in routine monitoring would outweigh potential benefits. The review of cost-effectiveness studies suggests that natriuretic peptide-guided treatment is cost-effective in specialist settings, but with no evidence for its value in primary care settings. No randomised controlled trial evidence was generated. The pathways to the benefit of monitoring chronic kidney disease were unclear. It is difficult to ascribe quantifiable benefits to monitoring chronic kidney disease, because monitoring is unlikely to change treatment, especially in chronic kidney disease stages G3 and G4. New approaches to monitoring chronic heart failure, such as point-of-care natriuretic peptide tests in general practice, show promise if high within-test variability can be overcome. The following future work is recommended: improve general practitioner–patient communication of early-stage renal function decline, and identify strategies to reduce the variability of natriuretic peptide. This study is registered as PROSPERO CRD42015017501, CRD42019134922 and CRD42016046902. This project was funded by the National Institute for Health Research (NIHR) Programme Grants for Applied Research programme and will be published in full in Programme Grants for Applied Research Vol. 9, No. 10. See the NIHR Journals Library website for further project information.
Publisher: Wiley
Date: 10-10-2008
Publisher: American Academy of Pediatrics (AAP)
Date: 04-2015
Abstract: Overdiagnosis and underdiagnosis of attention-deficit/hyperactivity disorder (ADHD) are widely debated, fueled by variations in prevalence estimates across countries, time, and broadening diagnostic criteria. We conducted a meta-analysis to: establish a benchmark pooled prevalence for ADHD; examine whether estimates have increased with publication of different editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM); and explore the effect of study features on prevalence. Medline, PsycINFO, CINAHL, Embase, and Web of Science were searched for studies with point prevalence estimates of ADHD. We included studies of children that used the diagnostic criteria from DSM-III, DSM-III-R and DSM-IV in any language. Data were extracted on sampling procedure, sample characteristics, assessors, measures, and whether full or partial criteria were met. The 175 eligible studies included 179 ADHD prevalence estimates with an overall pooled estimate of 7.2% (95% confidence interval: 6.7 to 7.8), and no statistically significant difference between DSM editions. In multivariable analyses, prevalence estimates for ADHD were lower when using the revised third edition of the DSM compared with the fourth edition (P = .03) and when studies were conducted in Europe compared with North America (P = .04). Few studies used population sampling with random selection. Most were from single towns or regions, thus limiting generalizability. Our review provides a benchmark prevalence estimate for ADHD. If population estimates of ADHD diagnoses exceed our estimate, then overdiagnosis may have occurred for some children. If fewer, then underdiagnosis may have occurred.
Publisher: BMJ
Date: 12-2014
Publisher: Springer Science and Business Media LLC
Date: 15-06-2015
Publisher: Springer Science and Business Media LLC
Date: 29-05-2014
Publisher: SAGE Publications
Date: 07-2022
DOI: 10.1177/23814683221129875
Abstract: Background. Overdiagnosis is an accepted harm of cancer screening, but studies of prostate cancer screening decision aids have not examined provision of information important in communicating the risk of overdiagnosis, including overdiagnosis frequency, competing mortality risk, and the high prevalence of indolent cancers in the population. Methods. We undertook a comprehensive review of all publicly available decision aids for prostate cancer screening, published in (or translated to) the English language, without date restrictions. We included all decision aids from a recent systematic review and screened excluded studies to identify further relevant decision aids. We used a Google search to identify further decision aids not published in the peer-reviewed medical literature. Two reviewers independently screened the decision aids and extracted information on communication of overdiagnosis. Disagreements were resolved through discussion or by consulting a third author. Results. Forty-one decision aids were included out of the 80 records identified through the search. Most decision aids (n = 32, 79%) did not use the term overdiagnosis but included a description of it (n = 38, 92%). Few (n = 7, 17%) reported the frequency of overdiagnosis. Little more than half presented the benefits of prostate cancer screening before the harms (n = 22, 54%), and only 16 (39%) presented information on competing risks of mortality. Only 2 (5%) reported the prevalence of undiagnosed prostate cancer in the general population. Conclusion. Most patient decision aids for prostate cancer screening lacked important information on overdiagnosis. Specific guidance is needed on how to communicate the risks of overdiagnosis in decision aids, including appropriate content, terminology and graphical display.
Publisher: BMJ
Date: 04-2005
DOI: 10.1136/EBN.8.2.36
Publisher: Springer Science and Business Media LLC
Date: 20-02-2019
Publisher: BMJ
Date: 15-05-2008
Publisher: Royal College of General Practitioners
Date: 02-2011
Publisher: Informa UK Limited
Date: 2012
DOI: 10.1080/14739879.2012.11494085
Abstract: Much continuing medical education is known to have a limited impact on subsequent clinical behaviour. An option to improve this is to ask participants to develop specific actions about their clinical behaviour changes. We aimed to investigate the content and outcomes of GPs' action lists produced on a one-day continuing professional development (CPD) course. Actions were recorded during a one-day course, and followed up six months later. Of 1696 delegates attending the nine courses, 306 (18%) provided their action plan and 139 of these responded to the questionnaire at six months (response rate 45%). The 306 delegates recorded a total of 1443 actions (4.7 per delegate). Of these, 359 were subsequently explored by follow-up questionnaire at six months, of which 147 (41%; 95% CI 36%-46%) were 'successful', an average of 1.9 completed actions per GP. Four significant facilitators and four significant barriers to success were identified. Delegates attending the one-day CPD course recorded an average of 4.7 intended practice changes, and completed 41%. Further research is needed on how to increase the number of planned and completed actions.
Publisher: Springer Science and Business Media LLC
Date: 16-09-2019
DOI: 10.1007/S10865-019-00100-W
Abstract: Habit-based interventions are a novel and emerging strategy to help reduce excess weight in individuals with overweight or obesity. This systematic review and meta-analysis aims to determine the efficacy of habit-based interventions on weight loss. We identified potential studies through electronic searches in February 2019. Included studies were randomized/quasi-randomized controlled trials comparing weight loss interventions founded on habit theory with a control (active or non-active) and enrolled adults with overweight or obesity (body mass index ≥ 25 kg/m²).
Publisher: Springer Science and Business Media LLC
Date: 06-03-2019
Publisher: Springer Science and Business Media LLC
Date: 31-07-2009
DOI: 10.1007/S00125-009-1468-7
Abstract: We compared the effect of biphasic, basal or prandial insulin regimens on glucose control, clinical outcomes and adverse events in people with type 2 diabetes. We searched the Cochrane Library, MEDLINE, EMBASE and major American and European conference abstracts for randomised controlled trials up to October 2008. A systematic review and meta-analyses were performed. Twenty-two trials that randomised 4,379 patients were included. Seven trials reported both starting insulin dose and titration schedules. Hypoglycaemia definitions and glucose targets varied. Meta-analyses were performed pooling data from insulin-naive patients. Greater HbA(1c) reductions were seen with biphasic and prandial insulin, compared with basal insulin, of 0.45% (95% CI 0.19-0.70, p = 0.0006) and 0.45% (95% CI 0.16-0.73, p = 0.002), respectively, but with lesser reductions of fasting glucose of 0.93 mmol/l (95% CI 0.21-1.65, p = 0.01) and 2.20 mmol/l (95% CI 1.70-2.70, p < 0.00001), respectively. Larger insulin doses at study end were reported in biphasic and prandial arms compared with basal arms. No studies found differences in major hypoglycaemic events, but minor hypoglycaemic events for prandial and biphasic insulin were inconsistently reported as either higher than or equivalent to basal insulin. Greater weight gain was seen with prandial compared with basal insulin (1.86 kg, 95% CI 0.80-2.92, p = 0.0006). Greater HbA(1c) reduction may be obtained in type 2 diabetes when insulin is initiated using biphasic or prandial insulin rather than a basal regimen, but with an unquantified risk of hypoglycaemia. Studies with longer follow-up are required to determine the clinical relevance of this finding.
Publisher: National Institute for Health and Care Research
Date: 07-2009
DOI: 10.3310/HTA13320
Abstract: To assess the accuracy in diagnosing heart failure of clinical features and potential primary care investigations, and to perform a decision analysis to test the impact of plausible diagnostic strategies on costs and diagnostic yield in the UK health-care setting. MEDLINE and CINAHL were searched from inception to 7 July 2006. 'Grey literature' databases and conference proceedings were searched and authors of relevant studies contacted for data that could not be extracted from the published papers. A systematic review of the clinical evidence was carried out according to standard methods. Individual patient data (IPD) analysis was performed on nine studies, and a logistic regression model to predict heart failure was developed on one of the data sets and validated on the other data sets. Cost-effectiveness modelling was based on a decision tree that compared different plausible investigation strategies. Dyspnoea was the only symptom or sign with high sensitivity (89%), but it had poor specificity (51%). Clinical features with relatively high specificity included history of myocardial infarction (89%), orthopnoea (89%), oedema (72%), elevated jugular venous pressure (70%), cardiomegaly (85%), added heart sounds (99%), lung crepitations (81%) and hepatomegaly (97%). However, the sensitivity of these features was low, ranging from 11% (added heart sounds) to 53% (oedema). Electrocardiography (ECG), B-type natriuretic peptides (BNP) and N-terminal pro-B-type natriuretic peptides (NT-proBNP) all had high sensitivities (89%, 93% and 93% respectively). Chest X-ray was moderately specific (76-83%) but insensitive (67-68%). BNP was more accurate than ECG, with a relative diagnostic odds ratio of ECG/BNP of 0.32 (95% CI 0.12-0.87). There was no difference between the diagnostic accuracy of BNP and NT-proBNP. A model based upon simple clinical features and BNP derived from one data set was found to have good validity when applied to other data sets. 
A model substituting ECG for BNP was less predictive. From this a simple clinical rule was developed: in a patient presenting with symptoms such as breathlessness in whom heart failure is suspected, refer directly to echocardiography if the patient has a history of myocardial infarction or basal crepitations or is a male with ankle oedema; otherwise, carry out a BNP test and refer for echocardiography depending on the results of the test. On the basis of the cost-effectiveness analysis carried out, such a decision rule is likely to be considered cost-effective to the NHS in terms of cost per additional case detected. The cost-effectiveness analysis further suggested that, if likely benefit to the patient in terms of improved life expectancy is taken into account, the optimum strategy would be to refer all patients with symptoms suggestive of heart failure directly for echocardiography. The analysis suggests the need for important changes to the NICE recommendations. First, BNP (or NT-proBNP) should be recommended over ECG and, second, some patients should be referred straight for echocardiography without undergoing any preliminary investigation. Future work should include evaluation of the clinical rule described above in clinical practice.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2009
Publisher: Springer Science and Business Media LLC
Date: 05-08-2016
Publisher: American College of Physicians
Date: 17-01-2017
DOI: 10.7326/L16-0528
Publisher: Cold Spring Harbor Laboratory
Date: 27-04-2020
DOI: 10.1101/2020.04.22.20072371
Abstract: Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention for responsible research practices, including reporting guidelines, but it is unknown whether these efforts have improved RCT quality (i.e. reduced risk of bias). We therefore mapped trends over time in trial publication, trial registration, reporting according to CONSORT, and characteristics of publication and authors. Meta-information on 176,620 RCTs published between 1966 and 2018 was extracted. Risk-of-bias probability (four domains: random sequence generation, allocation concealment, blinding of patients and personnel, and blinding of outcome assessment) was assessed using validated risk-of-bias machine learning tools. In addition, trial registration and reporting according to CONSORT were assessed with automated searches. Characteristics were extracted related to publication (number of authors, journal impact factor, medical discipline) and authors (gender and Hirsch-index). The annual number of published RCTs substantially increased over four decades, accompanied by increases in the number of authors (5.2 to 7.8), institutions (2.9 to 4.8), female authors (20 to 42%, first authorship; 17 to 29%, last authorship), and Hirsch-indices (10 to 14, first authorship; 16 to 28, last authorship). Risk of bias remained present in most RCTs but decreased over time for the domains allocation concealment (63 to 51%), random sequence generation (57 to 36%), and blinding of outcome assessment (58 to 52%). Trial registration (37 to 47%) and CONSORT reporting (1 to 20%) rapidly increased in the latest period. In journals with higher impact factors, risk of bias was consistently lower, trial registration more frequent, and CONSORT more often mentioned. The likelihood of bias in RCTs has generally decreased over the last decades. 
This may be driven by increased knowledge and improved education, augmented by mandatory trial registration, and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed. This study was funded by The Netherlands Organisation for Health Research and Development (445001002).
Publisher: Springer Science and Business Media LLC
Date: 17-07-2013
Publisher: Springer Science and Business Media LLC
Date: 12-1998
Abstract: The Utility-based Quality of Life--Heart Questionnaire (UBQ-H) is a cardiovascular extension of the Health Measurement Questionnaire. It is a multidimensional instrument that can be scored to yield a utility estimate using the Rosser Index and a classification algorithm developed for the Health Measurement Questionnaire. The aim of this study was to employ a statistical modelling approach to devise an improved scoring system. A sample of 201 cardiovascular patients completed the UBQ-H and assessed the utility of their own health state using standard gamble and time trade-off questions in an interview. Two new scoring methods were devised by regressing the UBQ-H data against patients' self-assessed utilities. The new methods gave utility estimates that correlated with angina/dyspnoea grades, life satisfaction scores and General Health Questionnaire (GHQ) scores. In a second sample of 1,112 cardiovascular patients, the UBQ-H utilities were able to distinguish between patients who had/had not experienced an adverse event (e.g. myocardial infarction) and were responsive to changes in health over time. The new scoring methods were not particularly more sensitive to quality of life effects than the original method based on the Rosser Index. However, they produced significantly lower estimates and more accurately reflected patients' self-assessed utilities.
Publisher: Public Library of Science (PLoS)
Date: 17-03-2023
DOI: 10.1371/JOURNAL.PONE.0281308
Abstract: High quality clinical research that addresses important questions requires significant resources. In resource-constrained environments, projects will therefore need to be prioritized. The Australia and New Zealand Musculoskeletal (ANZMUSC) Clinical Trials Network aimed to develop a stakeholder-based, transparent, easily implementable tool that provides a score for the ‘importance’ of a research question which could be used to rank research projects in order of importance. Using a mixed-methods, multi-stage approach that included a Delphi survey, consensus workshop, inter-rater reliability testing, validity testing and calibration using a discrete-choice methodology, the Research Question Importance Tool (ANZMUSC-RQIT) was developed. The tool incorporated broad stakeholder opinion, including consumers, at each stage and is designed for scoring by committee consensus. The ANZMUSC-RQIT tool consists of 5 dimensions (compared to 6 dimensions for an earlier version of RQIT): (1) extent of stakeholder consensus, (2) social burden of health condition, (3) patient burden of health condition, (4) anticipated effectiveness of proposed intervention, and (5) extent to which health equity is addressed by the research. Each dimension is assessed by defining ordered levels of a relevant attribute and by assigning a score to each level. The scores for the dimensions are then summed to obtain an overall ANZMUSC-RQIT score, which represents the importance of the research question. The result is a score on an interval scale with an arbitrary unit, ranging from 0 (minimal importance) to 1000. The ANZMUSC-RQIT dimensions can be reliably ordered by committee consensus (ICC 0.73–0.93) and the overall score is positively associated with citation count (standardised regression coefficient 0.33, p < 0.001) and journal impact factor group (OR 6.78, 95% CI 3.17 to 14.50 for 3rd tertile compared to 1st tertile of ANZMUSC-RQIT scores) for 200 published musculoskeletal clinical trials. 
We propose that the ANZMUSC-RQIT is a useful tool for prioritising the importance of a research question.
Publisher: Public Library of Science (PLoS)
Date: 12-08-2014
Publisher: Elsevier BV
Date: 07-2014
DOI: 10.1016/J.JCLINEPI.2013.11.015
Abstract: Reports of randomized controlled trials (RCTs) should set findings within the context of previous research. The resulting network of citations would also provide an alternative search method for clinicians, researchers, and systematic reviewers seeking to base decisions on all available evidence. We sought to determine the connectedness of citation networks of RCTs by examining direct (referenced trials) and indirect (through references of referenced trials, etc) citation of trials to one another. Meta-analyses were used to create citation networks of RCTs addressing the same clinical questions. The primary measure was the proportion of networks where following citation links between RCTs identifies the complete set of RCTs, forming a single connected citation group. Other measures included the number of disconnected groups (islands) within each network, the number of citations in the network relative to the maximum possible, and the maximum number of links in the path between two connected trials (a measure of indirectness of citations). We included 259 meta-analyses with a total of 2,413 RCTs and a median of seven RCTs each. For 46% (118 of 259) of networks, the RCTs formed a single connected citation group (one island). For the other 54% of networks, where at least one RCT group was not cited by others, 39% had two citation islands and 4% (10 of 257) had 10 or more islands. On average, the citation networks had 38% of the possible citations to other trials (if each trial had cited all earlier trials). The number of citation islands and the maximum number of citation links increased with increasing numbers of trials in the network. Available evidence to answer a clinical question may be identified by using network citations created with a small initial corpus of eligible trials. However, the number of islands means that citation networks cannot be relied on for evidence retrieval.
Publisher: BMJ
Date: 08-2021
DOI: 10.1136/BMJOPEN-2020-046175
Abstract: To compare the effectiveness of hand hygiene using alcohol-based hand sanitiser to soap and water for preventing the transmission of acute respiratory infections (ARIs) and to assess the relationship between the dose of hand hygiene and the number of ARI, influenza-like illness (ILI) or influenza events. Systematic review and meta-analysis. Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Embase, Cumulative Index of Nursing and Allied Health Literature (CINAHL) and trial registries were searched in April 2020. We included randomised controlled trials that compared a community-based hand hygiene intervention (soap and water, or sanitiser) with a control, or trials that compared sanitiser with soap and water, and measured outcomes of ARI, ILI or laboratory-confirmed influenza or related consequences. Two review authors independently screened the titles and abstracts for inclusion and extracted data. Eighteen trials were included. When meta-analysed, three trials of soap and water versus control found a non-significant increase in ARI events (risk ratio (RR) 1.23, 95% CI 0.78 to 1.93); six trials of sanitiser versus control found a significant reduction in ARI events (RR 0.80, 95% CI 0.71 to 0.89). When hand hygiene dose was plotted against ARI relative risk, no clear dose–response relationship was observable. Four trials were head-to-head comparisons of sanitiser and soap and water but too heterogeneous to pool: two found a significantly greater reduction in the sanitiser group compared with the soap group and two found no significant difference between the intervention arms. Adequately performed hand hygiene, with either soap or sanitiser, reduces the risk of ARI virus transmission; however, direct and indirect evidence suggest sanitiser might be more effective in practice.
Publisher: Elsevier BV
Date: 07-2001
DOI: 10.1016/S0735-1097(01)01360-2
Abstract: We developed a prognostic strategy for quantifying the long-term risk of coronary heart disease (CHD) events in survivors of acute coronary syndromes (ACS). Strategies for quantifying long-term risk of CHD events have generally been confined to primary prevention settings. The Long-term Intervention with Pravastatin in Ischemic Disease (LIPID) study, which demonstrated that pravastatin reduces CHD events in ACS survivors with a broad range of cholesterol levels, enabled assessment of long-term prognosis in a secondary prevention setting. Based on outcomes in 8,557 patients in the LIPID study, a multivariate risk factor model was developed for prediction of CHD death or nonfatal myocardial infarction. Prognostic indexes were developed based on the model, and low-, medium-, high- and very high-risk groups were defined by categorizing the prognostic indexes. In addition to pravastatin treatment, the independently significant risk factors included: total and high density lipoprotein cholesterol, age, gender, smoking status, qualifying ACS, prior coronary revascularization, diabetes mellitus, hypertension and prior stroke. Pravastatin reduced coronary event rates in each risk level, and the relative risk reduction did not vary significantly between risk levels. The predicted five-year coronary event rates ranged from 5% to 19% for those assigned pravastatin and from 6.4% to 23.6% for those assigned placebo. Long-term prognosis of ACS survivors varied substantially according to conventional risk factor profile. Pravastatin reduced coronary risk within all risk levels however, absolute risk remained high in treated patients with unfavorable profiles. Our risk stratification strategy enables identification of ACS survivors who remain at very high risk despite statin therapy.
Publisher: BMJ
Date: 04-2006
DOI: 10.1136/EBM.11.2.35
Publisher: Springer Science and Business Media LLC
Date: 04-05-2020
DOI: 10.1186/S12916-020-01563-4
Abstract: Healthcare represents a paradox. While change is everywhere, performance has flatlined: 60% of care on average is in line with evidence- or consensus-based guidelines, 30% is some form of waste or of low value, and 10% is harm. The 60-30-10 Challenge has persisted for three decades. Current top-down or chain-logic strategies to address this problem, based essentially on linear models of change and relying on policies, hierarchies, and standardisation, have proven insufficient. Instead, we need to marry ideas drawn from complexity science and continuous improvement with proposals for creating a deep learning health system. This dynamic learning model has the potential to assemble relevant information including patients’ histories, and clinical, patient, laboratory, and cost data for improved decision-making in real time, or close to real time. If we get it right, the learning health system will contribute to care being more evidence-based and less wasteful and harmful. It will need a purpose-designed digital backbone and infrastructure, apply artificial intelligence to support diagnosis and treatment options, harness genomic and other new data types, and create informed discussions of options between patients, families, and clinicians. While there will be many variants of the model, learning health systems will need to spread, and be encouraged to do so, principally through diffusion of innovation models and local adaptations. Deep learning systems can enable us to better exploit expanding health datasets including traditional and newer forms of big and smaller-scale data, e.g. genomics and cost information, and incorporate patient preferences into decision-making. As we envisage it, a deep learning system will support healthcare’s desire to continually improve, and make gains on the 60-30-10 dimensions. 
All modern health systems are awash with data, but it is only recently that we have been able to bring these data together, operationalise them, and turn them into useful information by which to make more intelligent, timely decisions than in the past.
Publisher: BMJ
Date: 30-05-1998
DOI: 10.1136/BMJ.316.7145.1660
Publisher: BMJ
Date: 06-01-2005
Publisher: Informa UK Limited
Date: 2009
DOI: 10.3109/01421590903199650
Abstract: The evidence-based medicine (EBM) approach to clinical practice has been incorporated into medical training around the world. Whilst EBM is a component of the 'foundation years' (FY) programme, it appears to lack a firm foundation in the UK undergraduate curriculum. To identify whether the teaching of EBM is adequately supported by the guideline 'Tomorrow's Doctors' (TD-2003). We mapped TD-2003 against the five steps of EBM and also reviewed the literature for reports concerning the introduction of EBM into undergraduate curricula. Whilst all five steps of EBM can be mapped against TD-2003, the guidance makes no explicit reference to EBM and a coherent framework is lacking. The focus of undergraduate EBM teaching should be on 'using' research evidence (rather than undertaking research). The current emphasis on 'therapy' should be expanded to include the EBM-related issues of 'diagnosis, prognosis and harm'. UK medical schools also need to exploit the NHS investment in 'national electronic libraries'.
Publisher: Informa UK Limited
Date: 27-07-2023
Publisher: Center for Open Science
Date: 21-12-2022
Abstract: Background: In 2013, the Australian Diabetes In Pregnancy Society (ADIPS) recommended an expanded definition for gestational diabetes (GDM). The RACGP, among other representative groups, questioned the evidence for change and continues to recommend the previous 2-step testing process using higher glucose cut-offs. Objective: The changed definition has doubled the proportion of women diagnosed with GDM in Australia, despite Australian non-randomized studies showing no improvement in outcomes. We examined four recent large randomised trials – in the USA, New Zealand, and Iran – which evaluated the impact of GDM definitions on patient-relevant outcomes. All studies found increased numbers of women diagnosed but none found improvement in perinatal or maternal outcomes. Discussion: Given these trials’ results showing no improvement in outcomes but increased treatment, costs, concerns, and inconvenience to women, the RACGP and others should review the diagnosis of gestational diabetes. Meanwhile general practitioners should continue to offer women the 2-step process.
Publisher: Public Library of Science (PLoS)
Date: 20-03-2018
Publisher: Cold Spring Harbor Laboratory
Date: 19-06-2020
DOI: 10.1101/2020.06.16.20133207
Abstract: To identify, appraise, and synthesise studies evaluating the downsides of wearing facemasks in any setting. We also discuss potential strategies to mitigate these downsides. PubMed, Embase, CENTRAL, EuropePMC were searched (inception-18/5/2020), and clinical registries were searched via CENTRAL. We also did forward-backward citation search of the included studies. We included randomised controlled trials and observational studies comparing facemask use to any active intervention or to control. Two author pairs independently screened articles for inclusion, extracted data and assessed the quality of included studies. The primary outcomes were compliance, discomforts, harms, and adverse events of wearing facemasks. We screened 5471 articles and included 37 studies (40 references); 11 were meta-analysed. For mask wear adherence, 47% more people wore facemasks in the facemask group compared to control; adherence was significantly higher (26%) in the surgical/medical mask group than in the N95/P2 group. The largest number of studies reported on the discomfort and irritation outcome (20 studies); fewest reported on the misuse of masks, and none reported on mask contamination or risk compensation behaviour. Risk of bias was generally high for blinding of participants and personnel and low for attrition and reporting biases. There are insufficient data to quantify all of the adverse effects that might reduce the acceptability, adherence, and effectiveness of face masks. New research on facemasks should assess and report the harms and downsides. Urgent research is also needed on methods and designs to mitigate the downsides of facemask wearing, particularly the assessment of alternatives such as face shields.
Publisher: BMJ
Date: 18-11-1995
DOI: 10.1136/BMJ.311.7016.1356
Abstract: To which groups of patients can the results of clinical trials be applied? This question is often inappropriately answered by reference to the trial entry criteria. Instead, the benefit and harm (adverse events, discomfort of treatment, etc) of treatment could be assessed separately for individual patients. Patients at greatest risk of a disease will have the greatest net benefit, as benefit to patients usually increases with risk while harm remains comparatively fixed. To assess net benefit, the relative risks should come from (a meta-analysis of) randomised trials; the risk in individual patients should come from multivariate risk equations derived from cohort studies. However, before making firm conclusions, the assumptions of fixed adverse effects and constant reduction in relative risk need to be checked.
Publisher: Massachusetts Medical Society
Date: 05-05-2011
DOI: 10.1056/NEJMC1102207
Publisher: Elsevier BV
Date: 08-2019
Publisher: Wiley
Date: 22-09-2008
Publisher: BMJ
Date: 18-10-2011
Publisher: Springer Science and Business Media LLC
Date: 08-2017
Publisher: BMJ
Date: 04-08-2014
Publisher: Royal College of General Practitioners
Date: 09-2011
Publisher: Massachusetts Medical Society
Date: 05-11-1998
Publisher: BMJ
Date: 03-07-2012
DOI: 10.1136/BMJ.E4355
Publisher: BMJ
Date: 28-10-2004
Publisher: Annals of Family Medicine
Date: 03-2021
DOI: 10.1370/AFM.2609
Publisher: Ubiquity Press, Ltd.
Date: 12-08-2022
DOI: 10.5334/GH.1142
Publisher: Springer Science and Business Media LLC
Date: 04-05-2020
DOI: 10.1186/S13643-020-01351-4
Abstract: The fourth meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 5–6 November 2019 in The Hague, the Netherlands. ICASR is an interdisciplinary group whose goal is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. The group seeks to facilitate the development and acceptance of automated techniques for systematic reviews. In 2018, the major themes discussed were the transferability of automation tools (i.e., tools developed for other purposes that might be used by systematic reviewers), the automated recognition of study design in multiple disciplines and applications, and approaches for the evaluation of automation tools.
Publisher: Wiley
Date: 02-07-2018
Abstract: Deliberate clinical inertia is the art of doing nothing as a positive response. To be able to apply this concept, individual clinicians need to specifically focus on their clinical decision-making. The skill of solving problems and making optimal clinical decisions requires more attention in medical training and should play a more prominent part of the medical curriculum. This paper provides suggestions on how this may be achieved. Strategies to mitigate common biases are outlined, with an emphasis on reversing a 'more is better' culture towards more temperate, critical thinking. To incorporate such an approach in medical curricula and in clinical practice, institutional endorsement and support is required.
Publisher: BMJ
Date: 24-04-2009
DOI: 10.1136/BMJ.B1312
Publisher: Wiley
Date: 05-2015
DOI: 10.1111/JEBM.12155
Abstract: Testing Treatments is a book written to help everyone understand why testing treatments is so important, why treatment tests have to be fair, and how everyone can help to promote better research for better health care. The book proved to be very popular and its second edition has already been translated into a dozen languages, with more translations in the pipeline. The texts of the original English and all the translations are freely downloadable from Testing Treatments interactive at www.testingtreatments.org. The editors of all the different language websites have established a TTi Editorial Alliance, to share experiences and provide each other with mutual support. The TTi Editorial Alliance seeks to promote a world in which health professionals, patients and the public use reliable research to inform their health decisions. Its missions are (i) to promote a global network, involving members of the public in partnership with professionals, to communicate and discuss basic principles and general knowledge about testing treatments; (ii) to help the public increase critical thinking and skills in accessing, apprehending, appraising and using research evidence; and (iii) to help patients and the public to participate more actively in health research.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 05-11-2019
Abstract: Information is scarce regarding effects of antihypertensive medication on blood pressure variability (BPV) and associated clinical outcomes. We examined whether antihypertensive treatment changes BPV over time and whether such change (decline or increase) has any association with long-term mortality in an elderly hypertensive population. We used data from a subset of participants in the Second Australian National Blood Pressure study (n=496) aged ≥65 years who had 24-hour ambulatory blood pressure recordings at study entry (baseline) and then after a median of 2 years while on treatment (follow-up). Weighted day-night systolic BPV was calculated for both baseline and follow-up as a weighted mean of daytime and nighttime blood pressure standard deviations. The annual rate of change in BPV over time was calculated from these BPV estimates. Furthermore, we classified both BPV estimates as high and low based on the baseline median BPV value and then classified BPV changes into stable:low BPV, stable:high BPV, decline:high to low, and increase:low to high. We observed an annual decline (mean ± SD: −0.37 ± 1.95; 95% CI, −0.54 to −0.19; P < .001) in weighted day-night systolic BPV between baseline and follow-up. Having constant stable:high BPV was associated with an increase in all-cause mortality (hazard ratio: 3.03; 95% CI, 1.67–5.52) and cardiovascular mortality (hazard ratio: 3.70; 95% CI, 1.62–8.47) relative to the stable:low BPV group over a median 8.6 years after the follow-up ambulatory blood pressure monitoring. Similarly, higher risk was observed in the decline:high to low group. Our results demonstrate that in elderly hypertensive patients, average BPV declined over 2 years of follow-up after initiation of antihypertensive therapy, and having higher BPV (regardless of any change) was associated with increased long-term mortality.
Publisher: SAGE Publications
Date: 03-2013
Publisher: Elsevier BV
Date: 02-2006
DOI: 10.1016/J.SUC.2005.10.016
Abstract: The problem of lack of transfer of knowledge in surgery is well illustrated by the variations in surgical practice across areas and countries. Surgery is not unique in this respect, and such variations have been documented in virtually all specialties and in primary care. The first issue is recognition that there is an inescapable and growing information problem. Unless we focus some of our research and practice effort on better organizing, filtering, and using the research that we have, the gap between what we know and what we do will continue to grow.
Publisher: AMPCo
Date: 11-2002
Publisher: American Medical Association (AMA)
Date: 07-2017
DOI: 10.1001/JAMAINTERNMED.2017.1302
Abstract: No guidelines exist currently for guideline panels and others considering changes to disease definitions. Panels frequently widen disease definitions, increasing the proportion of the population labeled as unwell and potentially causing harm to patients. We set out to develop a checklist of issues, with guidance, for panels to consider prior to modifying a disease definition. We assembled a multidisciplinary, multicontinent working group of 13 members, including members from the Guidelines International Network, Grading of Recommendations Assessment, Development and Evaluation working group, and the World Health Organisation. We used a 5-step process to develop the checklist: (1) a literature review of issues, (2) a draft outline document, (3) a Delphi process of feedback on the list of issues, (4) a 1-day face-to-face meeting, and (5) further refinement of the checklist. The literature review identified 12 potential issues. From these, the group developed an 8-item checklist that consisted of definition changes, number of people affected, trigger, prognostic ability, disease definition precision and accuracy, potential benefits, potential harms, and the balance between potential harms and benefits. The checklist is accompanied by an explanation of each item and the types of evidence to assess each one. We used a panel's recent consideration of a proposed change in the definition of gestational diabetes mellitus (GDM) to illustrate use of the checklist. We propose that the checklist be piloted and validated by groups developing new guidelines. We anticipate that the use of the checklist will be a first step to guidance and better documentation of definition changes prior to introducing modified disease definitions.
Publisher: Springer Science and Business Media LLC
Date: 26-03-2021
DOI: 10.1186/S13063-021-05185-W
Abstract: The translation of evidence from clinical trials into practice is complex. One approach to facilitating this translation is to consider the ‘implementability’ of trials as they are designed and conducted. Implementability of trials refers to characteristics of the design, execution and reporting of a late-phase clinical trial that can influence the capacity for the evidence generated by that trial to be implemented. On behalf of the Australian Clinical Trials Alliance (ACTA), the national peak body representing networks of clinician researchers conducting investigator-initiated clinical trials, we conducted a pragmatic literature review to develop a concept map of implementability. Documents were included in the review if they related to the design, conduct and reporting of late-phase clinical trials; described factors that increased or decreased the capacity of trials to be implemented; and were published after 2009 in English. Eligible documents included systematic reviews, guidance documents, tools or primary studies (if other designs were not available). With an expert reference group, we developed a preliminary concept map and conducted a snowballing search based on known relevant papers and websites of key organisations in May 2019. Sixty-five resources were included. A final map of 38 concepts was developed covering the domains of validity, relevance and usability across the design, conduct and reporting of a trial. The concepts drew on literature relating to implementation science, consumer engagement, pragmatic trials, reporting, research waste and other fields. No single resource addressed more than ten of the 38 concepts in the map. The concept map provides trialists with a tool to think through a range of areas in which practical action could enhance the implementability of their trials. 
Future work could validate the strength of the associations between the concepts identified and implementability of trials and investigate the effectiveness of steps to address each concept. ACTA will use this concept map to develop guidance for trialists in Australia. This review did not include health-related outcomes and was therefore not eligible for registration in the PROSPERO register.
Publisher: BMJ
Date: 02-07-2013
DOI: 10.1136/BMJ.F4247
Publisher: AMPCo
Date: 07-2014
DOI: 10.5694/MJA14.00002
Abstract: Shared decision making enables a clinician and patient to participate jointly in making a health decision, having discussed the options and their benefits and harms, and having considered the patient's values, preferences and circumstances. It is not a single step to be added into a consultation, but a process that can be used to guide decisions about screening, investigations and treatments. The benefits of shared decision making include enabling evidence and patients' preferences to be incorporated into a consultation; improving patient knowledge, risk perception accuracy and patient-clinician communication; and reducing decisional conflict, feeling uninformed and inappropriate use of tests and treatments. Various approaches can be used to guide clinicians through the process. We elaborate on five simple questions that can be used: What will happen if the patient waits and watches? What are the test or treatment options? What are the benefits and harms of each option? How do the benefits and harms weigh up for the patient? Does the patient have enough information to make a choice? Although shared decision making can occur without tools, various types of decision support tools now exist to facilitate it. Misconceptions about shared decision making are hindering its implementation. We address the barriers, as perceived by clinicians. Despite numerous international initiatives to advance shared decision making, very little has occurred in Australia. Consequently, we are lagging behind many other countries and should act urgently.
Publisher: Public Library of Science (PLoS)
Date: 21-09-2010
Publisher: Elsevier BV
Date: 2015
Publisher: Elsevier BV
Date: 12-2014
DOI: 10.1016/J.JCLINEPI.2014.06.011
Abstract: Systematic reviews (SRs) are the cornerstone of evidence-based medicine. In this study, we evaluated the effectiveness of using two computer screens on the efficiency of conducting SRs. A cohort of reviewers before and after using dual monitors were compared with a control group that did not use dual monitors. The outcomes were time spent for abstract screening, full-text screening and data extraction, and inter-rater agreement. We adopted multivariate difference-in-differences linear regression models. A total of 60 SRs conducted by 54 reviewers were included in this analysis. We found a significant reduction of 23.81 minutes per article in data extraction in the intervention group relative to the control group (95% confidence interval: -46.03, -1.58, P = 0.04), which was a 36.85% reduction in time. There was no significant difference in time spent on abstract screening, full-text screening, or inter-rater agreement between the two groups. Using dual monitors when conducting SRs is associated with significant reduction of time spent on data extraction. No significant difference was observed on time spent on abstract screening or full-text screening. Using dual monitors is one strategy that may improve the efficiency of conducting SRs.
Publisher: American College of Physicians
Date: 11-2001
Publisher: Public Library of Science (PLoS)
Date: 09-04-2013
Publisher: Elsevier BV
Date: 11-2011
Publisher: Elsevier BV
Date: 09-2007
Publisher: American Medical Association (AMA)
Date: 06-09-2022
Publisher: JMIR Publications Inc.
Date: 12-12-2019
Abstract: The ubiquity of smartphones and health apps makes them a potential self-management tool for patients that could be prescribed by medical professionals. However, little is known about how Australian general practitioners and their patients view the possibility of prescribing mobile health (mHealth) apps as a nondrug intervention. This study aimed to determine barriers and facilitators to prescribing mHealth apps in Australian general practice from the perspective of general practitioners and their patients. We conducted semistructured interviews in Australian general practice settings with purposively sampled general practitioners and patients. The audio-recorded interviews were transcribed, coded, and thematically analyzed by two researchers. Interview participants included 20 general practitioners and 15 adult patients. General practitioners’ perceived barriers to prescribing apps included a generational difference in the digital propensity of providers and patients; lack of knowledge of prescribable apps and trustworthy sources to access them; the time commitment required of providers and patients to learn and use the apps; and concerns about privacy, safety, and trustworthiness of health apps. General practitioners perceived trustworthy sources for accessing prescribable apps and information, the younger generation, and widespread smartphone ownership as facilitators. For patients, the main barriers were older age and usability of mHealth apps. Patients were not concerned about privacy and data safety issues regarding health app use. Facilitators for patients included the ubiquity of smartphones and apps, especially for the younger generation, and recommendation of apps by doctors. We identified evidence of effectiveness as an independent theme from both the provider and patient perspectives. Health app prescription appears to be feasible in general practice. 
The barriers and facilitators identified by the providers and patients overlapped, though privacy was of less concern to patients. The involvement of health professionals and patients is vital for the successful integration of effective, evidence-based mHealth apps with clinical practice.
Publisher: Springer Science and Business Media LLC
Date: 12-2013
Publisher: Wiley
Date: 04-2014
Publisher: AMPCo
Date: 23-10-2017
DOI: 10.5694/MJA17.00574
Abstract: In Australia, the antibiotic resistance crisis may be partly alleviated by reducing antibiotic use in general practice, which has relatively high prescribing rates - antibiotics are mostly prescribed for acute respiratory infections, for which they provide only minor benefits. Current surveillance is inadequate for monitoring community antibiotic resistance rates, prescribing rates by indication, and serious complications of acute respiratory infections (which antibiotic use earlier in the infection may have averted), making target setting difficult. Categories of interventions that may support general practitioners to reduce prescribing antibiotics are: regulatory (eg, changing the default to "no repeats" in electronic prescribing, changing the packaging of antibiotics to facilitate tailored amounts of antibiotics for the right indication, and restricting access to prescribing selected antibiotics to conserve them); externally administered (eg, academic detailing and audit and feedback on total antibiotic use for individual GPs); interventions that GPs can individually implement (eg, delayed prescribing, shared decision making, public declarations in the practice about conserving antibiotics, and self-administered audit); supporting GPs' access to near-patient diagnostic testing; and public awareness campaigns. Many unanswered clinical research questions remain, including research into optimal implementation methods. Reducing antibiotic use in Australian general practice will require a range of approaches (with various intervention categories), a sustained effort over many years and a commitment of appropriate resources and support.
Publisher: Springer Science and Business Media LLC
Date: 04-2023
DOI: 10.1038/S41591-023-02268-W
Abstract: Perivascular space (PVS) burden is an emerging, poorly understood, magnetic resonance imaging marker of cerebral small vessel disease, a leading cause of stroke and dementia. Genome-wide association studies in up to 40,095 participants (18 population-based cohorts; 66.3 ± 8.6 yr; 96.9% European ancestry) revealed 24 genome-wide significant PVS risk loci, mainly in the white matter. These were associated with white matter PVS already in young adults (N = 1,748; 22.1 ± 2.3 yr) and were enriched in early-onset leukodystrophy genes and genes expressed in fetal brain endothelial cells, suggesting early-life mechanisms. In total, 53% of white matter PVS risk loci showed nominally significant associations (27% after multiple-testing correction) in a Japanese population-based cohort (N = 2,862; 68.3 ± 5.3 yr). Mendelian randomization supported causal associations of high blood pressure with basal ganglia and hippocampal PVS, and of basal ganglia PVS and hippocampal PVS with stroke, accounting for blood pressure. Our findings provide insight into the biology of PVS and cerebral small vessel disease, pointing to pathways involving extracellular matrix, membrane transport and developmental processes, and the potential for genetically informed prioritization of drug targets.
Publisher: BMJ
Date: 24-05-1997
DOI: 10.1136/BMJ.314.7093.1526
Abstract: To determine the effect of antibiotic treatment for acute otitis media in children. Systematic search of the medical literature to identify studies that used antibiotics in randomised controlled trials to treat acute otitis media. Studies were examined blind, and the results of those of satisfactory quality of methodology were pooled. Six studies of children aged 7 months to 15 years. Pain, deafness, and other symptoms related to acute otitis media or antibiotic treatment. 60% of placebo treated children were pain free within 24 hours of presentation, and antibiotics did not influence this. However, at 2-7 days after presentation, by which time only 14% of children in control groups still had pain, early use of antibiotics reduced the risk of pain by 41% (95% confidence interval 14% to 60%). Antibiotics reduced contralateral acute otitis media by 43% (9% to 64%). They seemed to have no influence on subsequent attacks of otitis media or deafness at one month, although there was a trend for improvement of deafness at three months. Antibiotics were associated with a near doubling of the risk of vomiting, diarrhoea, or rashes (odds ratio 1.97 (1.19 to 3.25)). Early use of antibiotics provides only modest benefit for acute otitis media: to prevent one child from experiencing pain by 2-7 days after presentation, 17 children must be treated with antibiotics early.
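The number needed to treat quoted at the end of this abstract follows directly from its own figures (14% of control children still in pain at 2-7 days, and a 41% relative risk reduction with early antibiotics); a quick arithmetic check, not part of the original review:

```python
# Checking the abstract's closing figure: with a 14% baseline risk of pain at
# 2-7 days and a 41% relative risk reduction from early antibiotics, the
# number needed to treat (NNT) comes out at 17.
baseline_risk = 0.14            # pain at 2-7 days without early antibiotics
rrr = 0.41                      # relative risk reduction
arr = baseline_risk * rrr       # absolute risk reduction
nnt = 1 / arr                   # children treated to spare one child pain
print(round(arr, 4), round(nnt))  # 0.0574 17
```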
Publisher: Springer Science and Business Media LLC
Date: 23-09-2022
DOI: 10.1007/S40670-022-01637-3
Abstract: The incidence of musculoskeletal disease is increasing in Australia and around the world. However, medical student education does not necessarily reflect current and projected trends in musculoskeletal medicine. The aim of this study was to assess junior doctors’ competency in musculoskeletal medicine using the Freedman and Bernstein Basic Competency Examination in Musculoskeletal Medicine questionnaire. We conducted a cohort study of interns (first year post medical school) across four teaching hospitals in Australia. Interns were asked to take the Freedman and Bernstein examination during organised intern teaching sessions, and results were analysed using the original Freedman and Bernstein marking criteria and validated pass mark. The mean score for the 92 interns was 13.9 out of 25 (55%) with scores ranging from 8 to 20.8 (29–83%). Only 8 of the 92 interns (8.7%) achieved a score of greater than 73%, the pre-specified pass mark. Our study identifies inadequacies in musculoskeletal medical knowledge in Australian interns. Review of undergraduate medical education may be required to reflect current and predicted trends in the prevalence of musculoskeletal disease and adequately prepare junior doctors.
Publisher: American Medical Association (AMA)
Date: 25-09-1991
Publisher: Royal College of General Practitioners
Date: 31-01-2019
Publisher: SAGE Publications
Date: 08-1986
DOI: 10.1177/0272989X8600600305
Abstract: Unmanageably bushy decision trees result when a decision analysis involves several investigations. They can be simplified for riskless tests by deriving the maximum expected utility decision table for the problem as an intermediate step. This table can be logically summarized as Boolean expressions involving the tests. A minimum-cost testing sequence may then be found by manipulation of the Boolean formulas. The relationship between the resulting decision criteria and the receiver operating characteristic is shown.
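The idea this abstract describes can be sketched in a few lines: enumerate every combination of (riskless) test results, pick the action with maximum expected utility for each combination, and read the resulting table off as a Boolean rule. All probabilities and utilities below are illustrative assumptions, not values from the paper:

```python
from itertools import product

# Hedged sketch of a maximum-expected-utility decision table for two riskless
# tests. Numbers are illustrative only; tests are assumed conditionally
# independent given disease status.
p_disease = 0.3                        # prior probability of disease
sens = {"T1": 0.90, "T2": 0.80}        # P(test positive | disease)
spec = {"T1": 0.85, "T2": 0.95}        # P(test negative | no disease)
utility = {("treat", True): 0.9, ("treat", False): 0.6,
           ("no_treat", True): 0.1, ("no_treat", False): 1.0}

def posterior(results):
    """P(disease | pattern of test results)."""
    like_d, like_nd = p_disease, 1 - p_disease
    for t, pos in results.items():
        like_d *= sens[t] if pos else 1 - sens[t]
        like_nd *= (1 - spec[t]) if pos else spec[t]
    return like_d / (like_d + like_nd)

table = {}
for r1, r2 in product([True, False], repeat=2):
    p = posterior({"T1": r1, "T2": r2})
    eu = {a: p * utility[(a, True)] + (1 - p) * utility[(a, False)]
          for a in ("treat", "no_treat")}
    table[(r1, r2)] = max(eu, key=eu.get)

# The table summarizes to a Boolean expression; with these numbers,
# "treat iff T1 positive OR T2 positive".
for pattern, action in sorted(table.items(), reverse=True):
    print(pattern, "->", action)
```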
Publisher: BMJ
Date: 24-01-2018
DOI: 10.1136/EBMED-2017-110829
Abstract: Many claims about the effects of treatments, though well intentioned, are wrong. Indeed, they are sometimes deliberately misleading to serve interests other than the well-being of patients and the public. People need to know how to spot unreliable treatment claims so that they can protect themselves and others from harm. The ability to assess the trustworthiness of treatment claims is often lacking. Acquiring this ability depends on being familiar with, and correctly applying, some key concepts, for example, that 'association is not the same as causation'. The Informed Health Choices (IHC) Project has identified 36 such concepts and shown that people can be taught to use them in decision making. A randomised trial in Uganda, for example, showed that primary school children with poor reading skills could be taught to apply 12 of the IHC Key Concepts. The list of IHC Key Concepts has proven to be effective in providing a framework for developing and evaluating IHC resources to help children to think critically about treatment claims. The list also provides a framework for retrieving, coding and organising other teaching and learning materials for learners of any age. It should help teachers, researchers, clinicians, and patients to structure critical thinking about the trustworthiness of claims about treatment effects.
Publisher: BMJ
Date: 08-03-2019
DOI: 10.1136/BMJ.L808
Publisher: American Roentgen Ray Society
Date: 07-2003
DOI: 10.2214/AJR.181.1.1810051
Abstract: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in a study and to evaluate the generalisability of its results. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, by using evidence whenever available. A prototype of a flow diagram provides information about the method of recruitment of patients, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Publisher: Wiley
Date: 15-01-2014
Abstract: Risk scores and accelerated diagnostic protocols can identify chest pain patients with low risk of major adverse cardiac events who could be discharged early from the ED, saving time and costs. We aimed to derive and validate a chest pain score and accelerated diagnostic protocol (ADP) that could safely increase the proportion of patients suitable for early discharge. Logistic regression identified statistical predictors for major adverse cardiac events in a derivation cohort. Statistical coefficients were converted to whole numbers to create a score. Clinician feedback was used to improve the clinical plausibility and the usability of the final score (Emergency Department Assessment of Chest pain Score [EDACS]). EDACS was combined with electrocardiogram results and troponin results at 0 and 2 h to develop an ADP (EDACS-ADP). The score and EDACS-ADP were validated and tested for reproducibility in separate cohorts of patients. In the derivation (n = 1974) and validation (n = 608) cohorts, the EDACS-ADP classified 42.2% (sensitivity 99.0%, specificity 49.9%) and 51.3% (sensitivity 100.0%, specificity 59.0%) as low risk of major adverse cardiac events, respectively. The intra-class correlation coefficient for categorisation of patients as low risk was 0.87. The EDACS-ADP identified approximately half of the patients presenting to the ED with possible cardiac chest pain as having low risk of short-term major adverse cardiac events, with high sensitivity. This is a significant improvement on similar, previously reported protocols. The EDACS-ADP is reproducible and has the potential to make considerable cost reductions to health systems.
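The coefficient-to-whole-number step this abstract describes can be sketched as follows; the predictor names and coefficient values are hypothetical, not the published EDACS items:

```python
# Hedged sketch of score construction: logistic regression coefficients are
# scaled and rounded to whole numbers so the score can be tallied at the
# bedside. All coefficients below are hypothetical illustrations.
coefs = {
    "age_per_decade": 0.21,    # hypothetical log-odds increment per decade
    "male_sex": 0.65,
    "diaphoresis": 0.48,
    "pain_radiation": 0.55,
}
scale = 1 / min(coefs.values())            # smallest effect maps to 1 point
points = {name: round(beta * scale) for name, beta in coefs.items()}
print(points)
```

A patient's score is then the sum of the points for the predictors they exhibit, trading a little statistical efficiency for bedside usability.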
Publisher: Public Library of Science (PLoS)
Date: 30-01-2019
Publisher: Wiley
Date: 15-02-2005
DOI: 10.1002/0470011815.B2A04026
Abstract: Meta‐analysis can summarize the performance of a diagnostic test based on all available high quality studies, investigate reasons for variation in test performance, and compare the performance of two or more tests. Strategies are outlined for finding all relevant primary studies, and assessing the quality of studies. Study quality, study design factors, and patient characteristics should be assessed and explored as possible sources of heterogeneity in diagnostic performance. Summary ROC analysis (SROC) is the most common meta‐analytic method for diagnostic studies. In addition, linear regression is used to model test accuracy as a function of implicit test threshold and other study-level covariates. The SROC approach shows the trade‐off in sensitivity and specificity as the threshold for a positive test result varies. The more complex hierarchical SROC model takes account of both within- and between-study variability, and can be used to explore variation in both accuracy and threshold.
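A minimal sketch of the classic Moses-Littenberg SROC regression this chapter covers: for each study, D = logit(TPR) - logit(FPR) is the log diagnostic odds ratio and S = logit(TPR) + logit(FPR) proxies the implicit test threshold, and a linear fit D = a + bS traces the summary curve. The (TPR, FPR) pairs below are illustrative, not from any real meta-analysis:

```python
import math

# Illustrative per-study (sensitivity, false positive rate) pairs.
studies = [(0.90, 0.20), (0.85, 0.15), (0.80, 0.10), (0.75, 0.08), (0.95, 0.30)]

def logit(p):
    return math.log(p / (1 - p))

D = [logit(tpr) - logit(fpr) for tpr, fpr in studies]   # log diagnostic odds ratio
S = [logit(tpr) + logit(fpr) for tpr, fpr in studies]   # threshold proxy

# Ordinary least squares fit of D = a + b*S.
n = len(studies)
s_bar, d_bar = sum(S) / n, sum(D) / n
b = (sum((s - s_bar) * (d - d_bar) for s, d in zip(S, D))
     / sum((s - s_bar) ** 2 for s in S))
a = d_bar - b * s_bar

def sroc_tpr(fpr):
    """Expected sensitivity on the fitted SROC curve at a given false positive rate."""
    # From D = a + b*S: logit(TPR) = (a + (1 + b) * logit(FPR)) / (1 - b)
    x = (a + (1 + b) * logit(fpr)) / (1 - b)
    return 1 / (1 + math.exp(-x))

print(round(sroc_tpr(0.10), 3), round(sroc_tpr(0.30), 3))
```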
Publisher: SAGE Publications
Date: 08-1986
DOI: 10.1177/0272989X8600600306
Abstract: When a decision table is used to find a maximum expected utility testing strategy, it is based on a given prior probability distribution of diseases. In the two-disease situation, a threshold analysis over all prior probabilities can be done using threshold transformations of the points of indifference between treatments. This results in a set of prior probability intervals, each with its own unique decision rule. The Boolean expression for the table indicates the acceptable testing strategies. A decision table analysis may then be extended to include invasive or costly investigations. The technique represents a saving in time and effort compared with standard decision tree approaches, especially where investigative recommendations are to be made for a broad range of prior probabilities, e.g., where initial symptoms and signs are considered before the investigations.
Publisher: BMJ
Date: 22-12-2008
DOI: 10.1136/BMJ.A2732
Publisher: Elsevier BV
Date: 04-2011
DOI: 10.1016/J.JCLINEPI.2010.04.026
Abstract: This article is the first of a series providing guidance for use of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system of rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessments (HTAs), and clinical practice guidelines addressing alternative management options. The GRADE process begins with asking an explicit question, including specification of all important outcomes. After the evidence is collected and summarized, GRADE provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect. Recommendations are characterized as strong or weak (alternative terms conditional or discretionary) according to the quality of the supporting evidence and the balance between desirable and undesirable consequences of the alternative management options. GRADE suggests summarizing evidence in succinct, transparent, and informative summary of findings tables that show the quality of evidence and the magnitude of relative and absolute effects for each important outcome and/or as evidence profiles that provide, in addition, detailed information about the reason for the quality of evidence rating. Subsequent articles in this series will address GRADE's approach to formulating questions, assessing quality of evidence, and developing recommendations.
Publisher: Elsevier BV
Date: 2016
Publisher: BMJ
Date: 17-01-2004
Publisher: AMPCo
Date: 06-11-2020
DOI: 10.5694/MJA2.50376
Abstract: To calculate lifetime risks of cancer diagnosis and cancer-specific death, adjusted for competing mortality, and to compare these estimates with the corresponding risks published by the Australian Institute of Health and Welfare (AIHW). Analysis of publicly available annual AIHW data on age-specific cancer incidence and mortality - for breast cancer, colorectal cancer, prostate cancer, melanoma of the skin, and lung cancer - and all-cause mortality in Australia, 1982-2013. Lifetime risks of cancer diagnosis and mortality (to age 85), adjusted for competing mortality. During 1982-2013, AIHW estimates were consistently higher than our competing mortality-adjusted estimates of lifetime risks of diagnosis and death for all five cancers. Differences between AIHW and adjusted estimates declined with time for breast cancer, prostate cancer, colorectal cancer, and lung cancer (for men only), but remained steady for lung cancer (women only) and melanoma of the skin. In 2013, the respective estimated lifetime risks of diagnosis (AIHW and adjusted) were 12.7% and 12.1% for breast cancer, 18.7% and 16.2% for prostate cancer, 9.0% and 7.0% (men) and 6.4% and 5.5% (women) for colorectal cancer, 7.5% and 6.0% (men) and 4.4% and 4.0% (women) for melanoma of the skin, and 7.6% and 5.8% (men) and 4.5% and 3.9% (women) for lung cancer. The method employed in Australia to calculate the lifetime risks of cancer diagnosis and mortality overestimates these risks, especially for men.
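The competing-mortality adjustment this abstract describes can be sketched as follows (a hedged illustration, not the authors' actual method or data): the adjusted lifetime risk weights each age band's cancer incidence by the probability of reaching that age alive and cancer-free, whereas an unadjusted cumulative-rate estimate sums incidence alone. The age-band rates below are purely hypothetical:

```python
# Hypothetical per-person rates for six age bands (not AIHW data).
incidence = [0.0001, 0.0005, 0.002, 0.006, 0.012, 0.018]   # cancer diagnoses
mortality = [0.0020, 0.0040, 0.010, 0.030, 0.080, 0.180]   # all-cause deaths

unadjusted = sum(incidence)            # cumulative-rate style estimate

adjusted, alive_cancer_free = 0.0, 1.0
for inc, mort in zip(incidence, mortality):
    adjusted += alive_cancer_free * inc            # diagnosed in this band
    alive_cancer_free *= 1 - inc - mort            # survive, undiagnosed

print(round(unadjusted, 4), round(adjusted, 4))
```

Because competing mortality removes people from the at-risk pool, the adjusted estimate is always below the unadjusted one, which is the direction of the discrepancy the paper reports.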
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 12-2009
Publisher: Public Library of Science (PLoS)
Date: 09-12-2016
Publisher: BMJ
Date: 08-2021
DOI: 10.1136/BMJOPEN-2020-045572
Abstract: To explore factors that potentially impact external validation performance while developing and validating a prognostic model for hospital admissions (HAs) in complex older general practice patients. Using individual participant data from four cluster-randomised trials conducted in the Netherlands and Germany, we used logistic regression to develop a prognostic model to predict all-cause HAs within a 6-month follow-up period. A stratified intercept was used to account for heterogeneity in baseline risk between the studies. The model was validated both internally and by using internal-external cross-validation (IECV). Prior HAs, physical components of health-related quality of life, a comorbidity index, and medication-related variables were used in the final model. While achieving moderate discriminatory performance, internal bootstrap validation revealed a pronounced risk of overfitting. The results of the IECV, in which calibration was highly variable even after accounting for between-study heterogeneity, agreed with this finding. Heterogeneity was equally reflected in differing baseline risk, predictor effects and absolute risk predictions. Predictor effect heterogeneity and differing baseline risk can explain the limited external performance of HA prediction models. With such drivers known, model adjustments in external validation settings (eg, intercept recalibration, complete updating) can be applied more purposefully. PROSPERO id: CRD42018088129.
Publisher: American College of Physicians
Date: 20-01-2009
Publisher: Public Library of Science (PLoS)
Date: 08-01-2010
Publisher: BMJ
Date: 02-2021
DOI: 10.1136/BMJOPEN-2020-044364
Abstract: To identify, appraise and synthesise studies evaluating the downsides of wearing face masks in any setting. We also discuss potential strategies to mitigate these downsides. Systematic review and meta-analysis. PubMed, Embase, CENTRAL and EuropePMC were searched (inception–18 May 2020), and clinical registries were searched via CENTRAL. We also did a forward–backward citation search of the included studies. We included randomised controlled trials and observational studies comparing face mask use to any active intervention or to control. Two author pairs independently screened articles for inclusion, extracted data and assessed the quality of included studies. The primary outcomes were compliance, discomforts, harms and adverse events of wearing face masks. We screened 5471 articles and included 37 studies (40 references); 11 were meta-analysed. For mask wear adherence, 47% (95% CI 25% to 68%; p<0.0001) more people wore face masks in the face mask group compared with control; adherence was significantly higher (26%, 95% CI 8% to 46%; p<0.01) in the surgical/medical mask group than in the N95/P2 group. The largest number of studies reported on the discomfort and irritation outcome (20 studies); the fewest reported on the misuse of masks, and none reported on mask contamination or risk compensation behaviour. Risk of bias was generally high for blinding of participants and personnel and low for attrition and reporting biases. There are insufficient data to quantify all of the adverse effects that might reduce the acceptability, adherence and effectiveness of face masks. New research on face masks should assess and report the harms and downsides. Urgent research is also needed on methods and designs to mitigate the downsides of face mask wearing, particularly the assessment of possible alternatives. Open Science Framework website osf.io/sa6kf/ (timestamped 20-05-2020).
Publisher: Elsevier BV
Date: 10-1999
DOI: 10.1016/S0895-4356(99)00086-4
Abstract: We present a method to estimate the summary receiver operating characteristic (SROC) curve for combining information on a diagnostic test from several different studies. Unlike previous methods that assume the reference standard to be error free, our approach allows for the possibility of errors in the reference standard, through use of a latent class model. The model provides estimates of the sensitivity and specificity of the diagnostic test and the case prevalence in each study; these parameters can then be used in a meta-analysis, for example, using the regression method proposed by Moses et al., of a measure of test discrimination on a measure of the diagnostic threshold, to fit the SROC. The method is illustrated with an example on Pap smears that shows how adjusting for imperfection in the reference standard typically reduces the scatter of data in the SROC plot, and tends to indicate better performance of the test than otherwise.
Publisher: Elsevier BV
Date: 09-2004
Publisher: American Astronomical Society
Date: 03-2021
Abstract: This paper presents the gravitational-wave measurement of the Hubble constant (H0) using the detections from the first and second observing runs of the Advanced LIGO and Virgo detector network. The presence of the transient electromagnetic counterpart of the binary neutron star GW170817 led to the first standard-siren measurement of H0. Here we additionally use binary black hole detections in conjunction with galaxy catalogs and report a joint measurement. Our updated measurement is H0 = 69 (+16/−8) km s⁻¹ Mpc⁻¹ (68.3% highest density posterior interval with a flat-in-log prior), an improvement by a factor of 1.04 (about 4%) over the GW170817-only value of 69 (+17/−8) km s⁻¹ Mpc⁻¹. A significant additional contribution currently comes from GW170814, a loud and well-localized detection from a part of the sky thoroughly covered by the Dark Energy Survey. With numerous detections anticipated over the upcoming years, an exhaustive understanding of other systematic effects is also going to become increasingly important. These results establish the path to cosmology using gravitational-wave observations with and without transient electromagnetic counterparts.
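The quoted "factor of 1.04 (about 4%)" improvement is consistent with reading the shrinkage of the 68.3% interval width from the abstract's own figures (a reading of the reported numbers, not the paper's own calculation):

```python
# Interval widths implied by the asymmetric credible intervals in the abstract.
old_width = 17 + 8   # GW170817-only: 69 (+17/-8) km/s/Mpc -> width 25
new_width = 16 + 8   # joint measurement: 69 (+16/-8) km/s/Mpc -> width 24
print(round(old_width / new_width, 2))  # 1.04
```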
Publisher: BMJ
Date: 09-09-2004
Publisher: Elsevier BV
Date: 2014
Publisher: BMJ
Date: 30-03-2011
Publisher: Cambridge University Press
Date: 08-11-2001
Abstract: What do we do if different studies appear to give different answers? When applying research to questions for individual patients or for health policy, one of the challenges is interpreting such apparently conflicting research. A systematic review is a method to systematically identify relevant research, appraise its quality, and synthesize the results. The last two decades have seen increasing interest and developments in methods for doing high quality systematic reviews. Part I of this book provides a clear introduction to the concepts of reviewing, and lucidly describes the difficulties and traps to avoid. A unique feature of the book is its description, in Part II, of the different methods needed for different types of health care questions: frequency of disease, prognosis, diagnosis, risk, and management. As well as illustrative examples, there are exercises for each of the sections. This is essential reading for those interested in synthesizing health care research.
Publisher: Oxford University Press (OUP)
Date: 15-10-1994
DOI: 10.1093/OXFORDJOURNALS.AJE.A117323
Abstract: Evaluating a screening test often requires estimation of test sensitivity and specificity with appropriately narrow confidence intervals and at least cost. If the major cost is the reference ("gold") standard, savings arise from reducing the large number of test negatives that are verified by the reference standard. On the basis of the formulae of Begg and Greenes (Biometrics 1983:207-15), the authors determine the optimal sampling strategy for test positives and test negatives to minimize the total sample size that needs to be verified for a given confidence interval width for sensitivity. Unless sensitivity is very high, verifying more test positives and fewer test negatives than would occur with equal sampling fractions is appropriate. For example, if the sensitivity is 0.7 and the specificity is 0.99, the optimal sampling strategy is for 6.2% of those verified to be test positives, compared with 1.7% in the case of equal sampling fractions. At a disease prevalence of 0.01, the 3.3-fold increase in test positives results in a saving of about 15% in the test negatives and 11% in the total verified sample size. Overall, savings are about 50% for a sensitivity of 0.3, but are negligible when sensitivity is greater than 0.8. Optimal sampling strategies for sensitivity do not materially alter confidence intervals for specificity. Figures are presented from which readers can easily obtain the optimal sampling strategy given an estimate of specificity, approximated by the proportion of screenees who are test negative, and the range of likely sensitivity.
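The 1.7% equal-sampling figure can be reproduced from the abstract's own numbers: with equal verification fractions, the share of test positives among those verified equals the share of test positives among all screenees. (The optimal 6.2% figure needs the paper's Begg-and-Greenes-based variance formulae, which are not reproduced here.)

```python
# P(test positive) = prev * sens + (1 - prev) * (1 - spec), using the
# abstract's worked-example values.
prevalence, sensitivity, specificity = 0.01, 0.70, 0.99
p_test_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
print(round(p_test_pos * 100, 1))  # 1.7
```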
Publisher: BMJ
Date: 08-2006
DOI: 10.1136/EBM.11.4.116
Publisher: Radiological Society of North America (RSNA)
Date: 12-2015
DOI: 10.1148/RADIOL.2015151516
Abstract: Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Publisher: Elsevier BV
Date: 12-2010
Publisher: American Astronomical Society
Date: 30-09-2019
Abstract: When formed through dynamical interactions, stellar-mass binary black holes (BBHs) may retain eccentric orbits (e > 0.1 at 10 Hz) detectable by ground-based gravitational-wave detectors. Eccentricity can therefore be used to differentiate dynamically formed binaries from isolated BBH mergers. Current template-based gravitational-wave searches do not use waveform models associated with eccentric orbits, rendering the search less efficient for eccentric binary systems. Here we present the results of a search for BBH mergers that inspiral in eccentric orbits using data from the first and second observing runs (O1 and O2) of Advanced LIGO and Advanced Virgo. We carried out the search with the coherent WaveBurst algorithm, which uses minimal assumptions on the signal morphology and does not rely on binary waveform templates. We show that it is sensitive to binary mergers with a detection range that is weakly dependent on eccentricity for all bound systems. Our search did not identify any new binary merger candidates. We interpret these results in light of eccentric binary formation models. We rule out formation channels with rates ≳100 Gpc⁻³ yr⁻¹ for e > 0.1, assuming a black hole mass spectrum with a power-law index ≲2.
Publisher: BMJ
Date: 07-2006
DOI: 10.1136/EBN.9.3.68
Publisher: Wiley
Date: 08-07-2009
DOI: 10.1111/J.1398-9995.2009.02083.X
Abstract: The GRADE approach to grading the quality of evidence and strength of recommendations provides a comprehensive and transparent approach for developing clinical recommendations about using diagnostic tests or diagnostic strategies. Although grading the quality of evidence and strength of recommendations about using tests shares the logic of grading recommendations for treatment, it presents unique challenges. Guideline panels and clinicians should be alert to these special challenges when using the evidence about the accuracy of tests as the basis for clinical decisions. In the GRADE system, valid diagnostic accuracy studies can provide high quality evidence of test accuracy. However, such studies often provide only low quality evidence for the development of recommendations about diagnostic testing, as test accuracy is a surrogate for patient-important outcomes at best. Inferring from data on accuracy that using a test improves outcomes that are important to patients requires availability of an effective treatment, improved patients' wellbeing through prognostic information, or - by excluding an ominous diagnosis - reduction of anxiety and the opportunity for earlier search for an alternative diagnosis for which beneficial treatment can be available. Assessing the directness of evidence supporting the use of a diagnostic test requires judgments about the relationship between test results and patient-important consequences. Well-designed and conducted studies of allergy tests in parallel with efforts to evaluate allergy treatments critically will encourage improved guideline development for allergic diseases.
Publisher: American Astronomical Society
Date: 11-09-2019
Publisher: Royal College of General Practitioners
Date: 2021
Abstract: Antibiotic overprescribing is a major concern that contributes to the problem of antibiotic resistance. To assess the effect on antibiotic prescribing in primary care of telehealth (TH) consultations compared with face-to-face (F2F). Systematic review and meta-analysis of adult or paediatric patients with a history of a community-acquired acute infection (respiratory, urinary, or skin and soft tissue). Studies were included that compared synchronous TH consultations (phone or video-based) with F2F consultations in primary care. PubMed, Embase, Cochrane CENTRAL (inception–2021), clinical trial registries and citing–cited references of included studies were searched. Two review authors independently screened the studies and extracted the data. Thirteen studies were identified. The one small randomised controlled trial (RCT) found a non-significant 25% relative increase in antibiotic prescribing in the TH group. The remaining 10 were observational studies that did not control well for confounding and were therefore at high risk of bias. When pooled by specific infections, there was no consistent pattern. The six studies of sinusitis (including one before–after study) showed significantly less prescribing for acute rhinosinusitis in TH consultations, whereas the two studies of acute otitis media showed a significant increase. Pharyngitis, conjunctivitis, and urinary tract infections showed non-significant higher prescribing in the TH group. Bronchitis showed no change in prescribing. The impact of TH on prescribing appears to vary between conditions, with more increases than reductions. There is insufficient evidence to draw strong conclusions, however, and higher-quality research is urgently needed.
Publisher: American Medical Association (AMA)
Date: 06-2018
Publisher: AMPCo
Date: 02-2002
Publisher: BMJ
Date: 10-01-2013
DOI: 10.1136/BMJ.F139
Publisher: Springer Science and Business Media LLC
Date: 23-09-2013
Publisher: Springer Science and Business Media LLC
Date: 14-01-2015
Publisher: Bioscientifica
Date: 2018
DOI: 10.1530/ERC-17-0397
Abstract: The incidence of differentiated thyroid cancer (DTC) has rapidly increased worldwide over the last decades. It is unknown if the increase in diagnosis has been mirrored by an increase in thyroidectomy rates with the concomitant economic impact that this would have on the health care system. DTC and thyroidectomy incidence as well as DTC-specific mortality were modeled using Poisson regression in New South Wales (NSW), Australia per year and by sex. The incidence of 2002 was the point from which the increase in rates was assessed cumulatively over the subsequent decade. The economic burden of potentially avoidable thyroidectomies due to the increase in diagnosis was estimated as the product of the additional thyroidectomy procedures during a decade attributable to rates beyond those reported for 2002 and the national average hospital cost of an uncomplicated thyroidectomy in Australia. The incidence of both DTC and thyroidectomy doubled in NSW between 2003 and 2012, while the DTC-specific mortality rate remained unchanged over the same period. Based on the 2002 incidence, the projected increase over 10 years (2003–2012) in thyroidectomy procedures was 2196. This translates to an extra cost burden of over AUD$18,600,000 in surgery-related health care expenditure over one decade in NSW. Our findings suggest that, if this rise is solely attributable to overdetection, then the rising expenditure serves no additional purpose. Reducing unnecessary detection and a conservative approach to managing DTC are sensible and would lead to millions of dollars in savings and reduced harms to patients.
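The cost estimate in this abstract is a simple product of excess procedures and unit cost. A minimal back-of-envelope sketch of the arithmetic, in which the per-procedure cost is back-calculated from the reported totals (an assumption; the authors used national average hospital cost data):

```python
# Back-of-envelope check of the abstract's cost-burden figures.
# The implied per-procedure cost is inferred from the reported totals
# (an assumption), not taken from the national cost data the authors used.
extra_procedures = 2196        # thyroidectomies above the 2002 baseline, 2003-2012
total_burden_aud = 18_600_000  # reported surgery-related expenditure in NSW

implied_unit_cost = total_burden_aud / extra_procedures
print(f"Implied cost per thyroidectomy: AUD${implied_unit_cost:,.0f}")  # ~AUD$8,470
```

The implied unit cost of roughly AUD$8,500 per uncomplicated thyroidectomy is consistent with the two reported totals but is not itself stated in the abstract.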
Publisher: BMJ
Date: 11-01-2021
DOI: 10.1136/MEDETHICS-2020-106785
Abstract: We conducted a survey to identify what types of health/medical research could be exempt from research ethics reviews in Australia. We surveyed Australian health/medical researchers and Human Research Ethics Committee (HREC) members. The survey asked whether respondents had previously changed or abandoned a project anticipating difficulties obtaining ethics approval, and presented eight research scenarios, asking whether these scenarios should or should not be exempt from ethics review, and to provide (optional) comments. Qualitative data were analysed thematically; quantitative data in R. We received 514 responses. Forty-three per cent of respondents to whom the question applied reported changing projects in anticipation of obstacles from the ethics review process; 25% reported abandoning projects for this reason. Research scenarios asking professional staff to provide views in their area of expertise were most commonly exempted from ethics review (to prioritise systematic review topics 84%; on software strengths/weaknesses 85%); scenarios involving surplus samples (82%) and N-of-1 (single case) studies (76%) were most commonly required to undergo ethics review. HREC members were 26% more likely than researchers to require ethics review. Need for independent oversight, and low risk, were most frequently cited in support of decisions to require or exempt from ethics review, respectively. Considerable differences exist between researchers and HREC members about when to exempt from review the research that ultimately serves the interests of patients and the public. It is widely accepted that evaluative research should be used to reduce clinical uncertainties—the same principle should apply to ethics reviews.
Publisher: SAGE Publications
Date: 13-10-2023
Publisher: Center for Open Science
Date: 23-12-2020
Abstract: Doctors are placed under significant pressure to engage in research for career progression. Our review suggests that research selection criteria for specialty training programs incentivise high-volume, CV-padding research, focusing on quantity and authorship position over research quality. These selection criteria may be unintended drivers of research waste.
Publisher: BMJ
Date: 08-2006
DOI: 10.1136/EBM.11.4.101
Publisher: Elsevier BV
Date: 12-2017
DOI: 10.1016/J.JCLINEPI.2017.09.005
Abstract: The objective of the study was to identify the critical factors that determine recommendations and other decisions about healthcare-related tests and diagnostic strategies (HCTDS). We used a qualitative descriptive approach and conducted semi-structured in-depth interviews with 24 international experts (informants) in evidence and decisions about HCTDS. Although test accuracy (TA) was the factor most commonly considered by organizations when developing recommendations about HCTDS, informants agreed that TA is necessary but rarely, if ever, sufficient and may be misleading when solely considered. The informants identified factors that are important for developing recommendations about HCTDS. Informants largely agreed that laying out the potential care pathways based on the test result is an essential early step but is rarely done in developing recommendations about HCTDS. Most informants also agreed that decision analysis could be useful for organizing the clinical, cost, and preference data relevant to the use of tests in the absence of direct evidence. However, they noted that using models is limited by the lack of resources and expertise required. Developing guidelines about HCTDS requires consideration of factors beyond TA, but implementing this may be challenging. Further development and testing of "frameworks" that can guide this process is a priority for decision makers.
Publisher: American College of Physicians
Date: 03-2006
Publisher: BMJ
Date: 06-2019
DOI: 10.1136/BMJOPEN-2018-028150
Abstract: To conduct a systematic review investigating the normal age-related changes in lung function in adults without known lung disease. Systematic review. MEDLINE, Embase and Cumulative Index to Nursing and Allied Health Literature (CINAHL) were searched for eligible studies from inception to February 12, 2019, supplemented by manual searches of reference lists and clinical trial registries. We planned to include prospective cohort studies and randomised controlled trials (control arms) that measured changes in lung function over time in asymptomatic adults without known respiratory disease. Two authors independently determined the eligibility of studies, extracted data and assessed the risk of bias of included studies using the modified Newcastle–Ottawa Scale. From 4385 records screened, we identified 16 cohort studies with 31 099 participants. All included studies demonstrated decline in lung function—forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEFR)—with age. In studies with longer follow-up, rates of FEV1 decline ranged from 17.7 to 46.4 mL/year (median 22.4 mL/year). Overall, men had faster absolute rates of decline (median 43.5 mL/year) compared with women (median 30.5 mL/year). Differences in relative FEV1 change, however, were not observed between men and women. FEV1/FVC change was reported in only one study, declining by 0.29% per year. An age-specific analysis suggested the rate of FEV1 decline may accelerate with each decade of age. Lung function—FEV1, FVC and PEFR—declines with age in individuals without known lung disease. The definition of chronic airway disease may need to be reconsidered to allow for normal ageing and ensure that people likely to benefit from interventions are identified rather than healthy people who may be harmed by potential overdiagnosis and overtreatment.
The first step would be to apply age, sex and ethnicity-adjusted FEV1/FVC thresholds to the disease definition of chronic obstructive pulmonary disease. CRD42018087066.
Publisher: Informa UK Limited
Date: 2006
Publisher: Wiley
Date: 04-10-2016
Publisher: Wiley
Date: 22-12-2016
Publisher: BMJ
Date: 07-04-2011
Publisher: BMJ
Date: 02-2023
DOI: 10.1136/BMJMED-2022-000385
Abstract: To determine the effect of covid-19 vaccination, given before and after acute infection with the SARS-CoV-2 virus, or after a diagnosis of long covid, on the rates and symptoms of long covid. Systematic review. PubMed, Embase, and Cochrane covid-19 trials, and Europe PubMed Central (Europe PMC) for preprints, from 1 January 2020 to 3 August 2022. Trials, cohort studies, and case-control studies reporting on patients with long covid and symptoms of long covid, with vaccination before and after infection with the SARS-CoV-2 virus, or after a diagnosis of long covid. Risk of bias was assessed with the ROBINS-I tool. 1645 articles were screened but no randomised controlled trials were found. 16 observational studies from five countries (USA, UK, France, Italy, and the Netherlands) were identified that reported on 614 392 patients. The most common symptoms of long covid that were studied were fatigue, cough, loss of sense of smell, shortness of breath, loss of taste, headache, muscle ache, difficulty sleeping, difficulty concentrating, worry or anxiety, and memory loss or confusion. 12 studies reported data on vaccination before infection with the SARS-CoV-2 virus, and 10 showed a significant reduction in the incidence of long covid: the odds ratio of developing long covid with one dose of vaccine ranged from 0.22 to 1.03; with two doses, odds ratios were 0.25-1; with three doses, 0.16; and with any dose, 0.48-1.01. Five studies reported on vaccination after infection, with odds ratios of 0.38-0.91. The high heterogeneity between studies precluded any meaningful meta-analysis. The studies failed to adjust for potential confounders, such as other protective behaviours and missing data, thus increasing the risk of bias and decreasing the certainty of evidence to low. Current studies suggest that covid-19 vaccines might have protective and therapeutic effects on long covid.
More robust comparative observational studies and trials are needed, however, to clearly determine the effectiveness of vaccines in preventing and treating long covid. Open Science Framework osf.io/e8jdy .
Publisher: American College of Physicians
Date: 20-06-2017
DOI: 10.7326/M17-0046
Publisher: Oxford University Press (OUP)
Date: 24-10-2020
Abstract: Reference intervals are an important aid in medical practice as they provide clinicians a guide as to whether a patient is healthy or diseased. Outlier results in population studies are removed by any of a variety of statistical measures. We have compared several methods of outlier removal and applied them to a large body of analytes from a large population of healthy persons. We used the outlier exclusion criteria of Reed-Dixon and Tukey and calculated reference intervals using nonparametric and Harrell-Davis statistical methods and applied them to a total of 36 different analytes. Nine of 36 analytes had a greater than 20% difference in the upper reference limit, and for some the difference was 100% or more. For some analytes, great importance is attached to the reference interval. We have shown that different statistical methods for outlier removal can cause large changes to reported reference intervals. So that population studies can be readily compared, common statistical methods should be used for outlier removal.
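The comparison in this abstract hinges on two routine computations: an outlier-exclusion rule and a nonparametric (percentile-based) reference interval. A minimal sketch of the Tukey-fences variant on simulated data (the paper also uses the Reed-Dixon criterion and the Harrell-Davis estimator, not shown; the analyte values below are invented for illustration):

```python
import random
import statistics

def tukey_trim(values, k=1.5):
    """Drop points outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def nonparametric_ri(values):
    """Central 95% reference interval (2.5th and 97.5th percentiles)."""
    cuts = statistics.quantiles(values, n=40)  # cut points at 2.5%, 5%, ..., 97.5%
    return cuts[0], cuts[-1]

# Illustration: a healthy-range analyte plus a few gross outliers
random.seed(0)
analyte = [random.gauss(5.0, 0.5) for _ in range(1000)] + [12.0, 15.0, 20.0]
print(nonparametric_ri(analyte))              # interval with outliers left in
print(nonparametric_ri(tukey_trim(analyte)))  # interval after Tukey exclusion
```

As the paper's comparison of 36 analytes shows, the choice of exclusion rule can shift the upper reference limit substantially, which is why the authors argue for a common method across population studies.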
Publisher: Elsevier BV
Date: 05-2020
Publisher: American Medical Association (AMA)
Date: 25-09-1991
Publisher: Springer Science and Business Media LLC
Date: 25-05-2023
Publisher: Springer Science and Business Media LLC
Date: 28-09-2020
DOI: 10.1007/S41114-020-00026-9
Abstract: We present our current best estimate of the plausible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next several years, with the intention of providing information to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals for the third (O3), fourth (O4) and fifth observing (O5) runs, including the planned upgrades of the Advanced LIGO and Advanced Virgo detectors. We study the capability of the network to determine the sky location of the source for gravitational-wave signals from the inspiral of binary systems of compact objects, that is binary neutron star, neutron star–black hole, and binary black hole systems. The ability to localize the sources is given as a sky-area probability, luminosity distance, and comoving volume. The median sky localization area (90% credible region) is expected to be a few hundreds of square degrees for all types of binary systems during O3 with the Advanced LIGO and Virgo (HLV) network. The median sky localization area will improve to a few tens of square degrees during O4 with the Advanced LIGO, Virgo, and KAGRA (HLVK) network. During O3, the median localization volume (90% credible region) is expected to be on the order of $10^{5}$, $10^{6}$, and $10^{7}\,\mathrm{Mpc}^3$ for binary neutron star, neutron star–black hole, and binary black hole systems, respectively. The localization volume in O4 is expected to be about a factor two smaller than in O3.
We predict a detection count of $1^{+12}_{-1}$ ($10^{+52}_{-10}$) for binary neutron star mergers, of $0^{+19}_{-0}$ ($1^{+91}_{-1}$) for neutron star–black hole mergers, and $17^{+22}_{-11}$ ($79^{+89}_{-44}$) for binary black hole mergers in a one-calendar-year observing run of the HLV network during O3 (HLVK network during O4). We evaluate sensitivity and localization expectations for unmodeled signal searches, including the search for intermediate mass black hole binary mergers.
Publisher: BMJ
Date: 12-2021
DOI: 10.1136/BMJOPEN-2021-053377
Abstract: To investigate differences between target and actual sample sizes, and what study characteristics were associated with sample sizes. Observational study. The large trial registries of clinicaltrials.gov (starting in 1999) and ANZCTR (starting in 2005) through to 2021. Over 280 000 interventional studies excluding studies that were withheld, terminated for safety reasons or were expanded access. The actual and target sample sizes, and the within-study ratio of the actual to target sample size. Most studies were small: the median actual sample sizes in the two databases were 60 and 52. There was a decrease over time in the target sample size of 9%–10% per 5 years, and a larger decrease of 18%–21% per 5 years for the actual sample size. The actual-to-target sample size ratio was 4.1% lower per 5 years, meaning more studies (on average) failed to hit their target sample size. Registered studies are more often under-recruited than over-recruited, and worryingly both target and actual sample sizes appear to have decreased over time, as has the within-study ratio of actual to target sample size. Declining sample sizes and ongoing concerns about underpowered studies mean more research is needed into barriers and facilitators for improving recruitment and accessing data.
Publisher: BMJ
Date: 09-01-2013
DOI: 10.1136/BMJ.F105
Publisher: BMJ
Date: 15-06-2012
DOI: 10.1136/BMJ.E3863
Publisher: BMJ
Date: 17-03-2005
Publisher: Royal College of General Practitioners
Date: 09-2011
Publisher: National Institute for Health and Care Research
Date: 02-2014
DOI: 10.3310/HTA18140
Publisher: Springer Science and Business Media LLC
Date: 22-09-2020
Publisher: Wiley
Date: 03-02-2019
DOI: 10.1111/HEX.12871
Publisher: IOP Publishing
Date: 12-04-2017
Publisher: Springer Science and Business Media LLC
Date: 23-04-2014
Publisher: BMJ
Date: 10-10-2013
DOI: 10.1136/BMJ.F5806
Publisher: Wiley
Date: 17-01-2011
DOI: 10.1111/J.1398-9995.2010.02530.X
Abstract: This is the third and last article in the series about the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to grading the quality of evidence and the strength of recommendations in clinical practice guidelines and its application in the field of allergy. We describe the factors that influence the strength of recommendations about the use of diagnostic, preventive and therapeutic interventions: the balance of desirable and undesirable consequences, the quality of a body of evidence related to a decision, patients' values and preferences, and considerations of resource use. We provide examples from two recently developed guidelines in the field of allergy that applied the GRADE approach. The main advantages of this approach are the focus on patient-important outcomes, explicit consideration of patients' values and preferences, the systematic approach to collecting the evidence, the clear separation of the concepts of quality of evidence and strength of recommendations, and transparent reporting of the decision process. The focus on transparency facilitates understanding and implementation and should empower patients, clinicians and other health care professionals to make informed choices.
Publisher: BMJ
Date: 30-08-2020
DOI: 10.1136/BMJEBM-2019-111220
Abstract: Shared decision-making (SDM) has emerged as a key skill to assist clinicians in applying evidence-based practice (EBP). We aimed to develop and pilot a new approach to teaching EBP, which focuses on teaching knowledge and skills about SDM and pre-appraised evidence. We designed a half-day workshop, informed by an international consensus on EBP core competencies, and invited practicing clinicians to participate. Skills in SDM and communicating evidence were assessed by audio-recording consultations between clinicians and standardised patients (immediately pre-workshop and post-workshop). These were rated by two independent assessors using the OPTION (Observing Patient Involvement, 0 to 100 points) and ACEPP (Assessing Communication about Evidence and Patient Preferences, 0 to 5 points) tools. Participants also completed a feedback questionnaire (nine Likert-scale and four open-ended questions). Fourteen clinicians participated. Skills in SDM and communicating research evidence improved from pre-workshop to post-workshop (mean increase in OPTION score=5.5, 95% CI 1.0 to 9.9; increase in ACEPP score=0.5, 95% CI 0.02 to 1.06). Participant feedback was positive, with most indicating ‘agree’ or ‘strongly agree’ to the questions. A contemporary approach to teaching clinicians EBP, with a focus on SDM and pre-appraised evidence, was feasible, perceived as useful, and showed modest improvements in skills. Results should be interpreted cautiously because of the small study size and pre-post design.
Publisher: Elsevier BV
Date: 10-1999
DOI: 10.1016/S0140-6736(98)10063-6
Abstract: Bed rest is not only used in the management of patients who are not able to mobilise, but is also prescribed as a treatment for a large number of medical conditions, a procedure that has been challenged. We searched the literature for evidence of benefit or harm of bed rest for any condition. We systematically searched MEDLINE and the Cochrane library, and retrieved reports on randomised controlled trials of bed rest versus early mobilisation for any medical condition, including medical procedures. 39 trials of bed rest for 15 different conditions (total patients 5777) were found. In 24 trials investigating bed rest following a medical procedure, no outcomes improved significantly and eight worsened significantly in some procedures (lumbar puncture, spinal anaesthesia, radiculography, and cardiac catheterisation). In 15 trials investigating bed rest as a primary treatment, no outcomes improved significantly and nine worsened significantly for some conditions (acute low back pain, labour, proteinuric hypertension during pregnancy, myocardial infarction, and acute infectious hepatitis). We should not assume any efficacy for bed rest. Further studies need to be done to establish evidence for the benefit or harm of bed rest as a treatment.
Publisher: Elsevier BV
Date: 04-2002
Publisher: BMJ
Date: 10-09-2013
DOI: 10.1136/BMJ.F3755
Publisher: Elsevier BV
Date: 07-2017
Publisher: IOP Publishing
Date: 16-01-2020
Abstract: GW170817 is the very first observation of gravitational waves originating from the coalescence of two compact objects in the mass range of neutron stars, accompanied by electromagnetic counterparts, and offers an opportunity to directly probe the internal structure of neutron stars. We perform Bayesian model selection on a wide range of theoretical predictions for the neutron star equation of state. For the binary neutron star hypothesis, we find that we cannot rule out the majority of theoretical models considered. In addition, the gravitational-wave data alone does not rule out the possibility that one or both objects were low-mass black holes. We discuss the possible outcomes in the case of a binary neutron star merger, finding that all scenarios from prompt collapse to long-lived or even stable remnants are possible. For long-lived remnants, we place an upper limit of 1.9 kHz on the rotation rate. If a black hole was formed any time after merger and the coalescing stars were slowly rotating, then the maximum baryonic mass of non-rotating neutron stars is at most , and three equations of state considered here can be ruled out. We obtain a tighter limit of for the case that the merger results in a hypermassive neutron star.
Publisher: SAGE Publications
Date: 02-1992
DOI: 10.1177/0272989X9201200107
Abstract: Among those decisions that may be made by a patient in response to an illness, the authors single out a certain class: contingent investment decisions. They are characterized by the patient's committing him- or herself, on the basis of prognostic counseling, to a certain action or non-action that he or she may regret in retrospect. Examples show that, when assessing utilities, the decision analyst runs a risk of handling such investment decisions incorrectly, unless they are made explicit and incorporated into the medical decision process. The anomaly is explained as a violation of the structural rules for decision trees and is also interpreted in terms of "the price of prognostic ignorance," a quantity closely related to the expected utility value of perfect information. Key words: decision theory; physician-patient relations; patient compliance; prognosis; risk-taking; quality of life; utility theory; patients' decisions; decision trees. (Med Decis Making 1992:39-43)
Publisher: Elsevier BV
Date: 03-2014
Publisher: American Astronomical Society
Date: 04-09-2019
Publisher: Elsevier BV
Date: 09-2011
DOI: 10.1016/J.YPMED.2011.05.011
Abstract: The history of breast cancer screening is littered with controversy. With 10 trials spanning 4 decades, we have a substantial body of evidence, but with different aims and flaws. Combined analysis of the intention-to-treat results gives an overall relative reduction in breast cancer mortality of 19% (95% CI 12%-26%), which, if adjusted for non-attendance, gives an approximate 25% relative reduction for those who attend screening. However, given that 4% of all-cause mortality is due to breast cancer deaths, this translates into a less than 1% reduction in all-cause mortality. An emerging issue in interpretation is the improvements in treatment since these trials recruited women. Modern systemic therapy would have improved survival (models suggest between 12% and 21%) in both screened and non-screened groups, which would result in a lesser difference in absolute risk reduction from screening but probably a similar, or slightly smaller, relative risk reduction. However, benefits and harms, particularly over-diagnosis, need to be balanced, and these differ by age group. The informed views of recipients of screening are needed to guide current and future policy on screening.
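The abstract's step from a 19% disease-specific reduction to a less-than-1% all-cause reduction is a single multiplication, since breast cancer accounts for about 4% of all deaths. A minimal sketch of that arithmetic (an approximation that ignores competing risks and overlap between causes of death):

```python
# Converting a relative disease-specific mortality reduction into an
# approximate all-cause mortality reduction (ignores competing risks).
relative_reduction_bc = 0.19  # breast cancer mortality, intention-to-treat
bc_share_of_deaths = 0.04     # breast cancer's share of all-cause mortality

all_cause_reduction = relative_reduction_bc * bc_share_of_deaths
print(f"{all_cause_reduction:.2%}")  # 0.76%, i.e. well under 1%
```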
Publisher: Wiley
Date: 12-02-2010
DOI: 10.1111/J.1753-6405.1995.TB00292.X
Abstract: Australian guidelines for colorectal cancer screening for average-risk populations vary from recommendations for annual screening by faecal occult blood testing for those over 40 years to recommendations that screening may be appropriate if requested by an informed patient aged 50 to 75 years. There are five large screening trials, of which three have published mortality data. A meta-analysis of the mortality data suggests a 19 per cent reduction in colorectal cancer mortality (95 per cent confidence interval 0.68 to 0.96) with Hemoccult screening. Because of the width of the confidence interval, decisions about the magnitude of the effect of screening should await further trial results, which should be available in the next few years. In the interim, we should examine issues of harm and costs in Australia. For example, in the major trials, over 80 per cent of positive results have been falsely positive and have required invasive investigation. Estimates of the cost-effectiveness of screening for the Australian health system are not yet available and are essential. If the benefits of screening outweigh the harms and costs, a successful screening program would require provision of screening infrastructure and appropriate information to target populations, quality control for screening tests and investigations, recall mechanisms to ensure appropriate follow-up of persons with positive results and the active participation of the Australian public and health practitioners.
Publisher: American College of Physicians
Date: 07-01-2003
DOI: 10.7326/0003-4819-138-1-200301070-00012-W1
Abstract: The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalizability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist. The document contains a clarification of the meaning, rationale, and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart, and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in health care.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 09-2015
Publisher: Springer Science and Business Media LLC
Date: 09-07-2014
Publisher: BMJ
Date: 09-1999
Abstract: To investigate the psychometric properties of a cardiovascular extension of an existing utility-based quality of life questionnaire (Health Measurement Questionnaire). The new instrument has been named the Utility Based Quality of life--Heart questionnaire, or UBQ-H. We explored the test-retest reliability, construct validity, and responsiveness of the UBQ-H. A sample of 322 patients attending cardiac outpatient clinics were recruited from two large metropolitan teaching hospitals. A second sample of 1112 patients taking part in the LIPID trial was also used to investigate the validity and responsiveness of the UBQ-H. Ninety per cent of all UBQ-H questionnaires were returned, and item completion rates were high (median of less than 1% missing or N/A answers). Cronbach's alpha measure of internal consistency for the scales ranged from 0.79 to 0.91, and each item was more strongly correlated with its hypothesised domain than with alternative domains. The intra-class test-retest reliability of the UBQ-H scales ranged from 0.65 to 0.81 for patients with stable health. Results supported the construct validity of the UBQ-H. The UBQ-H was significantly correlated with other information on quality of life (for example, the General Health Questionnaire) as anticipated. The instrument was able to distinguish between contrasted groups of patients (for example, with versus without symptoms of dyspnoea, prior myocardial infarction versus none, etc), and was responsive to changes in health associated with adverse events requiring hospitalisation. The modifications made to the Health Measurement Questionnaire have resulted in an assessment designed for cardiovascular patients that has proved to be both reliable and valid.
Publisher: Elsevier BV
Date: 12-2011
DOI: 10.1016/J.JCLINEPI.2011.06.004
Abstract: The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up quality of evidence one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and rating up two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition, and indirect evidence.
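The two-fold/five-fold thresholds described in this abstract can be written as a simple rule. A rough illustration only: the function name and the inversion of protective effects are my assumptions, and real GRADE judgments also weigh methodological rigour, dose-response gradients and the direction of plausible confounding, which no threshold rule captures:

```python
def grade_rating_up_levels(relative_risk):
    """Levels by which to consider rating up evidence quality for a large
    effect, per the thresholds in the abstract (>=2-fold: one level;
    >=5-fold: two levels). Protective effects (RR < 1) are inverted
    before comparison — an illustrative assumption, not GRADE wording.
    """
    magnitude = relative_risk if relative_risk >= 1 else 1 / relative_risk
    if magnitude >= 5:
        return 2  # very large effect: consider rating up two levels
    if magnitude >= 2:
        return 1  # large effect: consider rating up one level
    return 0      # no rating up on effect size alone
```

For instance, a rigorous observational study showing RR = 0.15 (a 6.7-fold reduction) would meet the two-level threshold, while RR = 1.5 would meet neither.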
Publisher: Springer Science and Business Media LLC
Date: 10-03-2015
Publisher: Public Library of Science (PLoS)
Date: 13-01-2017
Publisher: BMJ
Date: 28-10-2015
DOI: 10.1136/BMJ.H5527
Publisher: Massachusetts Medical Society
Date: 03-08-2000
Publisher: BMJ
Date: 09-2018
DOI: 10.1136/BMJOPEN-2017-020584
Abstract: To assess evidence for ‘legacy’ (post-trial) effects on cardiovascular disease (CVD) mortality and all-cause mortality among adult participants of placebo-controlled randomised controlled trials (RCTs) of statins. Meta-analysis of aggregate data. Placebo-controlled statin RCTs for primary and secondary CVD prevention. Data sources: PubMed, Embase from inception and forward citations of Cholesterol Treatment Trialists’ Collaborators RCTs to 16 June 2016. Study selection: Two independent reviewers identified all statin RCT follow-up reports including ≥1000 participants, and cardiovascular and all-cause mortality. Data extraction and synthesis: Two independent reviewers extracted data in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Main outcomes: Post-trial CVD and all-cause mortality. We included eight trials, with mean post-trial follow-up ranging from 1.6 to 15.1 years, and including 13 781 post-trial deaths (6685 CVD). Direct effects of statins within trials were greater than legacy effects post-trial. The pooled data from all eight studies showed no evidence overall of legacy effects on CVD mortality, but some evidence of legacy effects on all-cause mortality (p=0.01). Exploratory subgroup analysis found possible differences in legacy effect for primary prevention trials compared with secondary prevention trials for both CVD mortality (p=0.15) and all-cause mortality (p=0.02). Pooled post-trial HR for the three primary prevention studies demonstrated possible post-trial legacy effects on CVD mortality (HR=0.87, 95% CI 0.79 to 0.95) and on all-cause mortality (HR=0.90, 95% CI 0.85 to 0.96). Possible post-trial statin legacy effects on all-cause mortality appear to be driven by the primary prevention studies. Although these relative benefits were smaller than those observed within the trial, the absolute benefits may be similar for the two time periods.
Analysis of individual patient data from follow-up studies after placebo-controlled statin RCTs in lower-risk populations may provide more definitive evidence on whether early treatment of subclinical atherosclerosis is likely to be beneficial.
Publisher: American Physical Society (APS)
Date: 04-09-2019
Publisher: Informa UK Limited
Date: 2009
DOI: 10.1080/01421590802572791
Abstract: It is recognized that clinicians need training in evidence-based medicine (EBM); however, there is considerable variation in the content and methods of the EBM curriculum in UK medical schools. To determine current practice and variation in EBM undergraduate teaching in UK medical schools and inform the strategy of medical schools and the National Knowledge Service. We contacted all 32 medical schools in the UK and requested that the person primarily responsible for EBM undergraduate teaching complete a short online survey and provide their EBM curriculum. The survey was completed by representatives from 20 (63%) medical schools, and curriculum details were received from 5 (16%). There is considerable variation in the methods and content of the EBM curriculum. Although the majority of schools teach core EBM topics, relatively few allow students to practice the skills or assess such skills. EBM teaching is restricted by lack of curriculum time, trained tutors and teaching materials. Key elements to progress include the integration of EBM with clinical specialties, tutor training and the availability of high-quality teaching resources. The development of a national undergraduate EBM curriculum may help in promoting progress in EBM teaching and assessment in UK medical schools.
Publisher: Wiley
Date: 12-2009
Publisher: Center for Open Science
Date: 12-10-2019
Abstract: Objectives: Patients do better in research-intense environments. The importance of research is reflected in the accreditation requirements of Australian clinical specialist colleges. The nature of college-mandated research training has not been systematically explored. We examined the intended research curricula of Australian trainee doctors described by specialist colleges, their constructive alignment, and the nature of scholarly project requirements. Design: We undertook content analysis of publicly available documents to characterise college research training curricula. Setting: We reviewed all publicly accessible information from the websites of Australian specialist colleges and their subspecialty divisions. We retrieved curricula, handbooks, and assessment-related documents. Participants: Fifty-eight Australian specialist colleges and their subspecialty divisions. Primary and secondary outcome measures: Two reviewers extracted and coded research-related activities as learning outcomes, activities, or assessments, by research stage (using, participating in or leading research) and competency based on Bloom’s Taxonomy (remembering, understanding, applying, analysing, evaluating, creating). We coded learning and assessment activities by type (e.g. formal research training, publication) and whether they were linked to a scholarly project. Requirements related to project supervisors’ research experience were noted. Results: Fifty-five of 58 Australian college subspecialty divisions had a scholarly project requirement. Only 11 required formal research training; two required an experienced research supervisor. Colleges emphasised a role for trainees in leading research in their learning outcomes and assessments, but not in learning activities. Less emphasis was placed on using research, and almost no emphasis on participation. 
Most learning activities and assessments mapped to the ‘creating’ domain of Bloom’s Taxonomy, whereas most learning outcomes mapped to the ‘evaluating’ domain. Overall, most research learning and assessment activities were related to leading a scholarly project.Conclusions: Australian specialist college curricula appear to emphasise a role for trainees in leading research and producing research deliverables, but do not mandate formal research training and supervision by experienced researchers.
Publisher: Wiley
Date: 12-2016
DOI: 10.1111/FCT.12278
Publisher: Cold Spring Harbor Laboratory
Date: 22-06-2022
DOI: 10.1101/2022.06.20.22276621
Abstract: The impact of COVID-19 vaccination on preventing or treating long COVID is unclear. We aim to assess the impact of COVID vaccinations administered (i) before and (ii) after acute COVID-19, including vaccination after long COVID diagnosis, on the rates or symptoms of long COVID. We searched PubMed, Embase, Cochrane COVID-19 trials, and Europe PMC for preprints from 1 Jan 2020 to 16 Feb 2022. We included trials, cohort, and case-control studies reporting on long COVID cases and symptoms with vaccine administration both before and after COVID-19 diagnosis, as well as after long COVID diagnosis. Risk of bias was assessed using ROBINS-I. We screened 356 articles and found no trials, but 6 observational studies from 3 countries (USA, UK, France) that reported on 442,601 patients. The most common long COVID symptoms studied include fatigue, cough, loss of smell, shortness of breath, loss of taste, headache, muscle ache, trouble sleeping, difficulty concentrating, worry or anxiety, and memory loss or confusion. Four studies reported data on vaccination before SARS-CoV-2 infection, of which three showed a statistically significant reduction in long COVID: the odds ratio of developing long COVID with one dose of vaccine ranged from 0.22 to 1.03; with two doses, from 0.51 to 1.0; and with any dose, from 0.85 to 1.01. Three studies reported on post-infection vaccination, with odds ratios between 0.38 and 0.91. The high heterogeneity between studies precluded any meaningful meta-analysis. Studies failed to adjust for potential confounders such as other protective behaviours, and had missing data, thus increasing the risk of bias and decreasing the certainty of evidence to low. Current studies suggest that COVID-19 vaccinations may have protective and therapeutic effects on long COVID. However, more robust comparative observational studies and trials are urgently needed to clearly determine the effectiveness of vaccines in the prevention and treatment of long COVID.
Publisher: Royal College of General Practitioners
Date: 27-01-2020
Abstract: Approximately 15% of community-prescribed antibiotics are used in treating urinary tract infections (UTIs). The increase in antibiotic resistance necessitates considering alternatives. To assess the impact of increased fluid intake in individuals at risk for UTIs on UTI recurrence (primary outcome), antimicrobial use, and UTI symptoms (secondary outcomes). A systematic review. The authors searched PubMed, Cochrane CENTRAL, EMBASE, and two trial registries, and conducted forward and backward citation searches of included studies in January 2019. Randomised controlled trials of individuals at risk for UTIs were included; comparisons with antimicrobials were excluded. Different time-points (≤6 months and 12 months) were compared for the primary outcome. Risk of bias was assessed using the Cochrane Risk of Bias tool. Meta-analyses were undertaken where ≥3 studies reported the same outcome. Eight studies were included; seven were meta-analysed. There was a statistically non-significant reduction in the number of patients with any UTI recurrence in the increased fluid intake group compared with control after 12 months (odds ratio [OR] 0.39, 95% confidence interval [CI] = 0.15 to 1.03, P = 0.06); the reduction was significant at ≤6 months (OR 0.13, 95% CI = 0.07 to 0.25, P < .001). Excluding studies with a low volume of fluid ( ml) significantly favoured increased fluid intake (OR 0.25, 95% CI = 0.11 to 0.59, P = 0.001). Increased fluid intake reduced the overall rate of all recurrent UTIs (rate ratio [RR] 0.46, 95% CI = 0.40 to 0.54, P < .001); there was no difference in antimicrobial use (OR 0.52, 95% CI = 0.25 to 1.07, P = 0.08). Paucity of data precluded meta-analysing symptoms. Given the minimal potential for harm, patients with recurrent UTIs could be advised to drink more fluids to reduce recurrent UTIs. Further research is warranted to establish the optimal volume and type of increased fluid.
Publisher: CSIRO Publishing
Date: 2020
DOI: 10.1071/PYV26N4ABS
Publisher: American Board of Family Medicine (ABFM)
Date: 2021
Publisher: BMJ
Date: 06-2019
Publisher: Wiley
Date: 13-10-2015
Publisher: Wiley
Date: 21-06-2023
Abstract: Clinical decision aids (CDAs) can help clinicians with patient risk assessment. However, there are few data on CDA calculation, interpretation and documentation in real‐world ED settings. The ABCD2 score (range 0–7) is a CDA used for patients with transient ischaemic attack (TIA) to assess the risk of stroke, with a score of 0–3 being low risk. The aim of this study was to describe ABCD2 score documentation in patients with an ED diagnosis of TIA. Retrospective observational study of patients with a working diagnosis of TIA in two Australian EDs. Data were gathered from routinely collected health informatics sources and from medical records reviewed by a trained data abstractor. ABCD2 scores were calculated and compared with what was documented by the treating clinician. Data were presented using descriptive analysis and scatter plots. Among the 367 patients with an ED diagnosis of TIA, clinicians documented an ABCD2 score in 45% (95% CI 40–50%, n = 165). Overall, there was very good agreement between calculated and documented scores (Cohen's kappa 0.90). The mean documented and calculated ABCD2 scores were similar (3.8, SD = 1.5, n = 165 vs 3.7, SD = 1.8, n = 367). Documented scores on the threshold of low and high risk were more likely to be discordant with calculated scores. The ABCD2 score was documented in fewer than half of eligible patients. When documented, clinicians were generally accurate in their calculation and application of the ABCD2. No independent predictors of ABCD2 documentation were identified.
Publisher: John Wiley & Sons, Ltd
Date: 15-08-2012
Publisher: Oxford University Press (OUP)
Date: 04-2011
DOI: 10.1373/CLINCHEM.2010.157586
Abstract: The measurement of hemoglobin A1c (Hb A1c) is employed in monitoring patients with diabetes. Point-of-care testing (POCT), which provides Hb A1c results at the time of the patient consultation, potentially offers an opportunity for greater interaction between patient and caregiver, and more effective care. To perform a systematic review of current trials to determine whether POCT for Hb A1c, compared with conventional laboratory testing, improves outcomes for patients with diabetes. Searches were undertaken on 4 electronic databases and bibliographies from, and hand searches of, relevant journal papers. Only randomized controlled trials were included. The primary outcome measures were change in Hb A1c and treatment intensification. Meta-analyses were performed on the data obtained. Seven trials were found. There was a nonsignificant reduction of 0.09% (95% CI −0.21 to 0.02) in the Hb A1c in the POCT group compared to the standard group. Although data were collected on the change in the proportion of patients reaching a target Hb A1c and on treatment intensification, heterogeneity in the populations studied and in how measures were reported precluded pooling of data and meta-analysis. Positive patient satisfaction was also reported in the studies, as well as limited assessments of costs. There is an absence of evidence in clinical trial data to date for the effectiveness of POCT for Hb A1c in the management of diabetes. In future studies, attention to trial design is needed to ensure appropriate selection and stratification of patients, collection of outcome measures, and action taken upon Hb A1c results when produced.
Publisher: Wiley
Date: 22-07-2026
Publisher: American Astronomical Society
Date: 26-08-2020
Publisher: Georg Thieme Verlag KG
Date: 10-1986
Abstract: The development of investigative strategies by decision analysis has been achieved by explicitly drawing the decision tree, either by hand or on computer. This paper discusses the feasibility of automatically generating and analysing decision trees from a description of the investigations and the treatment problem. The investigation of cholestatic jaundice is used to illustrate the technique. Methods to decrease the number of calculations required are presented. It is shown that this method makes practical the simultaneous study of at least half a dozen investigations. However, some new problems arise due to the possible complexity of the resulting optimal strategy. If protocol errors and delays due to testing are considered, simpler strategies become desirable. Generation and assessment of these simpler strategies are discussed with examples.
Publisher: BMJ
Date: 07-2023
Publisher: Wiley
Date: 2008
Publisher: American Physical Society (APS)
Date: 18-10-2019
Publisher: Oxford University Press (OUP)
Date: 1998
Publisher: Elsevier BV
Date: 02-2013
DOI: 10.1016/J.JCLINEPI.2012.01.012
Abstract: Summary of Findings (SoF) tables present, for each of the seven (or fewer) most important outcomes, the following: the number of studies and number of participants; the confidence in effect estimates (quality of evidence); and the best estimates of relative and absolute effects. Potentially challenging choices in preparing a SoF table include using direct evidence (which may have very few events) or indirect evidence (from a surrogate) as the best evidence for a treatment effect. If a surrogate is chosen, it must be labeled as substituting for the corresponding patient-important outcome. Another such choice is presenting evidence from low-quality randomized trials or high-quality observational studies. When in doubt, a reasonable approach is to present both sets of evidence; if the two bodies of evidence have similar quality but discrepant results, one would rate down further for inconsistency. For binary outcomes, relative risks (RRs) are the preferred measure of relative effect and, in most instances, are applied to the baseline or control group risks to generate absolute risks. Ideally, the baseline risks come from observational studies including representative patients and identifying easily measured prognostic factors that define groups at differing risk. In the absence of such studies, relevant randomized trials provide estimates of baseline risk. When confidence intervals (CIs) around the relative effect include no difference, one may simply state in the absolute risk column that results fail to show a difference, omit the point estimate and report only the CIs, or add a comment emphasizing the uncertainty associated with the point estimate.
Publisher: AMPCo
Date: 10-2001
Publisher: BMJ
Date: 05-1999
Publisher: American Astronomical Society
Date: 18-12-2017
Publisher: Springer Science and Business Media LLC
Date: 16-10-2017
DOI: 10.1038/NATURE24471
Abstract: On 17 August 2017, the Advanced LIGO and Virgo detectors observed the gravitational-wave event GW170817-a strong signal from the merger of a binary neutron-star system. Less than two seconds after the merger, a γ-ray burst (GRB 170817A) was detected within a region of the sky consistent with the LIGO-Virgo-derived location of the gravitational-wave source. This sky region was subsequently observed by optical astronomy facilities, resulting in the identification of an optical transient signal within about ten arcseconds of the galaxy NGC 4993. This detection of GW170817 in both gravitational waves and electromagnetic waves represents the first 'multi-messenger' astronomical observation. Such observations enable GW170817 to be used as a 'standard siren' (meaning that the absolute distance to the source can be determined directly from the gravitational-wave measurements) to measure the Hubble constant. This quantity represents the local expansion rate of the Universe, sets the overall scale of the Universe and is of fundamental importance to cosmology. Here we report a measurement of the Hubble constant that combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using the electromagnetic data. In contrast to previous measurements, ours does not require the use of a cosmic 'distance ladder': the gravitational-wave analysis can be used to estimate the luminosity distance out to cosmological scales directly, without the use of intermediate astronomical distance measurements. We determine the Hubble constant to be about 70 kilometres per second per megaparsec. This value is consistent with existing measurements, while being completely independent of them. Additional standard siren measurements from future gravitational-wave sources will enable the Hubble constant to be constrained to high precision.
Publisher: AMPCo
Date: 10-04-2020
DOI: 10.5694/MJA2.50578
Publisher: American Diabetes Association
Date: 04-2008
DOI: 10.2337/DC07-1391
Abstract: OBJECTIVE—To investigate whether self-rated health profiles compiled using the EuroQol group’s visual analog scale (EQ VAS) are independent predictors of vascular events and major complications in people with type 2 diabetes after controlling for standard clinical risk factors. RESEARCH DESIGN AND METHODS—The study is based on 7,348 individuals with a mean follow-up of 2.4 years after completing the EQ-5D questionnaire. We used Cox proportional hazards modeling to estimate hazard ratios associated with EQ VAS scores after controlling for baseline covariates: age, sex, smoking status, diabetes duration, A1C, systolic blood pressure, BMI, plasma lipids, and prior clinical history. RESULTS—A 10-point higher EQ VAS score was associated with a 6% (95% CI 1–11) lower risk of vascular events and a 22% (95% CI 15–28) lower risk of diabetes complications. CONCLUSIONS—Self-rated health profiles compiled using the EQ VAS provide valuable information on patient risk in addition to that determined from clinical risk factors alone.
Publisher: Elsevier BV
Date: 11-2005
Publisher: BMJ
Date: 20-06-2005
Publisher: AMPCo
Date: 19-12-2020
DOI: 10.5694/MJA2.50455
Publisher: BMJ
Date: 2013
Publisher: Public Library of Science (PLoS)
Date: 03-02-2017
Publisher: BMJ
Date: 26-06-2008
Publisher: AMPCo
Date: 06-2013
DOI: 10.5694/MJA12.11576
Publisher: University of Toronto Press Inc. (UTPress)
Date: 31-12-2020
Abstract: Background: Knowing the prevalence of true asymptomatic coronavirus disease 2019 (COVID-19) cases is critical for designing mitigation measures against the pandemic. We aimed to synthesize all available research on asymptomatic cases and transmission rates. Methods: We searched PubMed, Embase, Cochrane COVID-19 trials, and Europe PMC for primary studies on asymptomatic prevalence in which (1) the sample frame includes at-risk populations and (2) follow-up was sufficient to identify pre-symptomatic cases. Meta-analysis used fixed-effects and random-effects models. We assessed risk of bias using a combination of questions adapted from risk of bias tools for prevalence and diagnostic accuracy studies. Results: We screened 2,454 articles and included 13 low risk-of-bias studies from seven countries that tested 21,708 at-risk people, of which 663 were positive and 111 asymptomatic. Diagnosis in all studies was confirmed using a real-time reverse transcriptase–polymerase chain reaction test. The asymptomatic proportion ranged from 4% to 41%. Meta-analysis (fixed effects) found that the proportion of asymptomatic cases was 17% (95% CI 14% to 20%) overall, and higher in aged care (20%; 95% CI 14% to 27%) than in non-aged care (16%; 95% CI 13% to 20%). The relative risk (RR) of asymptomatic transmission was 42% lower than that for symptomatic transmission (combined RR 0.58; 95% CI 0.34 to 0.99; p = 0.047). Conclusions: Our one-in-six estimate of the prevalence of asymptomatic COVID-19 cases and asymptomatic transmission rates is lower than those of many highly publicized studies but still sufficient to warrant policy attention. Further robust epidemiological evidence is urgently needed, including in subpopulations such as children, to better understand how asymptomatic cases contribute to the pandemic.
Publisher: Oxford University Press (OUP)
Date: 02-2004
Abstract: Our aim was to improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in a study and to evaluate the generalizability of its results. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors and members of professional organizations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, by using evidence whenever available. A prototype of a flow diagram provides information about the method of recruitment of patients, the order of test execution and the numbers of patients undergoing the test under evaluation and/or the reference standard. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve, to the advantage of clinicians, researchers, reviewers, journals and the public.
Publisher: JMIR Publications Inc.
Date: 06-2020
DOI: 10.2196/16497
Abstract: Evidence of the effectiveness of mobile health (mHealth) apps, as well as of their usability as non-drug interventions in primary care, is emerging around the globe. This study aimed to explore the feasibility of mHealth app prescription by general practitioners (GPs) and to evaluate the effectiveness of an implementation intervention to increase app prescription. A single-group, before-and-after study was conducted in Australian general practice. GPs were given prescription pads for 6 mHealth apps and reported the number of prescriptions dispensed for 4 months. After the reporting of month 2, a 2-minute video of one of the apps was randomly selected and sent to each GP. Data were collected through a pre-study questionnaire, monthly electronic reporting, and end-of-study interviews. The primary outcome was the number of app prescriptions (total, monthly, per GP, and per GP per fortnight). Secondary outcomes included confidence in prescribing apps (0-5 scale), the impact of the intervention video on subsequent prescription numbers, and acceptability of the interventions. Of 40 GPs recruited, 39 commenced, and 36 completed the study. In total, 1324 app prescriptions were dispensed over 4 months. The median number of apps prescribed per GP was 30 (range 6-111 apps). The median number of apps prescribed per GP per fortnight increased from the pre-study level of 1.7 to 4.1. Confidence about prescribing apps doubled from a mean of 2 (not so confident) to 4 (very confident). App videos did not affect subsequent prescription rates substantially. Post-study interviews revealed that the intervention was highly acceptable. mHealth app prescription in general practice is feasible, and our implementation intervention was effective in increasing app prescription. GPs need more tailored education and training on the value of mHealth apps and knowledge of prescribable apps to be able to successfully change their prescribing habits to include apps. 
The future of sustainable and scalable app prescription requires a trustworthy electronic app repository of prescribable mHealth apps for GPs.
Publisher: Oxford University Press (OUP)
Date: 10-2016
DOI: 10.2522/PTJ.20150668
Abstract: Exercise interventions are often incompletely described in reports of clinical trials, hindering evaluation of results and replication and implementation into practice. The aim of this study was to develop a standardized method for reporting exercise programs in clinical trials: the Consensus on Exercise Reporting Template (CERT). Using the EQUATOR Network's methodological framework, 137 exercise experts were invited to participate in a Delphi consensus study. A list of 41 items was identified from a meta-epidemiologic study of 73 systematic reviews of exercise. For each item, participants indicated agreement on an 11-point rating scale. Consensus for item inclusion was defined a priori as greater than 70% agreement of respondents rating an item 7 or above. Three sequential rounds of anonymous online questionnaires and a Delphi workshop were used. There were 57 (response rate=42%), 54 (response rate=95%), and 49 (response rate=91%) respondents to rounds 1 through 3, respectively, from 11 countries and a range of disciplines. In round 1, 2 items were excluded; 24 items reached consensus for inclusion (8 items accepted in original format), and 16 items were revised in response to participant suggestions. Of 14 items in round 2, 3 were excluded, 11 reached consensus for inclusion (4 items accepted in original format), and 7 were reworded. Sixteen items were included in round 3, and all items reached greater than 70% consensus for inclusion. The views of included Delphi panelists may differ from those of experts who declined participation and may not fully represent the views of all exercise experts. The CERT, a 16-item checklist developed by an international panel of exercise experts, is designed to improve the reporting of exercise programs in all evaluative study designs and contains 7 categories: materials, provider, delivery, location, dosage, tailoring, and compliance. 
The CERT will encourage transparency, improve trial interpretation and replication, and facilitate implementation of effective exercise interventions into practice.
Publisher: BMJ
Date: 26-02-2013
DOI: 10.1136/BMJ.F1271
Publisher: No publisher found
Date: 2016
Publisher: Wiley
Date: 2008
Publisher: Cold Spring Harbor Laboratory
Date: 09-06-2020
DOI: 10.1101/2020.06.09.20126110
Abstract: Timely and effective contact tracing is an essential public health role to curb the transmission of COVID-19. App-based contact tracing has the potential to optimise the resources of overstretched public health departments. However, its efficiency depends on widespread adoption. We aimed to identify the proportion of people who had downloaded the Australian Government COVIDSafe app and examine the reasons why some did not. An online national survey with representative quotas for age and gender was conducted between May 8 and May 11, 2020. Participants were excluded if they were a healthcare professional or had been tested for COVID-19. Of the 1802 potential participants contacted, 289 were excluded, 13 declined, and 1500 participated in the survey (response rate 83%). Of survey participants, 37% had downloaded the COVIDSafe app, 19% intended to, 28% refused, and 16% were undecided. The reasons for not downloading the app included, in roughly equal proportions, privacy concerns (25%) and technical concerns (24%). Other reasons included a belief that social distancing was sufficient and the app is unnecessary (16%), distrust in the Government (11%), and apathy (11%). In addition, COVIDSafe knowledge varied, with confusion about its purpose and capabilities. For the COVIDSafe app to be accepted by the public and used correctly, public health messages need to address the concerns of citizens, specifically with regard to privacy, data storage, and technical capabilities. Understanding the specific barriers preventing the uptake of tracing apps provides the opportunity to design targeted communication strategies aimed at strengthening public health initiatives such as download and correct use.
Publisher: Georg Thieme Verlag KG
Date: 07-1984
Abstract: Much attention has been given to the deductive methods appropriate for medical diagnosis, but much less has been paid to the data structures required to support them. In this paper we apply the linguistically oriented information analysis technique NIAM to the problem and demonstrate how such a conceptual approach could be used for history taking, knowledge acquisition, and diagnosis. We outline the underlying structures in which medical knowledge is traditionally expressed, and use cardiorespiratory disorders as examples. The acyclic network structure of diagnostic categories suggested by this analysis is compared to traditional hierarchical approaches.
Publisher: Springer Science and Business Media LLC
Date: 26-04-2017
Publisher: Wiley
Date: 2008
Publisher: BMJ
Date: 03-01-2004
Publisher: Wiley
Date: 17-07-2022
DOI: 10.5694/MJA2.51655
Publisher: Cold Spring Harbor Laboratory
Date: 24-07-2020
DOI: 10.1101/2020.07.22.20160432
Abstract: To compare the effectiveness of hand hygiene using alcohol-based hand sanitiser to soap and water for preventing the transmission of acute respiratory infections (ARIs), and to assess the relationship between the dose of hand hygiene and the number of ARI, influenza-like illness (ILI), or influenza events. Systematic review of randomised trials that compared a community-based hand hygiene intervention (soap and water, or sanitiser) with a control, or trials that compared sanitiser with soap and water, and measured outcomes of ARI, ILI, or laboratory-confirmed influenza or related consequences. Searches were conducted in CENTRAL, PubMed, Embase, CINAHL and trial registries (April 2020), and data extraction was completed by independent pairs of reviewers. Eighteen trials were included. When meta-analysed, three trials of soap and water versus control found a non-significant increase in ARI events (Risk Ratio (RR) 1.23, 95%CI 0.78-1.93); six trials of sanitiser versus control found a significant reduction in ARI events (RR 0.80, 95%CI 0.71-0.89). When hand hygiene dose was plotted against ARI relative risk, no clear dose-response relationship was observable. Four trials were head-to-head comparisons of sanitiser and soap and water but were too heterogeneous to pool: two found a significantly greater reduction in the sanitiser group compared to the soap group; two found no significant difference between the intervention arms. Adequately performed hand hygiene, with either soap or sanitiser, reduces the risk of ARI virus transmission; however, direct and indirect evidence suggest sanitiser might be more effective in practice.
Publisher: Wiley
Date: 2008
Publisher: The Royal Australian College of General Practitioners
Date: 02-2022
Publisher: Oxford University Press (OUP)
Date: 11-09-2009
Abstract: Using accurate and easy to use rapid antigen detection tests (RADTs) to identify group A beta-haemolytic Streptococci (GABHS) sore throat infections could reduce unnecessary antibiotic prescribing and antimicrobial resistance. Although there is no international consensus on the use of RADTs, these kits have been widely adopted in Finland, France and the USA. Yet in the UK, the Clinical Knowledge Summaries, that provide the main online guidance for GPs, discourage RADTs use, citing their poor sensitivity and inability to impact on prescribing decisions in acute sore throat infections. The purpose of this study was to evaluate the ease of use and in vitro accuracy (sensitivity and specificity) of the five most commonly used RADTs in Europe (OSOM Ultra, Quickvue Dipstick, Streptatest, Clearview Exact Strep A and IMI Test Pack). To ensure the RADTs were evaluated objectively, a standardized in vitro method using known concentrations of GABHS was used to remove the inherent biases associated with clinical studies. The IMI Test Pack was the easiest RADT to use overall. The ability to detect all positive GABHS (sensitivity) varied considerably between kits from 95% [95% confidence interval (CI): 88-98%], for the IMI Test Pack and OSOM, to 62% (95% CI: 51-72%) for Clearview, at the highest GABHS concentration. None of the RADTs gave any false-positive results with commensal flora-they were 100% specific. The IMI Test Pack is most suitable for use in primary care, as it had high sensitivity, high specificity and was easy to use.
Publisher: BMJ
Date: 16-03-2015
DOI: 10.1136/BMJ.H870
Publisher: Springer US
Date: 2006
Publisher: American Society of Clinical Oncology (ASCO)
Date: 1989
Abstract: The use of adjuvant chemotherapy for postmenopausal patients with early breast cancer remains controversial because the potential benefits in terms of prolongation of disease-free survival (DFS) and overall survival (OS) must be balanced against the toxicity of treatment. Following mastectomy, 463 evaluable postmenopausal women with node-positive breast cancer were randomized to receive either chemoendocrine therapy for 1 year, or endocrine therapy alone for 1 year, or no adjuvant therapy (Ludwig Trial III). At 7-years median follow-up, OS was longer for the chemoendocrine-treated patients compared with controls (P = .04) and compared with the adjuvant endocrine therapy-alone group (P = .08). In order to balance this therapeutic advantage against the toxic effects of treatment, OS time was divided into time with toxicity (TOX), time without symptoms and toxicity (TWiST), and time after systemic relapse (REL). TOX and REL were weighted by coefficients of utility relative to TWiST and the results added to give a period of quality-adjusted survival (Q-TWiST). Benefits measured by Q-TWiST generally favored chemoendocrine therapy. For example, if TOX and REL were both given utility coefficients of 0.5 relative to 1.0 for TWiST, then by 7 years the average Q-TWiST for chemoendocrine therapy was 6.7 months longer than for no adjuvant therapy (P = .05) and 4.1 months longer than for endocrine therapy alone (P = .20). Quality-adjusted survival analysis is recommended in assessing costs and benefits of toxic adjuvant therapy. In this example, it supports the use of chemoendocrine therapy in postmenopausal node-positive patients for a wide range of relative values assigned to periods with symptoms and toxicity.
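The Q-TWiST measure described above is a weighted sum of time spent in each health state, with TWiST given full weight. A minimal sketch (the state durations below are hypothetical, not the trial's data; only the 0.5 utility coefficients come from the abstract):

```python
def q_twist(tox, twist, rel, u_tox=0.5, u_rel=0.5):
    """Quality-adjusted time without symptoms and toxicity.

    tox, twist, rel: months spent with toxicity, without symptoms
    and toxicity, and after systemic relapse, respectively.
    u_tox, u_rel: utility coefficients relative to TWiST (= 1.0).
    """
    return u_tox * tox + twist + u_rel * rel

# Hypothetical state durations in months:
q_twist(6, 48, 10)  # 0.5*6 + 48 + 0.5*10 = 56.0 quality-adjusted months
```

Varying `u_tox` and `u_rel` over their plausible ranges is what the abstract means by testing "a wide range of relative values assigned to periods with symptoms and toxicity".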
Publisher: Springer Science and Business Media LLC
Date: 05-01-2005
Publisher: Springer Science and Business Media LLC
Date: 19-04-2012
Publisher: BMJ
Date: 05-2000
DOI: 10.1136/EBM.5.3.76
Publisher: Elsevier BV
Date: 02-1995
Publisher: American College of Physicians
Date: 11-1998
Publisher: Elsevier BV
Date: 11-2017
DOI: 10.1016/J.JCLINEPI.2017.08.011
Abstract: New approaches to evidence synthesis, which use human effort and machine automation in mutually reinforcing ways, can enhance the feasibility and sustainability of living systematic reviews. Human effort is a scarce and valuable resource, required when automation is impossible or undesirable, and includes contributions from online communities ("crowds") as well as more conventional contributions from review authors and information specialists. Automation can assist with some systematic review tasks, including searching, eligibility assessment, identification and retrieval of full-text reports, extraction of data, and risk of bias assessment. Workflows can be developed in which human effort and machine automation can each enable the other to operate in more effective and efficient ways, offering substantial enhancement to the productivity of systematic reviews. This paper describes and discusses the potential-and limitations-of new ways of undertaking specific tasks in living systematic reviews, identifying areas where these human/machine "technologies" are already in use, and where further research and development is needed. While the context is living systematic reviews, many of these enabling technologies apply equally to standard approaches to systematic reviewing.
Publisher: Elsevier BV
Date: 11-1995
DOI: 10.1016/S0002-9149(99)80259-8
Abstract: The Prospective Pravastatin Pooling (PPP) project is a pooled evaluation of 3 large, placebo-controlled, randomized trials of cholesterol-lowering treatment with pravastatin. It is designed to more reliably evaluate the effect of treatment on coronary and all-cause mortality and on total coronary artery disease (CAD) events for specific populations of interest, including women and the elderly. The trials--Long-Term Intervention With Pravastatin in Ischemic Disease trial, the Cholesterol and Recurrent Events trial, and the West of Scotland Coronary Prevention Study--each have common design features, including drug, dose, and duration. The project prospectively defines the objectives, end points, and analytic plans in a protocol developed before results are known of any individual trial. More than 2,000 (or 10%) of the participants in the pooled data set are women, 1,841 are aged > or = 70 years at trial entry, and > 6,000 have a total cholesterol 1,000 cancers by study completion. (ABSTRACT TRUNCATED AT 250 WORDS)
Publisher: Springer Science and Business Media LLC
Date: 02-05-2007
Publisher: Elsevier BV
Date: 03-2003
Publisher: BMJ
Date: 24-02-2016
DOI: 10.1136/BMJ.I813
Publisher: AMPCo
Date: 02-2002
Publisher: American Academy of Pediatrics (AAP)
Date: 03-2007
Abstract: OBJECTIVE. The goal was to determine the predictors of a prolonged course for children with acute otitis media. METHODS. A meta-analysis of data from the observation groups of 6 randomized, controlled trials was performed. Participants were 824 children, 6 months to 12 years of age, with acute otitis media. The primary outcome was a prolonged course of acute otitis media, which was defined as fever and/or pain at 3 to 7 days. RESULTS. Of the 824 included children, 303 had pain and/or fever at 3 to 7 days. Independent predictors of a prolonged course were age of <2 years and bilateral acute otitis media. The absolute risk of pain and/or fever at 3 to 7 days for children <2 years of age with bilateral acute otitis media (20% of all children) was 55%, and that for children ≥2 years of age with unilateral acute otitis media (47% of all children) was 25%. CONCLUSIONS. The risk of a prolonged course was 2 times higher for children <2 years of age with bilateral acute otitis media than for children ≥2 years of age with unilateral acute otitis media. Clinicians can use these features (ie, age of <2 years and bilateral acute otitis media) to inform parents more explicitly about the expected course of their child's otitis media and to explain which features should prompt parents to contact their clinician for reexamination of the child.
Publisher: Elsevier BV
Date: 12-2002
Publisher: American Physical Society (APS)
Date: 02-04-2020
Publisher: Springer Science and Business Media LLC
Date: 04-11-2021
DOI: 10.1186/S13756-021-01025-3
Abstract: The effect of eye protection to prevent SARS-CoV-2 infection in the real world remains uncertain. We aimed to synthesize all available research on the potential impact of eye protection on transmission of SARS-CoV-2. We searched PROSPERO, PubMed, Embase, The Cochrane Library for clinical trials and comparative observational studies in CENTRAL, and Europe PMC for pre-prints. We included studies that reported sufficient data to estimate the effect of any form of eye protection, including face shields and variants, goggles, and glasses, on subsequent confirmed infection with SARS-CoV-2. We screened 898 articles and included 6 reports of 5 observational studies from 4 countries (USA, India, Colombia, and United Kingdom) that tested face shields, goggles, and wraparound eyewear on 7567 healthcare workers. The three before-and-after studies and one retrospective cohort study showed statistically significant and substantial reductions in SARS-CoV-2 infections favouring eye protection, with odds ratios ranging from 0.04 to 0.6, corresponding to relative risk reductions of 96% to 40%. These reductions were not explained by changes in the community rates. However, the one case–control study reported an odds ratio favouring no eye protection (OR 1.7, 95% CI 0.99, 3.0). The high heterogeneity between studies precluded any meaningful meta-analysis. None of the studies adjusted for potential confounders such as other protective behaviours, thus increasing the risk of bias, and decreasing the certainty of evidence to very low. Current studies suggest that eye protection may play a role in prevention of SARS-CoV-2 infection in healthcare workers. However, robust comparative trials are needed to clearly determine the effectiveness of eye protection and wearability issues in both healthcare and general populations.
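The conversion above from odds ratios to relative risk reductions treats the odds ratio as an approximation of the risk ratio, which is reasonable when the outcome is uncommon. A sketch of that arithmetic (an assumption about the conversion used, not a method stated in the abstract):

```python
def rrr_from_or(odds_ratio):
    """Approximate relative risk reduction (%) from an odds ratio.

    Treats OR as an approximation of RR, so RRR = (1 - OR) * 100.
    Valid only when the baseline event rate is low.
    """
    return round((1 - odds_ratio) * 100, 1)

rrr_from_or(0.04)  # 96.0 -> a 96% relative risk reduction
rrr_from_or(0.6)   # 40.0 -> a 40% relative risk reduction
```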
Publisher: BMJ
Date: 08-09-2014
Publisher: F1000 Research Ltd
Date: 13-04-2022
DOI: 10.12688/F1000RESEARCH.109490.1
Abstract: Risk prediction models are potentially useful tools for health practitioners and policy makers. When new predictors are proposed to add to existing models, the improvement of discrimination is one of the main measures to assess any increment in performance. In assessing such predictors, we observed two paradoxes: 1) the discriminative ability within all individual risk strata was worse than for the overall population; and 2) incremental discrimination after including a new predictor was greater within each individual risk stratum than for the whole population. We show two examples of the paradoxes and analyse the possible causes. The key cause of bias is use of the same prediction model both for stratifying the population and as the base model to which the new predictor is added.
Publisher: National Institute for Health and Care Research
Date: 2014
DOI: 10.3310/HTA18060
Abstract: Antibiotics are still prescribed to most patients attending primary care with acute sore throat, despite evidence that there is modest benefit overall from antibiotics. Targeting antibiotics using either clinical scoring methods or rapid antigen detection tests (RADTs) could help. However, there is debate about which groups of streptococci are important (particularly Lancefield groups C and G), and uncertainty about the variables that most clearly predict the presence of streptococci. This study aimed to compare clinical scores or RADTs with delayed antibiotic prescribing. The study comprised an in vitro RADT study; two diagnostic cohorts to develop streptococcal scores (score 1; score 2); and, finally, an open pragmatic randomised controlled trial with nested qualitative and cost-effectiveness studies. The setting was UK primary care general practices. Participants were patients aged ≥ 3 years with acute sore throat. An internet program randomised patients to targeted antibiotic use according to (1) delayed antibiotics (control group), (2) clinical score or (3) RADT used according to clinical score. The main outcome measures were self-reported antibiotic use and symptom duration and severity on seven-point Likert scales (primary outcome: mean sore throat/difficulty swallowing score in the first 2–4 days). The IMI TestPack Plus Strep A (Inverness Medical, Bedford, UK) was sensitive, specific and easy to use. Lancefield group A/C/G streptococci were found in 40% of cohort 2 and 34% of cohort 1. A five-point score predicting the presence of A/C/G streptococci [FeverPAIN: Fever; Purulence; Attend rapidly (≤ 3 days); severe Inflammation; and No cough or coryza] had moderate predictive value (bootstrapped estimates of area under receiver operating characteristic curve: 0.73 for cohort 1, 0.71 for cohort 2) and identified a substantial number of participants at low risk of streptococcal infection.
In total, 38% of cohort 1 and 36% of cohort 2 scored ≤ 1 for FeverPAIN, associated with streptococcal percentages of 13% and 18%, respectively. In an adaptive trial design, the preliminary score (score 1; n = 1129) was replaced by FeverPAIN (n = 631). For score 1, there were no significant differences between groups. For FeverPAIN, symptom severity was documented in 80% of patients, and was lower in the clinical score group than in the delayed prescribing group (–0.33; 95% confidence interval –0.64 to –0.02; p = 0.039; equivalent to one in three rating sore throat a slight rather than moderately bad problem), and a similar reduction was observed for the RADT group (–0.30; –0.61 to 0.00; p = 0.053). Moderately bad or worse symptoms resolved significantly faster (30%) in the clinical score group (hazard ratio 1.30; 1.03 to 1.63) but not the RADT group (1.11; 0.88 to 1.40). In the delayed group, 75/164 (46%) used antibiotics, and 29% fewer used antibiotics in the clinical score group (risk ratio 0.71; 0.50 to 0.95; p = 0.018) and 27% fewer in the RADT group (0.73; 0.52 to 0.98; p = 0.033). No significant differences in complications or reconsultations were found. The clinical score group dominated both other groups for both the cost/quality-adjusted life-years and cost/change in symptom severity analyses, being both less costly and more effective, and cost-effectiveness acceptability curves indicated the clinical score to be the most likely to be cost-effective from an NHS perspective. Patients were positive about RADTs. Health professionals’ concerns about test validity, the time the test took and medicalising self-limiting illness lessened after using the tests. For both RADTs and clinical scores, there were tensions with established clinical experience. Targeting antibiotics using a clinical score (FeverPAIN) efficiently improves symptoms and reduces antibiotic use.
RADTs used in combination with FeverPAIN provide no clear advantages over FeverPAIN alone, and RADTs are unlikely to be incorporated into practice until health professionals’ concerns are met and they have experience of using them. Clinical scores also face barriers related to clinicians’ perceptions of their utility in the face of experience. This study has demonstrated the limitation of using one data set to develop a clinical score. FeverPAIN, derived from two data sets, appears to be valid and its use improves outcomes, but diagnostic studies to confirm the validity of FeverPAIN in other data sets and settings are needed. Experienced clinicians need to identify barriers to the use of clinical scoring methods. Implementation studies that address perceived barriers in the use of FeverPAIN are needed. Current Controlled Trials ISRCTN32027234. This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment Vol. 18, No. 6. See the NIHR Journals Library website for further project information.
Publisher: Elsevier BV
Date: 03-2012
DOI: 10.1016/J.AHJ.2011.12.004
Abstract: In the FIELD study, comparison of the effect of fenofibrate on cardiovascular disease (CVD) between those with prior CVD and without was a prespecified subgroup analysis. The effects of fenofibrate on total CVD events and its components in patients who did (n = 2,131) and did not (n = 7,664) have a history of CVD were computed by Cox proportional hazards modeling and compared by testing for treatment-by-subgroup interaction. The analyses were adjusted for commencement of statins, use of other CVD medications, and baseline covariates. Effects on other CVD end points were explored. Patients with prior CVD were more likely than those without to be male, to be older (by 3.3 years), to have had a history of diabetes for 2 years longer at baseline, and to have diabetic complications, hypertension, and higher rates of use of insulin and CVD medications. Discontinuation of fenofibrate was similar between the subgroups, but more patients with prior CVD than without, and also more placebo than fenofibrate-assigned patients, commenced statin therapy. The borderline difference in the effects of fenofibrate between those who did (hazard ratio [HR] 1.02, 95% CI 0.86-1.20) and did not have prior CVD (HR 0.81, 95% CI 0.70-0.94; heterogeneity P = .045) became nonsignificant after adjustment for baseline covariates and other CVD medications (HR 0.96, 95% CI 0.81-1.14 vs HR 0.78, 95% CI 0.67-0.90; heterogeneity P = .06). Our findings do not support treating patients with fenofibrate differently based on any history of CVD, in line with evidence from other trials.
Publisher: American College of Physicians
Date: 2000
Publisher: National Institute for Health and Care Research
Date: 12-2009
DOI: 10.3310/HTA13600
Abstract: To determine the diagnostic performance and cost-effectiveness of colour vision testing (CVT) to identify and monitor the progression of diabetic retinopathy (DR). Major electronic databases including MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature, and Cochrane Database of Systematic Reviews were searched from inception to September 2008. A systematic review of the evidence was carried out according to standard methods. An online survey of National Screening Programme for Diabetic Retinopathy (NSPDR) clinical leads and programme managers assessed the diagnostic tools used routinely by local centres and their views on future research priorities. A decision tree and Markov model was developed to estimate the incremental costs and effects of adding CVT to the current NSPDR. In total, 25 studies on CVT met the inclusion criteria for the review, including 18 presenting 2 x 2 diagnostic accuracy data. The quality of studies and reporting was generally poor. Automated or computerised CVTs reported variable sensitivities (63-97%) and specificities (71-95%). One study reported good diagnostic accuracy estimates for computerised CVT plus retinal photography for detection of sight-threatening DR, but it included few cases of retinopathy in total. Results for pseudoisochromatic plates, anomaloscopes and colour arrangement tests were largely inadequate for DR screening, with Youden indices (sensitivity + specificity - 100%) close to zero. No studies were located that addressed patient preferences relating to CVT for DR. Retinal photography is universally employed as the primary method for retinal screening by centres responding to the online survey; none used CVT. The review of the economic evaluation literature found no previous studies describing the cost and effects of any type of CVT.
Our economic evaluation suggested that adding CVT to the current national screening programme could be cost-effective if it adequately increases sensitivity and is relatively inexpensive. The deterministic base-case analysis indicated that the cost per quality-adjusted life-year gained may be 6364 pounds and 12,432 pounds for type 1 and type 2 diabetes respectively. However, probabilistic sensitivity analysis highlighted the substantial probability that CVT is not diagnostically accurate enough to be either an effective or a cost-effective addition to current screening methods. The results of the economic model should be treated with caution as the model is based on only one small study. There is insufficient evidence to support the use of CVT alone, or in combination with retinal photography, as a method for screening for retinopathy in patients with diabetes. Better quality diagnostic accuracy studies directly comparing the incremental value of CVT in addition to retinal photography are needed before drawing conclusions on cost-effectiveness. The most frequently cited preference for future research was the use of optical coherence tomography for the detection of clinically significant macular oedema.
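The Youden index quoted in the review above (sensitivity + specificity - 100%) is a simple summary of diagnostic accuracy; a value near zero means the test performs no better than chance. A direct sketch of the calculation (the inputs below are illustrative, not the review's pooled estimates):

```python
def youden_index(sensitivity, specificity):
    """Youden index in percentage points: sens + spec - 100.

    0 means a test equivalent to chance; 100 is a perfect test.
    Inputs are percentages (0-100).
    """
    return sensitivity + specificity - 100

youden_index(63, 71)  # 34  (a modestly informative test)
youden_index(50, 50)  # 0   (chance-level, as reported for the plate tests)
```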
Publisher: Springer Science and Business Media LLC
Date: 06-1995
DOI: 10.1007/BF03324311
Abstract: Estrogen receptor α (ERα) is a hormone receptor and key driver for over 70% of breast cancers that has been studied for decades as a transcription factor. Unexpectedly, we discover that ERα is a potent non-canonical RNA-binding protein. We show that ERα RNA binding function is uncoupled from its activity to bind DNA and critical for breast cancer progression. Employing genome-wide cross-linking immunoprecipitation (CLIP) sequencing and a functional CRISPRi screen, we find that ERα-associated mRNAs sustain cancer cell fitness and elicit cellular responses to stress. Mechanistically, ERα controls different steps of RNA metabolism. In particular, we demonstrate that ERα RNA binding mediates alternative splicing of XBP1 and translation of the eIF4G2 and MCL1 mRNAs, which facilitates survival upon stress conditions and sustains tamoxifen resistance of cancer cells. ERα is therefore a multifaceted RNA-binding protein, and this activity transforms our knowledge of post-transcriptional regulation underlying cancer development and drug response.
Publisher: American Medical Association (AMA)
Date: 12-2017
Publisher: Elsevier BV
Date: 04-1996
DOI: 10.1111/J.1753-6405.1996.TB01807.X
Abstract: The incremental costs and effects of annual faecal occult blood test screening in Australia were modelled for a hypothetical cohort of 1000 persons offered screening or not offered screening. Incremental costs and effects were estimated as the differences in direct health care costs (Australian costs) and years of life remaining between the annual-screen group and the control (no screen) group, based on the published results of the Minnesota randomised controlled trial. The cost per life year saved was $24,660. The greatest source of variability in the cost-effectiveness ratio is the effectiveness of screening. The 95 percent confidence interval for cumulative mortality in the annual-screen group is 3.86 to 7.9 per 1000, assuming the control rate is fixed at 8.83 per 1000. With this confidence interval, the cost per life year saved ranges from $12,695 to $67,848. The cost-effectiveness ratio increases to $48,000 if no mortality benefit is assumed beyond the end of the trial follow-up period, 13 years. The results are sensitive to the cost of colonoscopy (at $400 per colonoscopy, the cost per life year saved is $12,319) and the false-positive rate. The cost-effectiveness of colorectal cancer screening is comparable with that of other screening programs but further evidence is needed on the efficacy of screening. Whether the benefits of colorectal cancer screening outweigh the harm and costs needs to be more certain before more resources are committed to mass screening. Health policy planners should initiate planning for Australian pilot projects in the event that the efficacy of screening is confirmed by two current studies.
Publisher: BMJ
Date: 29-08-1998
Abstract: To review effectiveness of screening for colorectal cancer with faecal occult blood test, Hemoccult, and to consider benefits and harms of screening. Systematic review of trials of Hemoccult screening, with meta-analysis of results from the randomised controlled trials. Four randomised controlled trials and two non-randomised trials of about 330 000 and 113 000 people respectively aged >=40 years in five countries. Meta-analysis of effects of screening on mortality from colorectal cancer. Quality of trial design was generally high, and screening resulted in a favourable shift in the stage distribution of colorectal cancers in the screening groups. Meta-analysis of mortality results from the four randomised controlled trials showed that those allocated to screening had a reduction in mortality from colorectal cancer of 16% (relative risk 0.84 (95% confidence interval 0.77 to 0.93)). When adjusted for attendance for screening, this reduction was 23% (relative risk 0.77 (0.57 to 0.89)) for people actually screened. If a biennial Hemoccult screening programme were offered to 10 000 people and about two thirds attended for at least one Hemoccult test, 8.5 (3.6 to 13.5) deaths from colorectal cancer would be prevented over a period of 10 years. Although benefits of screening are likely to outweigh harms for populations at high risk of colorectal cancer, more information is needed about the harmful effects of screening, the community's responses to screening, and costs of screening for different healthcare systems before widespread screening can be recommended.
Publisher: AMPCo
Date: 1991
Publisher: Springer Science and Business Media LLC
Date: 07-02-2020
DOI: 10.1186/S12884-020-2745-1
Abstract: Gestational diabetes mellitus (GDM) - a transitory form of diabetes induced by pregnancy - has potentially important short and long-term health consequences for both the mother and her baby. There is no globally agreed definition of GDM, but definition changes have increased the incidence in some countries in recent years, with some research suggesting minimal clinical improvement in outcomes. The aim of this qualitative systematic review was to identify the psychosocial experiences a diagnosis of GDM has on women during pregnancy and the postpartum period. We searched CINAHL, EMBASE, MEDLINE and PsycINFO databases for studies that provided qualitative data on the psychosocial experiences of a diagnosis of GDM on women across any stage of pregnancy and/or the postpartum period. We appraised the methodological quality of the included studies using the Critical Appraisal Skills Programme Checklist for Qualitative Studies and used thematic analysis to synthesise the data. Of 840 studies identified, 41 studies of diverse populations met the selection criteria. The synthesis revealed eight key themes: initial psychological impact; communicating the diagnosis; knowledge of GDM; risk perception; management of GDM; burden of GDM; social support; and gaining control. The identified benefits of a GDM diagnosis were largely behavioural and included an opportunity to make healthy eating changes. The identified harms were emotional, financial and cultural. Women commented about the added responsibility (eating regimens, appointments), financial constraints (expensive food, medical bills) and conflicts with their cultural practices (alternative eating, lack of information about traditional food). Some women reported living in fear of risking the health of their baby and conducted extreme behaviours such as purging and starving themselves. A diagnosis of GDM has wide-reaching consequences that are common to a diverse group of women.
Threshold cut-offs for blood glucose levels have been determined using the risk of physiological harms to mother and baby. It may also be advantageous to consider the harms and benefits from a psychosocial and a physiological perspective. This may avoid unnecessary burden to an already vulnerable population.
Publisher: American Board of Family Medicine (ABFM)
Date: 05-2021
Publisher: American Medical Association (AMA)
Date: 11-09-2019
Publisher: Wiley
Date: 17-06-2019
DOI: 10.1111/RESP.13623
Publisher: Elsevier BV
Date: 02-2021
Publisher: Elsevier BV
Date: 09-2009
Publisher: American Medical Association (AMA)
Date: 10-01-2023
Publisher: Elsevier BV
Date: 1990
Publisher: Oxford University Press (OUP)
Date: 06-2002
Publisher: Elsevier BV
Date: 12-1998
DOI: 10.1111/J.1467-842X.1998.TB01501.X
Abstract: In Australia, Vietnamese women are at greater risk of cervical cancer than other Australian women. To increase their participation in cervical screening, the Vietnamese community was exposed to a media campaign about the advantages of cervical smear screening, which was delivered in Vietnamese through Vietnamese newspapers and radio. In addition, 689 Vietnamese women (18-67 years) were selected from the electoral roll. They were randomly assigned to either receive a personal letter written in Vietnamese promoting cervical screening, or not. We report on the effect of the letter on smear rates. Being randomised to be sent such a letter was not associated with any increase in screening (relative rate of appropriate screening in the intervention versus the control group was 0.85, 95% CI 0.55-1.3). It is important to carefully evaluate untested health promotion interventions.
Publisher: Wiley
Date: 23-06-2015
Publisher: John Wiley & Sons, Ltd
Date: 31-01-2013
Publisher: Springer Science and Business Media LLC
Date: 09-06-2014
Publisher: John Wiley & Sons, Ltd
Date: 26-01-2004
Publisher: SAGE Publications
Date: 06-1989
DOI: 10.1177/0272989X8900900208
Abstract: Usually, when a patient is being investigated, only a small subset of all available tests is performed. Most selection methods for making this choice fail to account for the risk and cost of the test. By attempting to approximate a decision-analytic ideal, via the concept of quasi-utility, the authors developed the information-to-cost ratio and related measures, which balance the utility of the information gained against the price paid. Measurement of the relative importances of diseases is used to further refine the method. Key words: test selection; diagnosis; decision analysis. (Med Decis Making 1989:133-141)
Publisher: AMPCo
Date: 18-01-2019
DOI: 10.5694/MJA2.12061
Abstract: To evaluate the performance of the 2013 Pooled Cohort Risk Equation (PCE-ASCVD) for predicting cardiovascular disease (CVD) in an Australian population, and to compare this performance with that of three frequently used Framingham-based CVD risk prediction models. Prospective national population-based cohort study. 42 randomly selected urban and non-urban areas in six Australian states and the Northern Territory. 5453 adults aged 40-74 years enrolled in the Australian Diabetes, Obesity and Lifestyle study and followed until November 2011. We excluded participants who had CVD at baseline or for whom data required for risk model calculations were missing. Predicted and observed 10-year CVD risks (adjusted for treatment drop-in); performance (calibration and discrimination) of four CVD risk prediction models: 1991 Framingham, 2008 Framingham, 2008 office-based Framingham, 2013 PCE-ASCVD. The performance of the 2013 PCE-ASCVD model was slightly better than 1991 Framingham, and each performed better than the two 2008 Framingham risk models, both in men and women. However, all four models overestimated 10-year CVD risk, particularly for patients in higher deciles of predicted risk. The 2013 PCE-ASCVD (7.5% high risk threshold) identified 46% of men and 18% of women as being at high risk; the 1991 Framingham model (20% threshold) identified 17% of men and 2% of women as being at high risk. Only 16% of men and 11% of women identified as being at high risk by the 2013 PCE-ASCVD experienced a CV event within 10 years. The 2013 PCE-ASCVD or 1991 Framingham should be used as CVD risk models in Australia. However, the CVD high risk threshold for initiating CVD primary preventive therapy requires reconsideration.
Publisher: Public Library of Science (PLoS)
Date: 19-04-2021
DOI: 10.1371/JOURNAL.PBIO.3001162
Abstract: Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention for responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We, therefore, mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients and personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs substantially increased over 4 decades, accompanied by increases in authors (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor, the risk of bias was consistently lower, with higher levels of RCT registration and the use of the CONSORT Statement.
Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients and personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
Publisher: JMIR Publications Inc.
Date: 11-03-2022
DOI: 10.2196/31780
Abstract: Mental disorders are a leading cause of distress and disability worldwide. To meet patient demand, there is a need for increased access to high-quality, evidence-based mental health care. Telehealth has become well established in the treatment of illnesses, including mental health conditions. This study aims to conduct a robust evidence synthesis to assess whether there is evidence of differences between telehealth and face-to-face care for the management of less common mental and physical health conditions requiring psychotherapy. In this systematic review, we included randomized controlled trials comparing telehealth (telephone, video, or both) versus the face-to-face delivery of psychotherapy for less common mental health conditions and physical health conditions requiring psychotherapy. The psychotherapy delivered had to be comparable between the telehealth and face-to-face groups, and it had to be delivered by general practitioners, primary care nurses, or allied health staff (such as psychologists and counselors). Patient (symptom severity, overall improvement in psychological symptoms, and function), process (working alliance and client satisfaction), and financial (cost) outcomes were included. A total of 12 randomized controlled trials were included, with 931 patients in aggregate. Therapies included cognitive behavioral and family therapies, delivered in populations encompassing addiction disorders, eating disorders, childhood mental health problems, and chronic conditions. Telehealth was delivered by video in 7 trials, by telephone in 3 trials, and by both in 1 trial, and the delivery mode was unclear in 1 trial. The risk of bias for the 12 trials was low or unclear for most domains, except for the lack of blinding of participants, owing to the nature of the comparison. 
There were no significant differences in symptom severity between telehealth and face-to-face therapy immediately after treatment (standardized mean difference [SMD] 0.05, 95% CI −0.17 to 0.27) or at any other follow-up time point. Similarly, there were no significant differences immediately after treatment between telehealth and face-to-face care delivery on any of the other outcomes meta-analyzed, including overall improvement (SMD 0.00, 95% CI −0.40 to 0.39), function (SMD 0.13, 95% CI −0.16 to 0.42), client-rated working alliance (SMD 0.11, 95% CI −0.34 to 0.57), therapist-rated working alliance (SMD −0.16, 95% CI −0.91 to 0.59), and client satisfaction (SMD 0.12, 95% CI −0.30 to 0.53), or at any other time point (3, 6, and 12 months). With regard to effectively treating less common mental health conditions and physical conditions requiring psychological support, there is insufficient evidence of a difference between psychotherapy delivered via telehealth and the same therapy delivered face-to-face. However, there was no includable evidence in this review for some serious mental health conditions, such as schizophrenia and bipolar disorders, and further high-quality research is needed to determine whether telehealth is a viable, equivalent treatment option for these conditions.
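The standardized mean differences above can be illustrated with a minimal sketch using the pooled-SD formula; the group summaries below are hypothetical, not data from the review:

```python
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d style) using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical symptom-severity summaries: telehealth vs face-to-face groups
d = smd(mean1=20.5, sd1=6.0, n1=50, mean2=20.2, sd2=6.0, n2=50)  # ~0.05
```

An SMD this close to zero, with a confidence interval straddling zero, is what the review reports for symptom severity.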
Publisher: Springer Science and Business Media LLC
Date: 2004
Publisher: BMJ
Date: 24-06-2023
Publisher: American College of Physicians
Date: 11-2007
Publisher: Springer Science and Business Media LLC
Date: 04-2009
Publisher: American College of Physicians
Date: 06-05-2008
DOI: 10.7326/0003-4819-148-9-200805060-00005
Abstract: Cholesterol level monitoring is a common clinical activity, but the optimal monitoring interval is unknown and practice varies. To estimate, in patients receiving cholesterol-lowering medication, the variation in initial response to treatment, the long-term drift from initial response, and the detectability of long-term changes in on-treatment cholesterol level ("signal") given short-term, within-person variation ("noise"). Analysis of cholesterol measurement data in the LIPID (Long-Term Intervention with Pravastatin in Ischaemic Disease) study. Randomized, placebo-controlled trial in Australia and New Zealand (June 1990 to May 1997). 9014 patients with past coronary heart disease who were randomly assigned to receive pravastatin or placebo. Serial cholesterol concentrations at randomization, 6 months, and 12 months, and then annually to 5 years. Both the placebo and pravastatin groups showed small increases in within-person variability over time. The estimated within-person SD increased from 0.40 mmol/L (15 mg/dL) (coefficient of variation, 7%) to 0.60 mmol/L (23 mg/dL) (coefficient of variation, 11%), but it took almost 4 years for the long-term variation to exceed the short-term variation. This slow increase in variation and the modest increase in mean cholesterol level, about 2% per year, suggest that most of the variation in the study is due to short-term biological and analytic variability. Our calculations suggest that, for patients with levels that are 0.5 mmol/L or more (≥19 mg/dL) under target, monitoring is likely to detect many more false-positive results than true-positive results for at least the first 3 years after treatment has commenced. Patients may respond differently to agents other than pravastatin. Future values for nonadherent patients were imputed. The signal-noise ratio in cholesterol level monitoring is weak. 
The signal of a small increase in cholesterol level is difficult to detect against the background of a short-term variability of 7%. In annual rechecks in adherent patients, many apparent increases in cholesterol level may be false positive. Independent of the office visit schedule, the interval for monitoring patients who are receiving stable cholesterol-lowering treatment could be lengthened.
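A rough sketch of the signal-to-noise argument above, assuming a hypothetical mean on-treatment level of 4.5 mmol/L (the abstract reports the within-person SD and the ~2% annual drift, but not the mean used here):

```python
import math

# Figures from the abstract: short-term within-person SD ~0.40 mmol/L;
# mean drift ~2% per year. The mean level is an assumed illustrative value.
within_person_sd = 0.40            # mmol/L
mean_level = 4.5                   # mmol/L (hypothetical)
annual_drift = 0.02 * mean_level   # the yearly "signal" in mmol/L

# The difference between two independent measurements has SD sqrt(2) * within-person SD.
noise_sd_of_difference = math.sqrt(2) * within_person_sd

# Years of drift needed before the signal reaches one SD of measurement noise
years_to_match_noise = noise_sd_of_difference / annual_drift
```

Under these assumptions the drift takes several years to emerge from the measurement noise, which is the weak signal-to-noise ratio the authors describe.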
Publisher: Elsevier BV
Date: 2016
DOI: 10.1016/J.JCLINEPI.2015.05.006
Abstract: Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors, or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods: giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores.
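A minimal sketch of the equal-weighting simplification (method 1 above); the predictor names and weights below are hypothetical stand-ins, not the actual EDACS items:

```python
# Method (1): replace the original integer weights with an equal weight of 1
# point per predictor present. Predictors and weights here are hypothetical.
original_weights = {"age_band": 2, "male_sex": 6, "diaphoresis": 3, "pain_radiation": 5}

def original_score(present):
    """Weighted score: sum the original weights of the predictors present."""
    return sum(original_weights[p] for p in present)

def simplified_score(present):
    """Equal-weight score: one point per predictor present."""
    return len(present)

patient = ["male_sex", "diaphoresis"]
weighted = original_score(patient)    # 6 + 3 = 9 with these hypothetical weights
equal = simplified_score(patient)     # 2 predictors present
```

The trade-off the abstract describes is that the equal-weight version ranks patients almost as well overall, but shifts where any given cut-off falls.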
Publisher: AMPCo
Date: 06-2012
DOI: 10.5694/MJA12.10236
Publisher: Elsevier BV
Date: 09-2021
Publisher: BMJ
Date: 11-12-2015
DOI: 10.1136/BJOPHTHALMOL-2015-306757
Abstract: To assess the efficiency of alternative monitoring services for people with ocular hypertension (OHT), a glaucoma risk factor. Discrete event simulation model comparing five alternative care pathways: treatment at OHT diagnosis with minimal monitoring; biennial monitoring (primary and secondary care) with treatment if baseline predicted 5-year glaucoma risk is ≥6%; and monitoring and treatment aligned to National Institute for Health and Care Excellence (NICE) glaucoma guidance (conservative and intensive). UK health services perspective. Simulated cohort of 10 000 adults with OHT (mean intraocular pressure (IOP) 24.9 mm Hg (SD 2.4)). Costs, glaucoma detected, quality-adjusted life years (QALYs). Treating at diagnosis was the least costly and least effective in avoiding glaucoma and progression. Intensive monitoring following NICE guidance was the most costly and effective. However, considering a wider cost-utility perspective, biennial monitoring was less costly and provided more QALYs than the NICE pathways, but was unlikely to be cost-effective compared with treating at diagnosis (£86 717 per additional QALY gained). The findings were robust to risk thresholds for initiating monitoring but were sensitive to treatment threshold, National Health Service costs and treatment adherence. For confirmed OHT, glaucoma monitoring more frequently than every 2 years is unlikely to be efficient. Primary treatment and minimal monitoring (assessing treatment responsiveness (IOP)) could be considered; however, further data to refine glaucoma risk prediction models and value patient preferences for treatment are needed. Consideration of innovative and affordable service redesign focused on treatment responsiveness rather than more glaucoma testing is recommended.
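The "£ per additional QALY gained" figure above is an incremental cost-effectiveness ratio (ICER); a minimal sketch with hypothetical costs and QALYs, not the model's outputs:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical pathway comparison: pounds spent and QALYs per patient
value = icer(cost_new=1000.0, cost_old=600.0, qaly_new=10.5, qaly_old=10.0)
# 400 extra pounds / 0.5 extra QALYs = 800 pounds per QALY
```

A pathway is usually judged cost-effective when its ICER falls below a willingness-to-pay threshold, which is why £86 717 per QALY was deemed unlikely to be cost-effective.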
Publisher: SAGE Publications
Date: 09-2021
Publisher: Wiley
Date: 12-1998
DOI: 10.1046/J.1440-1754.1998.00304.X
Abstract: Although not explicit, recommendations in the new edition of Therapeutic Guidelines: Antibiotic have taken a lurch towards an evidence basis. What does this mean, and what is the basis of the recommendation that antibiotics be used for sore throat in very limited circumstances?
Publisher: Elsevier BV
Date: 09-2007
DOI: 10.1016/J.JCLINEPI.2006.12.004
Abstract: In systematic reviews of intervention studies where randomization was done by individual but data are paired (such as eyes or ears), it is necessary to account for the natural clustering present. The Cochrane Handbook suggests treating these as examples of cluster randomized trials. An incorrect analysis (without adjustment) would usually overestimate the precision of the estimate. We discuss a simple method of adjustment that deals with this problem. From a cross-tabulation of the event being present on the "left" and "right" body part, we estimate the design effect, which is a measure of the inflation of the variance due to clustering. This estimate is then used to obtain an adjusted effect size per trial by reducing the number of events and the sample size in each intervention group. In a systematic review on Auto-inflation for Glue Ear, data on improvement were obtained for pairs of ears. The design effect obtained from these data was 1.25. In a meta-analysis, the weights given to the trials changed after adjustment from 33% to 11% in one case. In a systematic review, when dealing with paired data, it is possible to give adequate weighting to each trial using a simple adjusting method.
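A minimal sketch of the adjustment described, assuming the design effect takes the standard cluster form DE = 1 + (m - 1) * ICC with cluster size m = 2 for paired organs; the ICC and trial counts below are hypothetical (the paper estimates the design effect from the left/right cross-tabulation):

```python
def design_effect(icc, m=2):
    """Design effect for clusters of fixed size m; for pairs this is 1 + ICC."""
    return 1 + (m - 1) * icc

def adjust_for_clustering(events, n, de):
    """Shrink events and sample size by the design effect (effective sample size)."""
    return events / de, n / de

de = design_effect(icc=0.25)  # 1.25, matching the glue-ear example in the abstract
eff_events, eff_n = adjust_for_clustering(events=40, n=100, de=de)
```

Dividing both the events and the sample size by the design effect widens each trial's confidence interval, which is how the meta-analytic weights shrink after adjustment.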
Publisher: AMPCo
Date: 22-07-2021
DOI: 10.5694/MJA2.51182
Publisher: BMJ
Date: 02-2014
Publisher: Oxford University Press (OUP)
Date: 21-02-2012
DOI: 10.1093/NDT/GFS022
Abstract: Diabetes and chronic kidney disease (CKD) are both associated with an increased risk of cancer but it is unclear whether diabetes complicated by CKD further augments an individual's cancer risk. The aim of our study was to determine the association of CKD [defined as an estimated glomerular filtration rate (eGFR) < 60 mL/min] with the overall and site-specific risks of incident cancers among individuals with Type 2 diabetes. Cox proportional hazard regression models and competing risk analyses were used to examine the univariate and multivariate adjusted associations between reduced kidney function and the overall and site-specific risks of cancer in participants enrolled in the Action in Diabetes and Vascular disease: Preterax and Diamicron MR controlled evaluation (ADVANCE) trial. Over a median follow-up of 5.0 years, 700 malignant neoplasms occurred among the 11 140 participants (6.4%). There was no increase in overall cancer risk [adjusted hazard ratio: 1.07 (95% confidence interval: 0.89-1.29, P = 0.50)] or site-specific cancer risk for individuals with CKD (defined as eGFR < 60 mL/min) compared to those without CKD at baseline. These results were robust to multiple methods and thresholds used to estimate CKD. Mild to moderate CKD does not increase the risk of cancer in people with Type 2 diabetes. ADVANCE is registered with ClinicalTrials.gov (number NCT00145925).
Publisher: Elsevier BV
Date: 03-2021
Publisher: American College of Physicians
Date: 16-12-2008
Publisher: Oxford University Press (OUP)
Date: 02-09-2001
Publisher: University Library System, University of Pittsburgh
Date: 20-07-2021
Abstract: Objective: The decisions and processes that may compose a systematic search strategy have not been formally identified and categorized. This study aimed to (1) identify all decisions that could be made and processes that could be used in a systematic search strategy and (2) create a hierarchical framework of those decisions and processes. Methods: The literature was searched for documents or guides on conducting a literature search for a systematic review or other evidence synthesis. The decisions or processes for locating studies were extracted from eligible documents and categorized into a structured hierarchical framework. Feedback from experts was sought to revise the framework. The framework was revised iteratively and tested using recently published literature on systematic searching. Results: Guidance documents were identified from expert organizations and a search of the literature and Internet. Data were extracted from 74 eligible documents to form the initial framework. The framework was revised based on feedback from 9 search experts and further review and testing by the authors. The hierarchical framework consists of 119 decisions or processes sorted into 17 categories and arranged under 5 topics. These topics are “Skill of the searcher,” “Selecting information to identify,” “Searching the literature electronically,” “Other ways to identify studies,” and “Updating the systematic review.” Conclusions: The work identifies and classifies the decisions and processes used in systematic searching. Future work can now focus on assessing and prioritizing research on the best methods for successfully identifying all eligible studies for a systematic review.
Publisher: Oxford University Press (OUP)
Date: 2003
DOI: 10.1373/49.1.1
Abstract: Background: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. Objective: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy to allow readers to assess the potential for bias in the study and to evaluate its generalisability. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. Results: The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. The consensus meeting shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Publisher: BMJ
Date: 24-09-2008
DOI: 10.1136/BMJ.A1253
Publisher: Oxford University Press (OUP)
Date: 2003
DOI: 10.1373/49.1.7
Abstract: The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in healthcare.
Publisher: University of Toronto Press Inc. (UTPress)
Date: 09-2022
Abstract: BACKGROUND: Recent observational studies suggest that vaccines may have little effect in preventing infection with the Omicron variant of severe acute respiratory syndrome coronavirus 2. However, the observed effects may be confounded by patient factors, preventive behaviours, or differences in testing behaviour. To assess potential confounding, we examined differences in testing behaviour between unvaccinated and vaccinated populations. METHODS: We recruited 1,526 Australian adults for an online randomized study about coronavirus disease 2019 (COVID-19) testing in late 2021, collecting self-reported vaccination status and three measures of COVID-19 testing behaviour: testing in the past month, testing ever, and test intention if they woke with a sore throat. We examined the association between testing intentions and vaccination status in the trial’s baseline data. RESULTS: Of the 1,526 participants (mean age 31 y), 22% had a COVID-19 test in the past month and 61% ever; 17% were unvaccinated, 11% were partially vaccinated (one dose), and 71% were fully vaccinated (two or more doses). Fully vaccinated participants were twice as likely as those who were unvaccinated (relative risk [RR] 2.2, 95% CI 1.8 to 2.8, p < 0.001) to report positive COVID-19 testing intentions. Partially vaccinated participants had less positive intentions than fully vaccinated participants (RR 0.68, 95% CI 0.52 to 0.89, p < 0.001) but higher intentions than unvaccinated participants (RR 1.5, 95% CI 1.4 to 1.6, p = 0.002). DISCUSSION: Vaccination predicted greater COVID-19 testing intentions and would substantially bias observed vaccine effectiveness. To account for differential testing behaviours, test-negative designs are currently the preferred option, but their assumptions need more thorough examination.
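A minimal sketch of how a relative risk and its log-scale 95% CI are computed from 2x2 counts; the counts below are hypothetical, chosen only to reproduce an RR near the 2.2 reported above:

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """RR of an event in group 1 vs group 2, with a 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 660/1000 fully vaccinated vs 75/250 unvaccinated
# participants reporting positive testing intentions
rr, lo, hi = relative_risk(660, 1000, 75, 250)
```

Because the CI is built on the log scale and back-transformed, it is asymmetric around the RR, as in the intervals quoted in the abstract.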
Publisher: American Physical Society (APS)
Date: 30-09-2019
Publisher: European Respiratory Society (ERS)
Date: 03-2019
DOI: 10.1183/20734735.0354-2018
Abstract: Challenges in the diagnostic process of chronic obstructive pulmonary disease (COPD) can result in diagnostic misclassifications, including overdiagnosis. The term “overdiagnosis” in general has been associated with variable definitions. In connection with efforts to reduce low-value care, “overdiagnosis” has been defined as a true positive diagnosis of a condition that is not associated with any harm in the diagnosed person. It is, however, unclear how the term “overdiagnosis” is used in the COPD literature. We conducted a rapid review of the literature to explore how the terms “overdiagnosis” and “misdiagnosis” are used in the context of COPD. Electronic searches of Medline were conducted from inception to October 2018, to identify primary studies that reported on over- and/or misdiagnosis of COPD using these terms. 28 articles were included in this review. Overdiagnosis and misdiagnosis in COPD were found to be used to describe five main concepts: 1) physician COPD diagnosis despite normal spirometry (14 studies); 2) discordant results for COPD diagnosis based on different spirometry-based definitions for airflow obstruction (10 studies); 3) COPD diagnosis based on pre-bronchodilator spirometry results (three studies); 4) comorbidities (e.g. heart failure or asthma) that affect spirometry and have clinical features which overlap with COPD (two studies); and 5) normalisation of abnormal (post-bronchodilator) spirometry at follow-up (one study). The terms “overdiagnosis” and “misdiagnosis” were often used interchangeably and almost always referred to a false positive diagnosis. Performing (technically correct) spirometry with correct interpretation of the results could probably reduce misdiagnosis in a large proportion of the misdiagnosed cases of COPD. In addition, guidelines need to provide a more acceptable consensus spirometric definition of airflow obstruction. 
In the COPD literature, the terms “overdiagnosis” and “misdiagnosis” are often used interchangeably and almost always refer to a false positive diagnosis. Use of spirometry with correct interpretation of the results can avoid a substantial proportion of cases of misdiagnosis of COPD. To explore the use of the terms “overdiagnosis” and “misdiagnosis” in the COPD literature. To identify the main sources of overdiagnosis and misdiagnosis in COPD.
Publisher: Elsevier BV
Date: 03-2017
Publisher: BMJ
Date: 21-04-2011
Abstract: To determine the accuracy of using nitroglycerine as a 'test of treatment' in the diagnosis of cardiac chest pain we undertook a systematic review of studies of diagnostic accuracy. Databases searched included PubMed, Cochrane Database, Google Scholar, Science Citation Index, EMBASE and manual searching of bibliographies of known primary and review articles. Studies were included if sublingual nitroglycerine was the index test, its effect on the patient's pain score was recorded and the reference test was performed on at least 80% of patients. Five eligible studies were found, all in the acute setting (although one paper collected its data in the follow-up setting, all patients had acute presentations). The data from the five papers were used to form 2×2 contingency tables. The sensitivity ranged from 35% to 92% and the specificity from 12% to 63%. However, in all but one paper the Youden indices were close to zero, suggesting that the response to nitroglycerine is not useful as a diagnostic test. The combined sensitivity was 0.52 (95% CI 0.48 to 0.56) and combined specificity was 0.49 (95% CI 0.46 to 0.52). The diagnostic OR from the combined studies was 1.2 (95% CI 0.97 to 1.5), which is not significantly different from 1. In the acute setting, nitroglycerine is not a reliable test of treatment for use in the diagnosis of coronary artery disease. However, further studies are needed to determine the diagnostic accuracy of nitroglycerine for recurrent exertional chest pain.
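A minimal sketch of the sensitivity, specificity, Youden index, and diagnostic odds ratio computed from a 2×2 contingency table; the counts below are hypothetical, chosen to mirror the pooled estimates above:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard 2x2 accuracy metrics for an index test vs a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1        # ~0 means the test is uninformative
    dor = (tp * tn) / (fp * fn)     # diagnostic odds ratio; ~1 is uninformative
    return sens, spec, youden, dor

# Hypothetical counts: pain relieved by nitroglycerine (rows) vs
# cardiac cause confirmed by the reference test (columns)
sens, spec, youden, dor = diagnostic_metrics(tp=52, fn=48, fp=51, tn=49)
```

With sensitivity 0.52 and specificity 0.49, the Youden index sits near zero and the diagnostic odds ratio near 1, which is exactly why the review concludes the response to nitroglycerine is unreliable as a test.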
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 17-09-2013
DOI: 10.1161/CIRCULATIONAHA.113.002717
Abstract: Recent evidence suggests that visit-to-visit variability in systolic blood pressure (SBP) and maximum SBP are predictors of cardiovascular disease. However, it remains uncertain whether these parameters predict the risks of macrovascular and microvascular complications in patients with type 2 diabetes mellitus. The Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) was a factorial randomized controlled trial of blood pressure lowering and blood glucose control in patients with type 2 diabetes mellitus. The present analysis included 8811 patients without major macrovascular and microvascular events or death during the first 24 months after randomization. SBP variability (defined as standard deviation) and maximum SBP were determined during the first 24 months after randomization. During a median 2.4 years of follow-up from the 24-month visit, 407 major macrovascular (myocardial infarction, stroke, or cardiovascular death) and 476 microvascular (new or worsening nephropathy or retinopathy) events were observed. The association of major macrovascular and microvascular events with SBP variability was continuous even after adjustment for mean SBP and other confounding factors (both P < 0.05 for trend). Hazard ratios (95% confidence intervals) for the highest tenth of SBP variability were 1.54 (0.99–2.39) for macrovascular events and 1.84 (1.19–2.84) for microvascular events in comparison with the lowest tenth. For maximum SBP, hazard ratios (95% confidence intervals) for the highest tenth were 3.64 (1.73–7.66) and 2.18 (1.04–4.58), respectively. Visit-to-visit variability in SBP and maximum SBP were independent risk factors for macrovascular and microvascular complications in type 2 diabetes mellitus. URL: www.clinicaltrials.gov. Unique identifier: NCT00145925.
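Visit-to-visit SBP variability as defined above is simply the standard deviation of a patient's readings across scheduled visits, and maximum SBP the largest reading; a minimal sketch with hypothetical readings:

```python
import statistics

# Hypothetical SBP readings (mm Hg) for one patient across six visits
sbp_readings = [142, 155, 138, 160, 149, 152]

visit_to_visit_sd = statistics.stdev(sbp_readings)  # sample SD across visits
max_sbp = max(sbp_readings)
mean_sbp = statistics.mean(sbp_readings)
```

Patients are then ranked into tenths by these per-patient summaries, which is how the hazard-ratio comparisons above are formed.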
Publisher: Springer Science and Business Media LLC
Date: 23-04-2018
DOI: 10.1038/S41366-018-0067-4
Abstract: The objective of this study was to determine whether habit-based interventions are clinically beneficial in achieving long-term (12-month) weight loss maintenance and to explore whether making new habits or breaking old habits is more effective. Volunteer community members aged 18-75 years who had overweight or obesity (BMI ≥ 25 kg/m²) were eligible. Of the 130 participants assessed for eligibility, 75 adults (mean BMI 34.5 kg/m²) were included. Habit-based weight-loss interventions, forming new habits (TTT) and breaking old habits (DSD), resulted in clinically important weight-loss maintenance at 12-month follow-up.
Publisher: SAGE Publications
Date: 07-2003
DOI: 10.1258/000456303766476986
Abstract: Background: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in a study and to evaluate the generalizability of its results. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors and members of professional organizations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. Results: The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of the clinicians, researchers, reviewers, journals and the public.
Publisher: CSIRO Publishing
Date: 18-03-2021
DOI: 10.1071/AH20160
Abstract: Objectives Healthcare expenditure is growing at an unsustainable rate in developed countries. A recent scoping review identified several alternative healthcare delivery models with the potential to improve health system sustainability. Our objective was to obtain input and consensus from an expert Delphi panel about which alternative models they considered most promising for increasing value in healthcare delivery in Australia and to contribute to shaping a research agenda in the field. Methods The panel first reviewed a list of 84 models obtained through the preceding scoping review and contributed additional ideas in an open round. In a subsequent scoring round, the panel rated the priority of each model in terms of its potential to improve health care sustainability in Australia. Consensus was assumed when ≥50% of the panel rated a model as (very) high priority (consensus on high priority) or as not a priority or low priority (consensus on low priority). Results Eighty-two of 149 invited participants (55%) representing all Australian states/territories and wide expertise completed round one; 71 completed round two. Consensus on high priority was achieved for 59 alternative models; 14 were rated as (very) high priority by ≥70% of the panel. Top priorities included improving medical service provision in aged care facilities, providing single-point-access multidisciplinary care for people with chronic conditions and providing tailored early discharge and hospital at home instead of in-patient care. No consensus was reached on 47 models, but no model was deemed low priority. Conclusions Input from an expert stakeholder panel identified healthcare delivery models not previously synthesised in systematic reviews that are a priority to investigate. Strong consensus exists among stakeholders regarding which models require the most urgent attention in terms of (cost-)effectiveness research. 
These findings contribute to shaping a research agenda on healthcare delivery models and where stakeholder engagement in Australia is likely to be high. What is known about the topic? Healthcare expenditure is growing at an unsustainable rate in high-income countries worldwide. A recent scoping review of systematic reviews identified a substantial body of evidence about the effects of a wide range of models of healthcare service delivery that can inform health system improvements. Given the large number of systematic reviews available on numerous models of care, a method for gaining consensus on the models of highest priority for implementation (where evidence demonstrates this will lead to beneficial effects and resource savings) or for further research (where evidence about effects is uncertain) in the Australian context is warranted. What does this paper add? This paper describes a method for reaching consensus on high-priority alternative models of service delivery in Australia. Stakeholders with leadership roles in health policy and government organisations, hospital and primary care networks, academic institutions and consumer advocacy organisations were asked to identify and rate alternative models based on their knowledge of the healthcare system. We reached consensus among ≥70% of stakeholders that improving medical care in residential aged care facilities, providing single-point-access multidisciplinary care for patients with a range of chronic conditions and providing early discharge and hospital at home instead of in-patient stay for people with a range of conditions are of highest priority for further investigation. What are the implications for practitioners? Decision makers seeking to optimise the efficiency and sustainability of healthcare service delivery in Australia could consider the alternative models rated as high priority by the expert stakeholder panel in this Delphi study. 
These models reflect the most promising alternatives for increasing value in the delivery of health care in Australia based on stakeholders’ knowledge of the health system. Although they indicate areas where stakeholder engagement is likely to be high, further research is needed to demonstrate the effectiveness and cost-effectiveness of some of these models.
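The consensus rule stated in the abstract above (≥50% of the panel rating a model as (very) high priority, or as not/low priority) can be sketched as a simple classification function. This is an illustrative sketch only, not the study's analysis code; the function name and the 5-point priority encoding (1 = not a priority … 5 = very high priority) are assumptions.

```python
# Illustrative sketch of the Delphi consensus rule described above.
# Ratings are assumed to be on a 5-point scale:
# 1 = not a priority ... 4 = high priority, 5 = very high priority.

def classify_consensus(ratings):
    """Classify one model's panel ratings under the >=50% consensus rule."""
    n = len(ratings)
    high = sum(1 for r in ratings if r >= 4) / n   # (very) high priority
    low = sum(1 for r in ratings if r <= 2) / n    # not a priority / low
    if high >= 0.5:
        return "consensus: high priority"
    if low >= 0.5:
        return "consensus: low priority"
    return "no consensus"

# 3 of 5 panellists rate the model (very) high priority -> consensus.
print(classify_consensus([5, 4, 4, 3, 2]))
```

With this rule, a model can reach "no consensus" even when opinions cluster, which matches the study's finding that 47 models reached no consensus yet none was deemed low priority.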
Publisher: BMJ
Date: 24-10-2008
DOI: 10.1136/BMJ.A1499
Publisher: BMJ
Date: 31-07-2009
DOI: 10.1136/EBM.14.4.103
Publisher: Oxford University Press (OUP)
Date: 02-2013
DOI: 10.1093/JNCI/DJS649
Abstract: Cancer screening is widely practiced and participation is promoted by various social, technical, and commercial drivers, but there are growing concerns about the emerging harms, risks, and costs of cancer screening. Deliberative democracy methods engage citizens in dialogue on substantial and complex problems: especially when evidence and values are important and people need time to understand and consider the relevant issues. Information derived from such deliberations can provide important guidance to cancer screening policies: citizens' values are made explicit, revealing what really matters to people and why. Policy makers can see what informed, rather than uninformed, citizens would decide on the provision of services and information on cancer screening. Caveats can be elicited to guide changes to existing policies and practices. Policies that take account of citizens' opinions through a deliberative democracy process can be considered more legitimate, justifiable, and feasible than those that don't.
Publisher: BMJ
Date: 07-03-2014
DOI: 10.1136/BMJ.G1687
Abstract: Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.
Publisher: AMPCo
Date: 06-1995
DOI: 10.5694/J.1326-5377.1995.TB126047.X
Abstract: To carry out a systematic quality review and meta-analysis of all randomised trials of mammographic screening that included women aged under 50 years. Reports of randomised trials of mammographic screening were identified via MEDLINE and checks of the bibliographies of retrieved articles and reviews. Identified trials were assessed for: (i) method of randomisation; (ii) documented comparability of baseline data; (iii) standardised criteria for breast cancer death; (iv) blinded review of cause of death; (v) completeness of follow-up; and (vi) use of an "intention-to-treat analysis". Seven randomised trials including almost 160,000 women aged under 50 were studied. The combined estimate of relative risk was 0.95 (95% confidence interval, 0.77-1.18), a statistically non-significant reduction of 5%. Adjustment for the cluster randomisation of two trials, and for degree of compliance, did not substantially change this result. These analyses suggest little, if any, benefit for women under 50 years of age. The results are not explained by the quality of the trials or the radiology. We recommend that women in this age group intending to be screened should be fully informed of these results.
Publisher: BMJ
Date: 1998
DOI: 10.1136/HRT.79.1.7
Abstract: The unmanned surface vehicle (USV) has attracted increasing attention because of its basic ability to perform complex maritime tasks autonomously in constrained environments. However, the level of autonomy of a single USV is still limited, especially when deployed in a dynamic environment to perform multiple tasks simultaneously. Thus, a multi-USV cooperative approach can be adopted to obtain the desired success rate in the presence of multi-mission objectives. In this paper, we propose a cooperative navigation approach by enabling multiple USVs to automatically avoid dynamic obstacles and allocate target areas. To be specific, we propose a multi-agent deep reinforcement learning (MADRL) approach, i.e., a multi-agent deep deterministic policy gradient (MADDPG), to maximize the autonomy level by jointly optimizing the trajectory of USVs, as well as obstacle avoidance and coordination, which is a complex optimization problem usually solved separately. In contrast to other works, we combined dynamic navigation and area assignment to design a task management system based on the MADDPG learning framework. Finally, the experiments were carried out on the Gym platform to verify the effectiveness of the proposed method.
Publisher: Frontiers Media SA
Date: 14-01-2021
DOI: 10.3389/FPHAR.2020.577747
Abstract: Background: Cumulative anticholinergic exposure, also known as anticholinergic burden, is associated with a variety of adverse outcomes. However, studies show that anticholinergic effects tend to be underestimated by prescribers, and anticholinergics are the most frequently prescribed potentially inappropriate medication in older patients. The grading systems and drugs included in existing scales to quantify anticholinergic burden differ considerably and do not adequately account for patients’ susceptibility to medications. Furthermore, their ability to link anticholinergic burden with adverse outcomes such as falls is unclear. This study aims to develop a prognostic model that predicts falls in older general practice patients, to assess the performance of several anticholinergic burden scales, and to quantify the added predictive value of anticholinergic symptoms in this context. Methods: Data from two cluster-randomized controlled trials investigating medication optimization in older general practice patients in Germany will be used. One trial (RIME, n = 1,197) will be used for the model development and the other trial (PRIMUM, n = 502) will be used to externally validate the model. A priori, candidate predictors will be selected based on a literature search, predictor availability, and clinical reasoning. Candidate predictors will include socio-demographics (e.g. age, sex), morbidity (e.g. single conditions), medication (e.g. polypharmacy, anticholinergic burden as defined by scales), and well-being (e.g. quality of life, physical function). A prognostic model including sociodemographic and lifestyle-related factors, as well as variables on morbidity, medication, health status, and well-being, will be developed, whereby the prognostic value of extending the model to include additional patient-reported symptoms will also be assessed. Logistic regression will be used for the binary outcome, which will be defined as “no falls” vs. “≥1 fall” within six months of baseline, as reported in patient interviews. Discussion: As the ability of different anticholinergic burden scales to predict falls in older patients is unclear, this study may provide insights into their relative importance as well as into the overall contribution of anticholinergic symptoms and other patient characteristics. The results may support general practitioners in their clinical decision-making and in prescribing fewer medications with anticholinergic properties.
Publisher: CMA Impact Inc.
Date: 02-10-2023
Publisher: Elsevier BV
Date: 10-2006
Publisher: Elsevier BV
Date: 02-2014
DOI: 10.1016/J.JCLINEPI.2013.07.015
Abstract: To describe how evidence from trials and cohort studies may be used to guide choice of test for monitoring patients with chronic disease. Exploration of potential criteria for choosing the best monitoring test. Criteria are defined and options for assessment measures for test performance on each criterion discussed. Monitoring in clinical practice occurs in three main phases: before treatment, response to treatment, and long-term monitoring. Four important criteria may be used to choose the best test for monitoring a patient in each of these phases. Clinical validity describes the ability of the test to predict the clinically relevant outcome that we are trying to control or prevent. Responsiveness describes how much the test changes in response to an intervention relative to background random variation. Detectability of long-term change describes the size of changes in the test over the long term relative to background random variation. Practicality describes the ease of use, invasiveness, and cost of the test. Test performance generally requires longitudinal data from trial and/or cohort studies using statistical methods such as those discussed. Four specific criteria can help clinicians inform evidence-based decisions on which monitoring test to use.
Publisher: BMJ
Date: 15-11-2003
Publisher: Hogrefe Publishing Group
Date: 2002
Publisher: BMJ
Date: 02-2021
DOI: 10.1136/BMJOPEN-2020-043421
Abstract: Public cooperation to practise preventive health behaviours is essential to manage the transmission of infectious diseases such as COVID-19. We aimed to investigate beliefs about COVID-19 diagnosis, transmission and prevention that have the potential to impact the uptake of recommended public health strategies. An online cross-sectional survey. A national sample of 1500 Australian adults with representative quotas for age and gender provided by an online panel provider. Proportion of participants with correct/incorrect knowledge of COVID-19 preventive behaviours and reasons for misconceptions. Of the 1802 potential participants contacted, 289 did not qualify, 13 declined and 1500 participated in the survey (response rate 83%). Most participants correctly identified ‘washing your hands regularly with soap and water’ (92%) and ‘staying at least 1.5 m away from others’ (90%) could help prevent COVID-19. Over 40% (incorrectly) considered wearing gloves outside of the home would prevent them from contracting COVID-19. Views about face masks were divided. Only 66% of participants correctly identified that ‘regular use of antibiotics’ would not prevent COVID-19. Most participants (90%) identified ‘fever, fatigue and cough’ as indicators of COVID-19. However, 42% of participants thought that being unable to ‘hold your breath for 10 s without coughing’ was an indicator of having the virus. The most frequently reported sources of COVID-19 information were commercial television channels (56%), the Australian Broadcasting Corporation (43%) and the Australian Government COVID-19 information app (31%). Public messaging about hand hygiene and physical distancing to prevent transmission appears to have been effective. However, there are clear, identified barriers for many individuals that have the potential to impede uptake or maintenance of these behaviours in the long term. We need to develop public health messages that harness these barriers to improve future cooperation. 
Ensuring adherence to these interventions is critical.
Publisher: JMIR Publications Inc.
Date: 03-06-2019
DOI: 10.2196/13199
Publisher: Springer Science and Business Media LLC
Date: 07-02-2018
Publisher: BMJ
Date: 22-12-2008
DOI: 10.1136/EBN.12.1.7
Publisher: American Medical Association (AMA)
Date: 19-05-1999
Abstract: Clinicians can often find treatment recommendations in traditional narrative reviews and the discussion sections of original articles and meta-analyses. Making a treatment recommendation involves framing a question, identifying management options and outcomes, collecting and summarizing evidence, and applying value judgments or preferences to arrive at an optimal course of action. Each step in this process can be conducted systematically (thus protecting against bias) or unsystematically (leaving the process open to bias). Clinicians faced with a plethora of recommendations may wish to attend to those that are less likely to be biased. Therefore, we propose a hierarchy of rigor of recommendations to guide clinicians when judging the usefulness of particular recommendations. Recommendations with the highest rigor consider all relevant options and outcomes, include a comprehensive collection of the methodologically highest quality data with an explicit strategy for summarizing the data (that is, a systematic review), and make an explicit statement of the values or preferences involved in moving from evidence to action. High rigor recommendations come from systematically developed, evidence-based practice guidelines or rigorously conducted decision analyses. Systematic reviews, which typically do not consider all relevant options and outcomes or make the preferences underlying recommendations explicit, offer intermediate rigor recommendations. Traditional approaches in which the collection and assessment of evidence remains unsystematic, all relevant options and outcomes may not be considered, and values remain implicit, provide recommendations of weak rigor. In an era in which clinicians are barraged by recommendations as to how to manage their patients, this hierarchy provides a potentially useful set of guides.
Publisher: Wiley
Date: 30-05-2000
DOI: 10.1002/(SICI)1097-0258(20000530)19:10<1295::AID-SIM493>3.0.CO;2-Z
Abstract: A traditional measure of effect size associated with tests for difference between two groups is the variance explained by group membership (R²). If exposure to a disease causes a small but long term deficit in performance, however, R² does not capture that cumulating effect. We propose an alternative statistic, gamma, based on the probability of an unexposed person outperforming an exposed person. Although gamma is also a point estimate, it more easily conveys what the cumulating effect of a deficit would be. We discuss some of the advantages of this measure.
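The gamma-type measure described in this abstract, the probability that a randomly chosen unexposed person outperforms a randomly chosen exposed person, can be estimated empirically from two samples. The sketch below is illustrative only (not the authors' implementation); the function name, the tie-handling convention (ties count as half), and the scores are assumptions.

```python
# Hypothetical sketch of the probability-of-outperformance statistic
# described above: the fraction of all (unexposed, exposed) pairs in
# which the unexposed score is higher (ties counted as half).

def prob_outperform(unexposed, exposed):
    pairs = len(unexposed) * len(exposed)
    wins = sum(1.0 if u > e else 0.5 if u == e else 0.0
               for u in unexposed for e in exposed)
    return wins / pairs

# Hypothetical scores showing a small deficit in the exposed group.
print(prob_outperform([12, 14, 15, 17], [11, 12, 13, 16]))  # 0.71875
```

A value of 0.5 means no group difference; values above 0.5 indicate the unexposed group tends to outperform, which conveys a cumulative deficit more directly than an R² figure.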
Publisher: BMJ
Date: 10-08-2011
DOI: 10.1136/BMJ.D5043
Publisher: Springer Science and Business Media LLC
Date: 26-04-2018
DOI: 10.1007/S41114-018-0012-9
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg² requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.
Publisher: BMJ
Date: 02-2007
DOI: 10.1136/EBM.12.1.2-A
Publisher: Springer Science and Business Media LLC
Date: 04-07-2013
DOI: 10.1038/JHH.2013.54
Abstract: Although self-monitoring of blood pressure is common among people with hypertension, little is known about how general practitioners (GPs) use such readings. This survey aimed to ascertain current views and practice on self-monitoring of UK primary care physicians. An internet-based survey of UK GPs was undertaken using a provider of internet services to UK doctors. The hyperlink to the survey was opened by 928 doctors, and 625 (67%) GPs completed the questionnaire. Of them, 557 (90%) reported having patients who self-monitor, 191 (34%) had a monitor that they lend to patients, 171 (31%) provided training in self-monitoring for their patients and 52 (9%) offered training to other GPs. Three hundred and sixty-seven GPs (66%) recommended at least two readings per day, and 416 (75%) recommended at least 4 days of monitoring at a time. One hundred and eighty (32%) adjusted self-monitored readings to take account of lower pressures in out-of-office settings, and 10/5 mm Hg was the most common adjustment factor used. Self-monitoring of blood pressure was widespread among the patients of responding GPs. Although the majority used appropriate schedules of measurement, some GPs suggested much more frequent home measurements than usual. Further, interpretation of home blood pressure was suboptimal, with only a minority recognising that values for diagnosis and on-treatment target are lower than those for clinic measurement. Subsequent national guidance may improve this situation but will require adequate implementation.
Publisher: Springer Science and Business Media LLC
Date: 22-08-2005
Abstract: The Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) Study is examining the effects of long-term fibrate therapy on coronary heart disease (CHD) event rates in patients with diabetes mellitus. This article describes the trial's run-in phase and patients' baseline characteristics. FIELD is a double-blind, placebo-controlled trial in 63 centres in 3 countries evaluating the effects of fenofibrate versus placebo on CHD morbidity and mortality in 9795 patients with type 2 diabetes mellitus. Patients were to have no indication for lipid-lowering therapy on randomization, but could start these or other drugs at any time after randomization. Follow-up in the study was to be for a median duration of not less than 5 years and until 500 major coronary events (fatal coronary heart disease plus nonfatal myocardial infarction) had occurred. About 2100 patients (22%) had some manifestation of cardiovascular disease (CVD) at baseline and thus high risk status. Less than 25% of patients without CVD had a (UKPDS determined) calculated 5-year CHD risk of %, but nearly all had a 5-year stroke risk of %. Despite this, half of the cohort were obese (BMI ≥ 30), most were men, two-thirds were aged over 60 years, and substantial proportions had NCEP ATP III features of the metabolic syndrome independent of their diabetes, including low HDL (60%), high blood pressure measurement or treatment for hypertension (84%), high waist measurement (68%), and raised triglycerides (52%). After a 6-week run-in period before randomisation with all participants receiving 200 mg comicronized fenofibrate, there were declines in total and LDL cholesterol (10%) and triglycerides (26%) and an increase in HDL cholesterol (6.5%). The study will show the effect of PPAR-alpha agonist action on CHD and other vascular outcomes in patients with type 2 diabetes including substantial numbers with low to moderate CVD risk but with the various components of the metabolic syndrome. 
The main results of the study will be reported in late 2005.
Publisher: AMPCo
Date: 22-05-2019
DOI: 10.5694/MJA2.50197
Publisher: Elsevier BV
Date: 02-2023
Publisher: Elsevier BV
Date: 11-1992
DOI: 10.1016/0895-4356(92)90166-K
Abstract: Randomized controlled trials are usually analysed by the group to which the patient was randomized, i.e. by "intention-to-treat", regardless of the degree of compliance. However, the "explanatory" effect, i.e. the effect that would occur if we had 100% compliance, is often of interest. This "explanatory" effect is diluted by poor compliance, and hence meta-analyses should ideally avoid both the heterogeneity of effect due to variation in compliance rates among studies, and the undeserved weight given to trials with poor compliance. Newcombe's deattenuation method, which adjusts estimates for the degree of compliance, is extended and applied to a meta-analysis of the five reported randomized controlled trials of mammographic screening. Compliance with screening varied across studies: from 61 to 93% assigned to screening had one or more mammograms. The adjusted estimate of the reduction in breast cancer mortality at 9 years follow-up is 0.37 (95% confidence interval: 0.21, 0.49).
Publisher: Springer Science and Business Media LLC
Date: 22-07-2112
Publisher: American Diabetes Association
Date: 13-10-2012
DOI: 10.2337/DC12-0306
Abstract: Although low HDL cholesterol (HDL-C) is an established risk factor for atherosclerosis, data on HDL-C and the risk of microvascular disease are limited. We tested the association between HDL-C and microvascular disease in a cohort of patients with type 2 diabetes. A total of 11,140 patients with type 2 diabetes and at least one additional vascular risk factor were followed a median of 5 years. Cox proportional hazards models were used to assess the association between baseline HDL-C and the development of new or worsening microvascular disease, defined prospectively as a composite of renal and retinal events. The mean baseline HDL-C level was 1.3 mmol/L (SD 0.45 mmol/L [range 0.1–4.0]). During follow-up, 32% of patients developed new or worsening microvascular disease, with 28% experiencing a renal event and 6% a retinal event. Compared with patients in the highest third, those in the lowest third had a 17% higher risk of microvascular disease (adjusted hazard ratio 1.17 [95% CI 1.06–1.28], P = 0.001) after adjustment for potential confounders and regression dilution. This was driven by a 19% higher risk of renal events (1.19 [1.08–1.32], P = 0.0005). There was no association between thirds of HDL-C and retinal events (1.01 [0.82–1.25], P = 0.9). In patients with type 2 diabetes, HDL-C level is an independent risk factor for the development of microvascular disease affecting the kidney but not the retina.
Publisher: Public Library of Science (PLoS)
Date: 08-04-2011
Publisher: Elsevier BV
Date: 08-2022
DOI: 10.1016/J.JCLINEPI.2022.04.022
Abstract: Methods to quantify overdiagnosis of screen detected cancer have been developed, but methods for quantifying overdiagnosis of noncancer conditions (whether symptomatic or asymptomatic) have been lacking. We aimed to develop a methodological framework for quantifying overdiagnosis that may be used for asymptomatic or symptomatic conditions and used gestational diabetes mellitus as an example of how it may be applied. We identify two earlier definitions for overdiagnosis, a narrower prognosis-based definition and a wider utility-based definition. Building on the central importance of the concepts of prognostic information and clinical utility of a diagnosis, we consider the following questions: within a target population, do people found to have a disease using one diagnostic strategy but found not to have the disease using another diagnostic strategy (so-called 'additional diagnoses'), have an increased risk of adverse clinical outcomes without treatment (prognosis evidence), and/or a decreased risk of adverse outcomes with treatment (utility evidence)? Using Causal Directed Acyclic Graphs and fair umpires, we illuminate the relationships between diagnostic strategies and the frequency of overdiagnosis. We then use the example of gestational diabetes mellitus to demonstrate how the Fair Umpire framework may be applied to estimate overdiagnosis. Our framework may be used to quantify overdiagnosis in noncancer conditions (and in cancer conditions) and to guide further studies on this topic.
Publisher: BMJ
Date: 24-06-2009
DOI: 10.1136/EBN.12.3.71
Publisher: AMPCo
Date: 03-2002
Publisher: BMJ
Date: 30-03-2011
Publisher: Elsevier BV
Date: 2013
Publisher: John Wiley & Sons, Ltd
Date: 07-10-2009
Publisher: John Wiley & Sons, Ltd
Date: 18-10-2006
Publisher: Springer Science and Business Media LLC
Date: 13-04-2010
Publisher: Springer Science and Business Media LLC
Date: 20-08-2015
Publisher: AMPCo
Date: 11-1997
Publisher: JMIR Publications Inc.
Date: 05-05-2014
DOI: 10.2196/JMIR.3190
Publisher: Elsevier BV
Date: 05-2012
Abstract: To identify the optimal interval for repeat prostate-specific antigen (PSA) testing to screen for prostate cancer in healthy adults. A retrospective cohort study was conducted on 7332 healthy males without prostate cancer at baseline from 2005 to 2008. Participants underwent annual health checkups including PSA testing at the Center for Preventive Medicine in Japan. Participants with high PSA (≥ 4.0 ng/ml) underwent further examination for prostate cancer. A subgroup analysis was conducted by age group (<50 years, ≥ 50 years). Mean age was 50 years. Mean PSA at baseline was 1.2 ng/ml. In the over-50-year group, for those with initial PSA of <1.0, 1.0-1.9, 2.0-2.9, and 3.0-3.9 ng/ml at baseline, the 3-year cumulative incidence of prostate cancer was 0%, 0.1%, 0.3%, and 5.7%, respectively. No prostate cancer was identified in those <50 years. Men with PSA of 3.0-3.9 ng/ml at baseline should undergo rescreening at 2 years. For men with PSA <3.0 ng/ml, PSA rescreening at intervals of ≥ 3 years is appropriate. PSA screening may not be indicated in males of <50 years of age.
Publisher: John Wiley & Sons, Ltd
Date: 07-10-2009
Publisher: Royal College of General Practitioners
Date: 06-2013
Publisher: American College of Physicians
Date: 17-05-2016
Publisher: BMJ
Date: 14-06-2009
Abstract: To estimate the long-term true change variation ('signal') and short-term within-person variation ('noise') of the different lipid measures and evaluate the best measure and the optimal interval for lipid re-screening. Retrospective cohort study from 2005 to 2008. A medical health check-up programme at a centre for preventive medicine in a teaching hospital in Tokyo, Japan. 15 810 apparently healthy Japanese adults not taking cholesterol-lowering drugs at baseline, with a mean body mass index of 22.5 kg/m² (SD 3.2). Annual measurement of the serum total cholesterol (TC), low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and calculation of the ratio of TC/HDL and LDL/HDL. Measurement of the ratio of long-term true change variation ('signal') to the short-term within-person variation ('noise') for each measure. At baseline, participants (53% male) with a mean age of 49 years (range 21-92) and a mean TC level of 5.3 mmol/l (SD 0.9 mmol/l) had annual check-ups over 4 years. Short-term within-person variations of TC, LDL, HDL, TC/HDL, and LDL/HDL were 0.12 (coefficient of variation (CV) 6.4%), 0.08 (CV 9.4%), 0.02 (CV 8.0%) mmol²/l², 0.08 (CV 7.9%) and 0.05 (CV 10.6%), respectively. The ratio of signal-to-noise at 3 years was largest for TC/HDL (1.6), followed by LDL/HDL (1.5), LDL (0.99), TC (0.8) and HDL (0.7), suggesting that cholesterol ratios are more sensitive re-screening measures. The signal-to-noise ratios of standard single lipid measures (TC, LDL and HDL) are weak over 3 years and decisions based on these measures are potentially misleading. The ratios, TC/HDL and LDL/HDL, seem to be better measures for monitoring assessments. The lipid re-screening interval should be >3 years for those not taking cholesterol-lowering drugs.
Publisher: Elsevier BV
Date: 03-2020
Publisher: SAGE Publications
Date: 05-2009
Publisher: BMJ
Date: 18-08-2005
Publisher: Springer Science and Business Media LLC
Date: 18-06-2019
Publisher: Wiley
Date: 31-05-2013
Publisher: BMJ
Date: 20-04-2002
Publisher: AMPCo
Date: 22-04-2021
DOI: 10.5694/MJA2.51037
Publisher: Public Library of Science (PLoS)
Date: 05-04-2023
DOI: 10.1371/JOURNAL.PONE.0284168
Abstract: Half the US population uses drugs with anticholinergic properties. Their potential harms may outweigh their benefits. Amitriptyline is among the most frequently prescribed anticholinergic medicinal products, is used for multiple indications, and rated as strongly anticholinergic. Our objective was to explore and quantify (anticholinergic) adverse drug reactions (ADRs) in patients taking amitriptyline vs. placebo in randomized controlled trials (RCTs) involving adults and healthy individuals. We searched electronic databases from their inception until 09/2022, and clinical trial registries from their inception until 09/2022. We also performed manual reference searches. Two independent reviewers selected RCTs with ≥100 participants of ≥18 years, that compared amitriptyline (taken orally) versus placebo for all indications. No language restrictions were applied. One reviewer extracted study data, ADRs, and assessed study quality, which two others verified. The primary outcome was frequency of anticholinergic ADRs as a binary outcome (absolute number of patients with/without anticholinergic ADRs) in amitriptyline vs. placebo groups. Twenty-three RCTs (mean dosage 5 mg to 300 mg amitriptyline/day) and 4217 patients (mean age 40.3 years) were included. The most frequently reported anticholinergic ADRs were dry mouth, drowsiness, somnolence, sedation, fatigue, constitutional, and unspecific anticholinergic ADRs. Random-effects meta-analyses showed anticholinergic ADRs had a higher odds ratio for amitriptyline versus placebo (OR = 7.41 [95% CI, 4.54 to 12.12]). Non-anticholinergic ADRs were as frequent for amitriptyline as placebo. Meta-regression analysis showed anticholinergic ADRs were not dose-dependent. The large OR in our analysis shows that ADRs indicative of anticholinergic activities can be attributed to amitriptyline. The low average age of participants in our study may limit the generalizability of the frequency of anticholinergic ADRs in older patients. 
A lack of dose-dependency may reflect limited reporting of the daily dosage when the ADRs occurred. The exclusion of small studies (<100 participants) decreased heterogeneity between studies, but may also have reduced our ability to detect rare events. Future studies should focus on older people, as they are more susceptible to anticholinergic ADRs. PROSPERO: CRD42020111970.
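For readers unfamiliar with the headline statistic in the abstract above, an odds ratio for ADR frequency is computed from a 2×2 table of counts. The sketch below uses made-up counts, not the study's data, and a simple Wald interval rather than the random-effects meta-analysis the authors performed; the function name is an assumption.

```python
import math

# Illustrative only: odds ratio of an adverse drug reaction (ADR) from a
# single 2x2 table, with a Wald 95% confidence interval on the log scale.
# Counts are hypothetical and are NOT taken from the study.
def odds_ratio_ci(a, b, c, d):
    """a/b: ADR yes/no on drug; c/d: ADR yes/no on placebo."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(60, 40, 20, 80)    # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A pooled random-effects estimate, as reported in the study, additionally weights each trial's log-OR by its within- and between-study variance; the per-table computation above is only the building block.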
Publisher: BMJ
Date: 30-03-2015
DOI: 10.1136/BMJ.H1566
Publisher: Springer Science and Business Media LLC
Date: 08-2013
DOI: 10.1038/500395A
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2005
DOI: 10.1097/00045391-200501000-00012
Abstract: To assess the impact of individualized medication effectiveness tests (IMETs, or n-of-1 trials) on patients' short-term decision making about medications for chronic pain. Survey evaluation of patients undergoing a double-blind, crossover comparison of drug versus placebo, drug versus drug, or drug versus drug combination using paracetamol and ibuprofen in 3 pairs of treatment periods, randomized within pairs. General practice patients (supplemented by a few from 2 tertiary pain clinics) with either chronic pain (≥3 months), or osteoarthritis (with pain for ≥1 month) severe enough to warrant consideration of long-term nonsteroidal anti-inflammatory drug (NSAID) use but for whom there was doubt about the efficacy of NSAID or alternative. Pain and stiffness in sites nominated by the patient, global pain, use of escape analgesia, and side effects. Of 116 IMETs started, 71 were completed. Drug management changed for 46 of 71 (65%). The most common change was to add paracetamol or to substitute the NSAID or COX-2 inhibitor with paracetamol (25 of 71 patients and 54% of changes). Of the 37 who were using NSAIDs or COX-2 inhibitors before the IMET, 12 (32%) ceased these afterward. Paracetamol was as effective or more effective than ibuprofen in 37 (68%) of the 54 IMETs directly comparing these drugs. IMETs provide useful information for clinical decisions. Paracetamol continues to be useful for patients with chronic pain whose optimal drug choice is in doubt. Our results provide a new (individual) perspective on the well-known recommendation for paracetamol as first-line treatment for chronic pain and demonstrate that it is feasible to provide IMETs nationally by mail and telephone.
Publisher: BMJ
Date: 02-2006
DOI: 10.1136/EBM.11.1.7
Publisher: American College of Physicians
Date: 16-09-2008
Publisher: Springer Science and Business Media LLC
Date: 04-02-2019
Publisher: Massachusetts Medical Society
Date: 09-10-2014
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2004
Publisher: BMJ
Date: 20-09-2007
Publisher: JMIR Publications Inc.
Date: 30-07-2020
DOI: 10.2196/17447
Abstract: The ubiquity of smartphones and health apps makes them a potential self-management tool for patients that could be prescribed by medical professionals. However, little is known about how Australian general practitioners and their patients view the possibility of prescribing mobile health (mHealth) apps as a nondrug intervention. This study aimed to determine barriers and facilitators to prescribing mHealth apps in Australian general practice from the perspective of general practitioners and their patients. We conducted semistructured interviews in Australian general practice settings with purposively sampled general practitioners and patients. The audio-recorded interviews were transcribed, coded, and thematically analyzed by two researchers. Interview participants included 20 general practitioners and 15 adult patients. General practitioners’ perceived barriers to prescribing apps included a generational difference in the digital propensity of providers and patients; lack of knowledge of prescribable apps and trustworthy sources to access them; the time commitment required of providers and patients to learn and use the apps; and concerns about privacy, safety, and trustworthiness of health apps. General practitioners perceived trustworthy sources for accessing prescribable apps and information, the younger generation, and widespread smartphone ownership as facilitators. For patients, the main barriers were older age and the usability of mHealth apps. Patients were not concerned about privacy and data safety issues regarding health app use. Facilitators for patients included the ubiquity of smartphones and apps, especially for the younger generation, and the recommendation of apps by doctors. We identified evidence of effectiveness as an independent theme from both the provider and patient perspectives. mHealth app prescription appears to be feasible in general practice.
The barriers and facilitators identified by the providers and patients overlapped, though privacy was of less concern to patients. The involvement of health professionals and patients is vital for the successful integration of effective, evidence-based mHealth apps with clinical practice.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 05-2005
Publisher: Springer Science and Business Media LLC
Date: 12-11-2019
DOI: 10.1186/S12888-019-2337-7
Abstract: Widening definitions of health conditions have the potential to affect millions of people and should only occur when there is strong evidence of benefit. In the last version of the Diagnostic and Statistical Manual of Mental Disorders (DSM), the DSM-5 Committee changed the Attention Deficit Hyperactivity Disorder (ADHD) age of onset criterion in two ways: raising the age of symptom onset and removing the requirement for symptoms to cause impairment. Given concerns about ADHD prevalence and treatment rates, we aimed to evaluate the evidence available to support these changes using a recently developed Checklist for Modifying Disease Definitions. We identified and analysed research informing changes to the DSM-IV-TR ADHD age of onset criterion. We compared this evidence to the evidence recommended in the Checklist for Modifying Disease Definitions. The changes to the DSM-IV-TR age of onset criterion were based on a literature review (publicly available as a 2-page document with an online table of included studies), which we appraised as at high risk of bias. Estimates of the change in ADHD prevalence resulting from the change to the age of onset criterion were based on a single study that included only a small number of children with ADHD (n = 68) and only assessed the impact of the change to the age component of the criterion. No evidence was used by, or available to, the Committee regarding the impact on prevalence of removing the requirement for impairment, or the effect of the criterion changes on diagnostic precision, the prognosis of, or the potential benefits or harms for, individuals diagnosed by the new but not the old criterion. The changes to the age of onset criterion were based on minimal research evidence that suffered from either high risk of bias or poor applicability. The minimal documentation available makes it difficult to judge the rigor of the process behind the criterion changes.
Use of the Checklist for Modifying Disease Definitions would assist future proposed modifications of the DSM ADHD criteria, provide guidance on the studies needed to inform potential changes and would improve the transparency and documentation of the process.
Publisher: Springer Science and Business Media LLC
Date: 25-04-2018
Publisher: Massachusetts Medical Society
Date: 13-02-2003
DOI: 10.1056/NEJMOA021716
Publisher: Wiley
Date: 21-04-2015
DOI: 10.1002/IJC.29538
Publisher: AMPCo
Date: 1989
DOI: 10.5694/J.1326-5377.1989.TB136344.X
Abstract: Distance running performance is a viable model of human locomotion. To evaluate the physiologic strain during competitions ranging from 5-100 km, we evaluated heart rate (HR) records of competitive runners (n = 211). We found evidence that: 1) physiologic strain (% of maximum HR (%HRmax)) increased in a proportional manner relative to distance completed, and was regulated by variations in running pace; 2) the %HRmax achieved decreased with relative distance; 3) slower runners had similar %HRmax responses within a racing distance compared to faster runners, and despite differences in pace, the profile of %HRmax during a race was very similar in runners of differing ability; and 4) in cases where there was a discontinuity in the running performance, there was evidence that physiologic effort was maintained for some time even after the pace had decreased. The overall results suggest that athletes are actively regulating their relative physiologic strain during competition, although there is evidence of poor regulation in the case of competitive failures.
Publisher: American Society of Clinical Oncology (ASCO)
Date: 20-10-2016
Abstract: Differentiated thyroid cancer (DTC) incidence has been reported to have increased three- to 15-fold in the past few decades. It is unclear whether this represents overdiagnosis or a true increase in incidence. Therefore, the current study aimed to estimate the prevalence of incidental DTC in published autopsy series and determine whether this prevalence has been increasing over time. PubMed, Embase, and Web of Science were searched from inception to December 2015 for relevant studies. Two authors searched for all autopsy studies that had included patients with no known history of thyroid pathology and reported the prevalence of incidental DTC (iDTC). Two authors independently extracted the data, and discrepancies were resolved by another author. The pooled prevalence of iDTC was assessed using a fixed-effects meta-analysis model with robust error variance. The time effect was studied using an inverse-variance weighted logit-linear regression model with robust error variance and a time variable. Thirty-five studies, conducted between 1949 and 2007, met the inclusion criteria and contributed 42 data sets and 12,834 autopsies. The prevalence of iDTC among the partial and whole examination subgroups was 4.1% (95% CI, 3.0% to 5.4%) and 11.2% (95% CI, 6.7% to 16.1%), respectively. Once the intensiveness of thyroid examination was accounted for in the regression model, the prevalence odds ratio stabilized from 1970 onward, and no time effect was observed. The current study confirms that iDTC is common, but the observed increasing incidence is not mirrored by prevalence within autopsy studies and, therefore, is unlikely to reflect a true population-level increase in tumorigenesis. This strongly suggests that the current increasing incidence of iDTC most likely reflects diagnostic detection increasing over time.
Publisher: BMJ
Date: 05-2023
DOI: 10.1136/BMJOPEN-2023-072248
Abstract: Consistent evidence shows pathology services are overused worldwide and that about one-third of testing is unnecessary. Audit and feedback (AF) is effective for improving care but few trials evaluating AF to reduce pathology test requesting in primary care have been conducted. The aim of this trial is to estimate the effectiveness of AF for reducing requests for commonly overused pathology test combinations by high-requesting Australian general practitioners (GPs) compared with no intervention control. A secondary aim is to evaluate which forms of AF are most effective. This is a factorial cluster randomised trial conducted in Australian general practice. It uses routinely collected Medicare Benefits Schedule data to identify the study population, apply eligibility criteria, generate the interventions and analyse outcomes. On 12 May 2022, all eligible GPs were simultaneously randomised to either no intervention control or to one of eight intervention groups. GPs allocated to an intervention group received individualised AF on their rate of requesting of pathology test combinations compared with their GP peers. Three separate elements of the AF intervention will be evaluated when outcome data become available on 11 August 2023: (1) invitation to participate in continuing professional development-accredited education on appropriate pathology requesting, (2) provision of cost information on pathology test combinations and (3) format of feedback. The primary outcome is the overall rate of requesting of any of the displayed combinations of pathology tests of GPs over 6 months following intervention delivery. With 3371 clusters, assuming no interaction and similar effects for each intervention, we anticipate over 95% power to detect a difference of 4.4 requests in the mean rate of pathology test combination requests between the control and intervention groups.
Ethics approval was received from the Bond University Human Research Ethics Committee (#JH03507 approved 30 November 2021). The results of this study will be published in a peer-reviewed journal and presented at conferences. Reporting will adhere to Consolidated Standards of Reporting Trials. ACTRN12622000566730.
Publisher: American Physical Society (APS)
Date: 20-11-2019
Publisher: Elsevier BV
Date: 02-2006
Publisher: Springer Science and Business Media LLC
Date: 06-2006
Publisher: No publisher found
Date: 2011
Publisher: John Wiley & Sons, Ltd
Date: 18-10-2006
Publisher: Elsevier BV
Date: 04-2022
DOI: 10.1016/J.CANEP.2021.102093
Abstract: Population trends in PSA testing and prostate cancer incidence do not perfectly correspond. We aimed to better understand relationships between trends in PSA testing, prostate cancer incidence and mortality in Australia and factors that influence them. We calculated and described standardised time trends in PSA tests, prostate biopsies, treatment of benign prostatic hypertrophy (BPH) and prostate cancer incidence and mortality in Australia in men aged 45-74, 75-84, and 85+ years. PSA testing increased from its introduction in 1989 to a peak in 2008 before declining in men aged 45-84 years. Prostate biopsies and cancer incidence fell from 1995 to 2000 in parallel with a decrease in trans-urethral resections of the prostate (TURP) and, latterly, changes in pharmaceutical management of BPH. After 2000, changes in biopsies and incidence paralleled changes in PSA screening in men 45-84 years, while in men ≥85 years biopsy rates stabilised and incidence fell. Prostate cancer mortality in men aged 45-74 years remained low throughout. Mortality in men 75-84 years gradually increased until the mid-1990s, then gradually decreased. Mortality in men ≥85 years increased until the mid-1990s, then stabilised. Age-specific prostate cancer incidence largely mirrors PSA testing rates. Most deviation from this pattern may be explained by less use of TURP in the management of BPH and consequently less incidental cancer detection in TURP tissue specimens. Mortality from prostate cancer initially rose and then fell below what it was when PSA testing began. Its initial rise and fall may be explained by a possible initial tendency to over-attribute deaths of uncertain cause in older men with a diagnosis of prostate cancer to prostate cancer. Decreases in mortality rates were manyfold smaller than the increases in incidence, suggesting substantial overdiagnosis of prostate cancer after the introduction of PSA testing.
Publisher: Elsevier BV
Date: 06-1995
Publisher: American Diabetes Association
Date: 21-06-2010
DOI: 10.2337/DC10-0588
Abstract: To evaluate the optimal interval for rechecking A1C levels below the diagnostic threshold of 6.5% for healthy adults. This was a retrospective cohort study. Participants were 16,313 apparently healthy Japanese adults not taking glucose-lowering medications at baseline. Annual A1C measures from 2005 to 2008 at the Center for Preventive Medicine, a community teaching hospital in Japan, estimated cumulative incidence of diabetes. Mean age (±SD) of participants was 49.7 ± 12.3 years, and 53% were male. Mean A1C at baseline was 5.4 ± 0.5%. At 3 years, for those with A1C at baseline of <5.0%, 5.0–5.4%, 5.5–5.9%, and 6.0–6.4%, cumulative incidence (95% CI) was 0.05% (0.001–0.3), 0.05% (0.01–0.11), 1.2% (0.9–1.6), and 20% (18–23), respectively. In those with an A1C <6.0%, rescreening at intervals shorter than 3 years identifies few individuals (∼≤1%) with an A1C ≥6.5%.
Publisher: Public Library of Science (PLoS)
Date: 23-01-2023
DOI: 10.1371/JOURNAL.PONE.0280907
Abstract: Anticholinergic burden has been associated with adverse outcomes such as falls. To date, no gold standard measure has been identified to assess anticholinergic burden, and no conclusion has been drawn on which of the different measure algorithms best predicts falls in older patients from general practice. This study compared the ability of five measures of anticholinergic burden to predict falls. To account for patients’ individual susceptibility to medications, the added predictive value of typical anticholinergic symptoms was further quantified in this context. To predict falls, models were developed and validated based on logistic regression models created using data from two German cluster-randomized controlled trials. The outcome was defined as “≥ 1 fall” vs. “no fall” within a 6-month follow-up period. Data from the RIME study (n = 1,197) were used in model development, and from PRIMUM (n = 502) for external validation. The models were developed step-wise in order to quantify the predictive ability of anticholinergic burden measures and anticholinergic symptoms. In the development set, 1,015 patients had complete data and 188 (18.5%) experienced ≥ 1 fall within the 6-month follow-up period. The overall predictive value of the five anticholinergic measures was limited, with neither the employed anticholinergic variable (binary / count / burden), nor dose-dependent or dose-independent measures, differing significantly in their ability to predict falls. The highest c-statistic was obtained using the German Anticholinergic Burden Score (0.73), whereby the optimism-corrected c-statistic was 0.71 after internal validation using bootstrapping and 0.63 in the external validation. Previous falls and dizziness / vertigo had the strongest prognostic value in all models.
The ability of anticholinergic burden measures to predict falls does not appear to differ significantly, and the added value they contribute to risk classification in fall-prediction models is limited. Previous falls and dizziness / vertigo contributed most to model performance.
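As an illustrative aside, the c-statistic reported in the abstract above can be computed directly as the proportion of faller/non-faller pairs in which the faller received the higher predicted risk. This is a minimal sketch of that definition only; the function name and inputs are hypothetical and it is not the study's actual code:

```python
def c_statistic(y_true, y_score):
    """Concordance (c-) statistic: the probability that a randomly
    chosen positive case (faller) receives a higher predicted risk
    than a randomly chosen negative case (non-faller).
    Ties in predicted risk count as half-concordant."""
    pairs = concordant = ties = 0
    for yi, si in zip(y_true, y_score):
        for yj, sj in zip(y_true, y_score):
            if yi == 1 and yj == 0:  # one positive/negative pair
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs
```

A value of 0.5 indicates discrimination no better than chance, and 1.0 indicates perfect separation of fallers from non-fallers; the optimism correction via bootstrapping used in the study is not reproduced here.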
Publisher: Wiley
Date: 03-2022
Abstract: To identify challenges faced by Australian hospital healthcare staff during the COVID‐19 pandemic. We conducted an online survey (30 June–15 August 2020) of healthcare staff from Australian emergency and infectious disease departments. Participants were contacted via professional organisations and asked about preparedness, personal protective equipment (PPE), information flow, patient care, infection concerns, workload and mental health. We calculated the proportion of answers to yes/no and Likert‐style questions; free‐text responses were analysed thematically. Respondents (n = 162) were 23–67 years old, 98% worked in EDs, 68% were female, 87% were from Queensland, and most worked as nurses (46%) or specialists (31%). Respondents felt their workplace was prepared for the pandemic (79%) and had sufficient information about PPE (83%); none were sent home because of PPE shortages. Eighty‐five percent received sufficient information from official bodies, and 50% were aware of the National COVID‐19 Clinical Evidence Taskforce guidelines. Most (83%) had sufficient information to provide optimal patient care, but 24% experienced unfair/abusive patient behaviour. Most (76%) were concerned about becoming infected by patients, 67% about infecting patients, and 78% about infecting someone at home. Workload decreased for 82%, but 42% looked after more patients. Fifty‐seven percent experienced additional work‐related stress, with 60% reporting anxiety and 53% burnout, and 36% and 46% continuing to experience these, respectively. Key challenges included emotional, workplace/organisational, family/loved ones and PPE factors. The Australian system provided sufficient information and PPE. Staff experienced considerable stress, infection concerns and emotional challenges, which merit consideration in preparing for the future.
Publisher: BMJ
Date: 02-2008
DOI: 10.1136/EBM.13.1.3
Publisher: Wiley
Date: 19-01-2022
DOI: 10.5694/MJA2.51388
Publisher: JMIR Publications Inc.
Date: 14-06-2023
Abstract: Telehealth (the provision of healthcare via telephone or video) has been used for healthcare delivery for decades, but the COVID-19 pandemic greatly accelerated the uptake of telehealth in many care settings globally. Given the now widespread use of telehealth and the predominance of telephone over video consultation, it is important to compare the effectiveness and acceptability of telehealth delivered via telephone to video. To identify and synthesise randomised controlled trials that compare synchronous telehealth consultations delivered by telephone versus video. PubMed (MEDLINE), Embase, and CENTRAL via the Cochrane Library were searched from inception until 10 Feb 2023 for randomised controlled trials. Forward and backward citation searches were conducted on included randomised controlled trials. The Cochrane Risk of Bias 2 tool was used to assess the quality of the studies. Sixteen randomised controlled trials – 10 in the United States, 3 in the UK, 2 in Canada, and 1 in Australia – involving 1719 participants were included in the qualitative and quantitative analyses. Most of the telehealth interventions were for hospital-based outpatient follow-ups, monitoring, and rehabilitation (n = 13). The 3 studies that were conducted in the community all studied smoking cessation. In half of the studies, nurses delivered the care (n = 8). Almost all included studies had high or unclear risk of bias, mainly due to bias in the randomization process and selection of reported results. The trials found no substantial differences between telephone and video telehealth consultations on clinical effectiveness, patient satisfaction, and healthcare use (cost effectiveness) outcomes. None of the studies reported on patient safety or adverse events. We did not find any studies of telehealth interventions for diagnosis or initiating new treatment, nor any set in primary care.
Based on a small set of diverse trials, we found no important differences between telephone and video consultations for the management of patients with an established diagnosis. The protocol was registered on Open Science Framework: osf.io/74wxf
Publisher: Wiley
Date: 21-05-2002
DOI: 10.1002/SIM.1183
Abstract: What causes heterogeneity in systematic reviews of controlled trials? First, it may be an artefact of the summary measures used or of study design features such as duration of follow-up or the reliability of outcome measures. Second, it may be due to real variation in the treatment effect and hence provides the opportunity to identify factors that may modify the impact of treatment. These factors may include features of the population, such as severity of illness, age and gender; intervention factors, such as dose, timing or duration of treatment; and comparator factors, such as the control group treatment or the co-interventions in both groups. The ideal way to study causes of true variation is within rather than between studies. In most situations, however, we will have to make do with a study-level investigation and hence need to be careful about adjusting for potential confounding by artefactual factors such as study design features. Such investigation of artefactual and true causes of heterogeneity forms an essential step in moving from a combined effect estimate to application to particular populations and individuals.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 03-2017
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 08-2022
DOI: 10.2215/CJN.00180122
Abstract: Hyperkalemia after starting renin-angiotensin system inhibitors has been shown to be subsequently associated with a higher risk of cardiovascular and kidney outcomes. However, whether to continue or discontinue the drug after hyperkalemia remains unclear. Data came from the Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) trial, which included a run-in period in which all participants initiated angiotensin-converting enzyme inhibitor–based therapy (a fixed combination of perindopril and indapamide). The study population was taken as patients with type 2 diabetes with normokalemia (serum potassium of 3.5 to <5.0 mEq/L) at the start of run-in. Potassium was remeasured 3 weeks later, when a total of 9694 participants were classified into hyperkalemia (≥5.0 mEq/L), normokalemia, and hypokalemia (<3.5 mEq/L) groups. After run-in, patients were randomized to continuation of the angiotensin-converting enzyme inhibitor–based therapy or placebo; major macrovascular, microvascular, and mortality outcomes were analyzed using Cox regression during the following 4.4 years (median). During active run-in, 556 (6%) participants experienced hyperkalemia. During follow-up, 1505 participants experienced the primary composite outcome of major macrovascular and microvascular events. Randomized treatment with angiotensin-converting enzyme inhibitor–based therapy significantly decreased the risk of the primary outcome (38.1 versus 42.0 per 1000 person-years; hazard ratio, 0.91; 95% confidence interval, 0.83 to 1.00; P=0.04) compared with placebo. The magnitude of effects did not differ across subgroups defined by short-term changes in serum potassium during run-in (P for heterogeneity=0.66). Similar consistent treatment effects were also observed for all-cause death, cardiovascular death, major coronary events, major cerebrovascular events, and new or worsening nephropathy (P for heterogeneity ≥0.27).
Continuation of angiotensin-converting enzyme inhibitor–based therapy consistently decreased the subsequent risk of clinical outcomes, including cardiovascular and kidney outcomes and death, regardless of short-term changes in serum potassium. Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE), NCT00145925
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 06-2008
DOI: 10.1111/J.1572-0241.2008.01875.X
Abstract: Reducing mortality from colorectal cancer (CRC) may be achieved by the introduction of population-based screening programs. The aim of the systematic review was to update previous research to determine whether screening for CRC using the fecal occult blood test (FOBT) reduces CRC mortality and to consider the benefits, harms, and potential consequences of screening. We searched eight electronic databases (Cochrane Library, MEDLINE, EMBASE, CINAHL, PsychINFO, AMED, SIGLE, and HMIC). We identified nine articles describing four randomized controlled trials (RCTs) involving over 320,000 participants with follow-up ranging from 8 to 18 yr. The primary analyses used intention to screen, and a secondary analysis adjusted for nonattendance. We calculated the relative risks and risk differences for each trial, and then overall, using fixed and random effects models. Combined results from the four eligible RCTs indicated that screening resulted in a 16% reduction in the relative risk (RR) of CRC mortality (RR 0.84, 95% confidence interval [CI] 0.78-0.90). There was a 15% RR reduction (RR 0.85, 95% CI 0.78-0.92) in CRC mortality for studies that used biennial screening. When adjusted for screening attendance in the individual studies, there was a 25% RR reduction (RR 0.75, 95% CI 0.66-0.84) for those attending at least one round of screening using the FOBT. There was no difference in all-cause mortality (RR 1.00, 95% CI 0.99-1.02) or all-cause mortality excluding CRC (RR 1.01, 95% CI 1.00-1.03). The present review includes seven new publications and unpublished data concerning CRC screening using FOBT. This review confirms previous research demonstrating that FOBT screening reduces the risk of CRC mortality. The results also indicate that there is no difference in all-cause mortality between the screened and nonscreened populations.
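The pooling step described in the abstract above (combining trial relative risks under a fixed-effects model) can be sketched as inverse-variance weighting on the log relative-risk scale. This is a generic illustration, with standard errors reconstructed from reported 95% CIs; it is not the review's actual analysis, and the function name is hypothetical:

```python
import math

def pool_fixed_effect(studies):
    """Fixed-effect (inverse-variance) pooling of relative risks.

    Each study is given as (RR, lower 95% CI, upper 95% CI); the
    standard error of log(RR) is reconstructed from the CI width,
    since log CIs span +/- 1.96 standard errors around log(RR).
    Returns the pooled RR and its 95% CI.
    """
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1.0 / se ** 2          # inverse-variance weight
        num += weight * math.log(rr)
        den += weight
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)    # SE of the pooled log RR
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci
```

Pooling several studies this way always yields a narrower CI than any single study alone, which is the basic rationale for meta-analysis; a random-effects model would additionally widen the CI to reflect between-study heterogeneity.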
Publisher: BMJ
Date: 07-08-1999
Publisher: SAGE Publications
Date: 08-1994
Publisher: American Astronomical Society
Date: 16-10-2017
Publisher: Oxford University Press (OUP)
Date: 04-1995
Publisher: BMJ
Date: 04-01-2003
Abstract: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in a study, and to evaluate a study's generalisability. The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25 item checklist, by using evidence, whenever available. A prototype of a flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation and the reference standard, or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the STARD checklist and flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Publisher: Elsevier BV
Date: 2021
Publisher: Wiley
Date: 15-06-1998
DOI: 10.1002/(SICI)1097-0258(19980615)17:11<1215::AID-SIM844>3.0.CO;2-Y
Abstract: One way of examining trade-offs between quantity and quality of life (QOL) is to combine them into a single measure such as the quality-adjusted life year (QALY). If censoring occurs, then estimation presents some difficulties. One approach, known as Q-TWiST, is to define a series of health states, use a 'partitioned' survival analysis to calculate the average time in each state, and then weight each state according to its quality of life to calculate QALYs. Such health-state models, however, are unhelpful when the transitions between health states are unclear or if they do not adequately reflect variations in quality of life. We therefore examine an alternative analysis to be used when repeated measures of quality of life are available from individual patients in a clinical trial. The method proceeds by separating quality of life and survival, that is, dQALY/dt = S(t)Q(t), where S(t) is the survival curve, estimated from the standard Kaplan-Meier method, and Q(t) is the quality of life function, derived from individual repeated measures of quality of life. We derive single health-state (QALY) and multiple health-state (Q-TWiST) models and illustrate the approach by comparing different durations of adjuvant chemotherapy for breast cancer.
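The relation dQALY/dt = S(t)Q(t) in the abstract above means that QALYs are the area under the product of the survival curve and the quality-of-life function. A minimal numeric sketch follows, using trapezoidal integration over a hypothetical time grid; the paper estimates S(t) by Kaplan-Meier, which is not reproduced here:

```python
def qaly(times, survival, quality):
    """Trapezoidal estimate of QALYs = integral of S(t) * Q(t) dt.

    times    -- increasing time points (years)
    survival -- S(t) at each time point (probability alive)
    quality  -- Q(t) at each time point (QOL weight, 0..1)
    """
    total = 0.0
    for i in range(1, len(times)):
        f0 = survival[i - 1] * quality[i - 1]  # integrand at left edge
        f1 = survival[i] * quality[i]          # integrand at right edge
        total += 0.5 * (f0 + f1) * (times[i] - times[i - 1])
    return total
```

For instance, full survival at full quality over 5 years yields 5 QALYs, while declining survival or quality proportionally reduces the area.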
Publisher: Wiley
Date: 2008
Publisher: Wiley
Date: 08-09-2021
DOI: 10.5694/MJA2.51250
Publisher: Springer Science and Business Media LLC
Date: 14-01-2014
Publisher: Center for Open Science
Date: 08-07-2019
Abstract: Systematic review and meta-analysis are powerful tools to provide an unbiased overview of all available literature addressing a specific research question. However, systematic reviews are resource-intensive. To address this, the development of automation tools to aid systematic review research is increasing. Despite this, recent research suggests that uptake of these tools among evidence synthesis researchers is slow, with potential barriers including a steep learning curve, mismatched workflows, and lack of support. Here we propose a set of standards for automation tools and platforms that have been built to aid the systematic review community. The aim of these standards is to improve the integration of different tools into the research process and to increase transparency in the field of automation tools for evidence synthesis. The technical standards set out a minimum level and format of documentation required for publishing and disseminating automation tools. Further, we present an orchestrator platform, the Integration Interface, a system that brings compliant automation tools together, independent of programming language, into a succinct workflow. The Integration Interface aims to reduce the barriers associated with using one or more automation tools in the evidence synthesis research process.
Publisher: Springer Science and Business Media LLC
Date: 16-02-2009
Publisher: BMJ
Date: 10-2006
DOI: 10.1136/EBM.11.5.133
Publisher: Oxford University Press
Date: 05-2010
DOI: 10.1093/MED/9780199204854.003.020301
Abstract: You must always be students, learning and unlearning till your life’s end. Joseph Lister Neither our memories nor our textbooks are complete and up to date with all the research relevant to the patients we will see today. The scattering of necessary research across a vast ocean of literature makes it inaccessible at the point of clinical decision. The consequences for patient care have given rise to the discipline of evidence-based medicine (EBM), whose two central concerns are with the quality of research evidence and with its appropriate usage in clinical care....
Publisher: Elsevier BV
Date: 06-2006
DOI: 10.1016/J.AHJ.2005.07.014
Abstract: We compared the cost-effectiveness of pravastatin in a placebo-controlled trial in 5500 younger (31-64 years) and 3514 older patients (65-74 years) with previous acute coronary syndromes. Hospitalizations and long-term medication within the 6 years of the trial were estimated in all patients. Drug dosage, nursing home, and ambulatory care costs were estimated from substudies. Incremental costs per life saved of pravastatin relative to placebo were estimated from treatment effects and resource use. Over 6 years, pravastatin reduced all-cause mortality by 4.3% in the older patients and by 2.3% in the younger patients. Older patients assigned pravastatin had marginally lower costs of pravastatin and other medication over 6 years (A$4442 vs A$4637) but greater cost offsets (A$2061 vs A$897) from lower rates of hospitalizations. The incremental cost per life saved with pravastatin was A$55500 in the old and A$167200 in the young. Assuming no treatment effect beyond the study period, the life expectancy to age 82 years of additional survivors was 9.1 years in the older and 17.3 years in the younger. Estimated additional life-years saved from pravastatin therapy were 0.39 years for older and 0.40 years for younger patients. Incremental costs per life-year saved were A$7581 in the older and A$14944 in the younger, if discounted at 5% per annum. Pravastatin therapy was more cost-effective among older than younger patients because of their higher baseline risk and greater cost offsets, despite their shorter life expectancy.
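The cost-per-life-year figures in the abstract above depend on discounting future life-years at 5% per annum. A hypothetical sketch of that kind of calculation follows; the function names and inputs are illustrative, and it does not attempt to reproduce the study's exact figures:

```python
def discounted_life_years(years, rate=0.05):
    """Present value of one life-year gained in each of `years` future
    years, discounted at `rate` per annum."""
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

def cost_per_life_year(cost_per_life_saved, years_gained, rate=0.05):
    # Spread the incremental cost per life saved over the discounted
    # life-years gained per additional survivor (illustrative only).
    return cost_per_life_saved / discounted_life_years(years_gained, rate)
```

Because discounting shrinks the value of distant life-years, the discounted cost per life-year is always higher than the undiscounted figure, and the effect is larger for younger patients with longer remaining life expectancy.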
Publisher: Elsevier BV
Date: 09-2018
DOI: 10.1016/J.IJANTIMICAG.2018.04.005
Abstract: Large quantities of antimicrobials are given to food animals, particularly in feed, potentially increasing antimicrobial resistance in humans. However, the magnitude of this effect is unclear. We searched PubMed, Embase and Web of Science for studies on interventions that limited antimicrobial use in food animals, in any setting and context, to reduce antimicrobial resistance 1) in those food animals and 2) in humans. We validated our strategy by testing whether it identified known relevant studies. Data from included studies were extracted into pre-designed and pilot-tested forms. We included 104 articles containing 93 studies. Heterogeneity (different animal species, environs, antimicrobial classes, interventions, administration routes, sampling, and methods) was considerable, precluding meta-analysis. The evidence was therefore synthesised narratively. A total of 89 studies (3 directly, 86 indirectly) addressed whether limiting antimicrobial exposure in food animals led to decreased antimicrobial resistance in those animals. The evidence was adequate to conclude this, although the magnitude of the effect could not be quantified. Four studies (1 directly, 3 indirectly) examined whether withdrawal of antibiotics changed resistance of potential pathogens in retail food for human consumption, and in bacteria of humans themselves. The direct (observational) study of broiler hatchery in ovo antimicrobial injection found a credible effect in terms of size reduction and time sequences. Limiting antimicrobial use in food animals reduces antimicrobial resistance in food animals, and probably reduces antimicrobial resistance in humans. The magnitude of the effect cannot be quantified.
Publisher: SAGE Publications
Date: 13-12-2011
Abstract: We review controversies associated with randomized controlled trials (RCTs) stopped early for apparent benefit (truncated RCTs or tRCTs) and present our groups’ perspective. Long-established theory, simulations and recent empirical evidence demonstrate that tRCTs will on average overestimate treatment effects, and this overestimation may be large, particularly when tRCTs have a small number of events. Theoretical considerations and simulations demonstrate that on average, meta-analyses of RCTs with appropriate stopping rules will lead to only trivial overestimation of treatment effects. However, tRCTs will disproportionally contribute to meta-analytic estimates when tRCTs occur early in the sequence of trials with few subsequent studies, publication of nontruncated RCTs is delayed, there is publication bias, or tRCTs result in a ‘freezing’ effect in which ‘correcting’ trials are never undertaken. To avoid applying overestimates of effect to clinical decision-making, clinicians should view the results of individual tRCTs with small sample sizes and a small number of events with skepticism. Pooled effects from meta-analyses including tRCTs are likely to overestimate effect when there is a substantial difference in effect estimates between the tRCTs and the nontruncated RCTs, and in which the tRCTs have a substantial weight in the meta-analysis despite themselves having a relatively small number of events. Such circumstances call for sensitivity analyses omitting tRCTs.
Publisher: Cambridge University Press (CUP)
Date: 2000
DOI: 10.1017/S0266462300161124
Abstract: Objective: To adjust patients' time trade-off (TTO) scores using information on their utility functions for survival time to derive a measure of health state utility equivalent to the standard gamble (SG). Methods: A sample of 199 cardiovascular patients were asked three TTO and SG questions (to assess their own health state), and three certainty equivalent questions (to assess their utility function for survival time) in an interview. Results: Patients' utility functions for time were increasingly concave, but as this could not be modelled successfully, a constant function with an averaged level of concavity was used. The raw TTO scores were significantly higher than SG scores, while the adjusted TTO scores were equivalent to the SG. Conclusions: Raw time trade-off scores will give biased estimates of health state utility when patients' utility functions for time are not linear, but these can be adjusted to yield true utilities. The constant proportional risk-posture assumption of the conventional QALY model, on which previous attempts to adjust time trade-offs have been based, was not supported by the data.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 08-2009
DOI: 10.1161/HYPERTENSIONAHA.109.133041
Abstract: The relative importance of various blood pressure indices on cardiovascular risk in people with type 2 diabetes mellitus has not been established. This study compares the strengths of the associations between different baseline blood pressure variables (systolic blood pressure [SBP], diastolic blood pressure [DBP], pulse pressure [PP], and mean arterial pressure) and the 4.3-year risk of major cardiovascular events in the Action in Diabetes and Vascular Disease: Preterax and Diamicron-Modified Release Controlled Evaluation Study. Mean (SD) age for the 11 140 participants was 65.8 years (6.4 years). During follow-up, 1000 major cardiovascular events, 559 major coronary events, and 468 cardiovascular deaths were recorded. After adjustment for age, sex, and treatment allocation, the hazard ratios (95% CIs) associated with a 1-SD increment for the risk of major cardiovascular events were 1.17 (1.10 to 1.24) for SBP; 1.20 (1.13 to 1.28) for PP; 1.12 (1.05 to 1.19) for mean arterial pressure; and 1.04 (0.98 to 1.11) for DBP. The areas under the receiver operating characteristic curve were slightly higher for SBP and PP compared with mean arterial pressure and DBP for major cardiovascular and coronary events. Using achieved instead of baseline blood pressure values marginally improved the effect estimates for SBP, DBP, and mean arterial pressure, with no significant differences in the areas under the receiver operating characteristic curve between models with SBP and those with PP. In conclusion, SBP and PP are the 2 best and DBP is the least effective determinant of the risk of major cardiovascular outcomes in the relatively old patients with type 2 diabetes mellitus participating in the Action in Diabetes and Vascular Disease: Preterax and Diamicron-Modified Release Controlled Evaluation Study. However, SBP may be the simplest and most useful predictor across a wider range of age groups and populations.
Publisher: Annals of Family Medicine
Date: 03-10-2023
DOI: 10.1370/AFM.3029
Publisher: SAGE Publications
Date: 08-2018
Publisher: Center for Open Science
Date: 17-09-2019
Abstract: The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous and transparent at all stages of design, execution and reporting. Initiatives such as the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto have led the way, bringing much needed global attention to the importance of taking a considered, transparent and broad approach to assessing research quality. Since publication in 2012 the DORA principles have been signed up to by over 1500 organizations and nearly 15,000 individuals. Despite this significant progress, assessment of researchers still rarely includes considerations related to trustworthiness, rigor and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded (i.e., their careers are advanced) for behavior that leads to trustworthy research. The HKPs have been developed with the idea that their implementation could assist in how researchers are assessed for career advancement with a view to strengthen research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle we provide a rationale for its inclusion and provide examples where these principles are already being adopted.
Publisher: Elsevier BV
Date: 07-2017
Publisher: Royal College of General Practitioners
Date: 06-2008
Publisher: AMPCo
Date: 08-2001
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2015
DOI: 10.1161/HYPERTENSIONAHA.114.04421
Abstract: Blood pressure–lowering treatment reduces cardiovascular risk in patients with diabetes mellitus, but the effect varies between individuals. We sought to identify which patients benefit most from such treatment in a large clinical trial in type 2 diabetes mellitus. In Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE) participants (n=11 140), we estimated the individual patient 5-year absolute risk of major adverse cardiovascular events with and without treatment by perindopril–indapamide (4/1.25 mg). The difference between treated and untreated risk is the estimated individual patient’s absolute risk reduction (ARR). Predictions were based on a Cox proportional hazards model inclusive of demographic and clinical characteristics together with the observed relative treatment effect. The group-level effect of selectively treating patients with an estimated ARR above a range of decision thresholds was compared with treating everyone or those with a blood pressure /90 mm Hg using net benefit analysis. In ADVANCE, there was wide variation in treatment effects across individual patients. According to the algorithm, 43% of patients had a large predicted 5-year ARR of ≥1% (number-needed-to-treat [NNT5] ≤100) and 40% had an intermediate predicted ARR of 0.5% to 1% (NNT5 = 100–200). The proportion of patients with a small ARR of ≤0.5% (NNT5 ≥200) was 17%. Provided that one is prepared to treat at most 200 patients for 5 years to prevent 1 adverse outcome, prediction-based treatment yielded the highest net benefit. In conclusion, a multivariable treatment algorithm can identify those individuals who benefit most from blood pressure–lowering therapy in terms of ARR of major adverse cardiovascular events and may be used to guide treatment decisions in individual patients with diabetes. URL: www.clinicaltrials.gov . Unique identifier: NCT00145925.
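The decision thresholds in the abstract above map directly between absolute risk reduction and number needed to treat via NNT = 1/ARR. A minimal sketch (the function name is ours):

```python
def nnt(arr):
    """Number needed to treat over the risk horizon: 1 / absolute risk
    reduction (ARR expressed as a proportion, e.g. 0.01 for 1%)."""
    if arr <= 0:
        raise ValueError("NNT is undefined for a non-positive ARR")
    return 1.0 / arr

# Thresholds from the abstract: 5-year ARR >= 1% corresponds to NNT5 <= 100;
# ARR 0.5%-1% to NNT5 100-200; ARR <= 0.5% to NNT5 >= 200.
```

For example, an ARR of 0.5% over 5 years means roughly 200 patients must be treated for 5 years to prevent one adverse outcome.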
Publisher: Wiley
Date: 23-04-2003
DOI: 10.1046/J.1467-789X.2003.00099.X
Abstract: Dietary fat intake has been blamed for the increase in adiposity and has led to a worldwide effort to decrease the amount of fat in the diet. However, the comparative efficacy of this approach is debatable. Whilst short-term dietary intervention studies show that low-fat diets lead to weight loss in both healthy and overweight individuals, it is less clear if a reduction in fat intake is more efficacious than other dietary restrictions in the long term. The purpose of this systematic review was to determine the effectiveness of low-fat diets in achieving sustained weight loss when used for the express purpose of weight loss in obese or overweight people. A comprehensive search identified six studies that fulfilled our criteria for inclusion (randomized controlled trial, participants either overweight or obese, comparison of a low-fat diet with another type of weight-reducing diet, follow-up period that was at least 6 months in duration and inclusion of participants 18 years or older without serious disease). There were a total of 594 participants in the six trials. The duration of the intervention varied from 3 to 18 months with follow-up from 6 to 18 months. There were no significant differences between low-fat diets and other weight-reducing diets in terms of sustained weight loss. Furthermore, the overall weight loss at the 12-18-month follow-up in all studies was very small (2-4 kg). In overweight or obese individuals who are dieting for the purpose of weight reduction, low-fat diets are as efficacious as other weight-reducing diets for achieving sustained weight loss, but not more so.
Publisher: CSIRO Publishing
Date: 2016
DOI: 10.1071/PY14180
Abstract: Equity of access and reducing health inequities are key objectives of comprehensive primary health care. However, the supports required to target equity are fragile and vulnerable to changes in the fiscal and political environment. Six Australian primary healthcare services, five in South Australia and one in the Northern Territory, were followed over 5 years (2009–2013) of considerable change. Fifty-five interviews were conducted with service managers, staff, regional health executives and health department representatives in 2013 to examine how the changes had affected their practice regarding equity of access and responding to health inequity. At the four state government services, seven of 10 previously identified strategies for equity of access and services’ scope to facilitate access to other health services and to act on the social determinants of health inequity were now compromised or reduced in some way as a result of the changing policy environment. There was a mix of positive and negative changes at the non-government organisation. The community-controlled service increased their breadth of strategies used to address health equity. These different trajectories suggest the value of community governance, and highlight the need to monitor equity performance and advocate for the importance of health equity.
Publisher: SAGE Publications
Date: 02-1991
Publisher: Springer Science and Business Media LLC
Date: 08-2018
Publisher: BMJ
Date: 08-04-2019
Publisher: American College of Physicians
Date: 18-06-2013
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 03-2015
DOI: 10.1161/CIRCOUTCOMES.114.001381
Abstract: Complete reporting of all components of complex interventions is essential for translation of research evidence into clinical practice. Previous work has highlighted deficiencies in the reporting of nonpharmacological interventions; however, the reporting quality of exercise-based interventions for coronary heart disease has not been examined. A systematic search strategy was used to identify randomized controlled trials of exercise-based cardiac rehabilitation published until December 2013. Fifty-seven trials were included, reporting on 74 interventions. Intervention description completeness was assessed using the Template for Intervention Description and Replication checklist. Missing intervention details were then sought from additional published material and also by emailing corresponding authors. Only 6 interventions (8%) sufficiently described all required items within the main publication; this increased to 11 (15%) after searching for additional published material and 32 (43%) after contacting trial authors. Although location/setting and duration were consistently well reported in publications, complete descriptions of the exercise schedule, as well as details about its tailoring and progression, were missing for over half of interventions (complete for 42% and 36% of interventions, respectively). Although some authors (25/61) were able to provide missing intervention details when contacted, others could not be located (20) or did not respond (16). Inadequate reporting of cardiac rehabilitation interventions is a substantial problem, with essential information frequently missing, and for almost half of all interventions, unobtainable after publication. A conscientious effort to address this problem could facilitate an improvement in the quality of cardiac rehabilitation delivered in clinical practice.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 10-2011
Publisher: Wiley
Date: 23-08-2023
DOI: 10.1111/ADD.16328
Publisher: Oxford University Press (OUP)
Date: 09-2008
Abstract: This paper gives a practical account of why and how to learn to practise evidence based medicine while still in clinical training. It highlights practical benefits to learning the skills (such as passing exams, coping with information overload and helping patients), and explains how to manage each of the four essential steps (asking questions, acquiring information, appraising evidence, and applying the results). Key resources to give the trainee rapid access to evidence based answers are highlighted, as are efficient ways of keeping up to date with the emerging literature.
Publisher: BMJ
Date: 16-03-2002
Publisher: BMJ
Date: 03-04-2013
DOI: 10.1136/BMJ.F1895
Abstract: To estimate the probability of becoming high risk for cardiovascular disease among people at low and intermediate risk and not being treated for high blood pressure or lipid levels. Observational study. General communities in Japan and the United States. 13,757 participants of the Tokyo health check-up study and 3855 of the Framingham studies aged 30-74 years with complete data on risk equation covariates, not receiving blood pressure or cholesterol lowering treatment, and with an estimated risk of cardiovascular disease <20% within 10 years. We stratified participants on the basis of baseline risk: <5%, 5-<10%, 10-<15%, and 15-<20% using the Framingham equation. At baseline most participants had <5% risk (60.6% of Tokyo cohort and 45.7% of Framingham cohort) or 5-<10% risk (24.0% and 28.0%, respectively) of a cardiovascular event within 10 years. There was <10% probability of crossing the treatment threshold at 19, 8, and 3 years for the <5%, 5-<10%, and 10-<15% baseline risk groups, respectively, and >10% probability of crossing the treatment threshold at one year for the 15-<20% baseline risk group. Decisions on the frequency of remeasuring for cardiovascular risk should be made on the basis of baseline risk. Repeat risk estimation before 8-10 years is not warranted for most people initially not requiring treatment. However, remeasurement within a year seems warranted in those with an initial 15-<20% risk.
Publisher: BMJ
Date: 17-11-2021
DOI: 10.1136/BMJ.N2729
Publisher: BMJ
Date: 04-05-2006
Publisher: Elsevier BV
Date: 11-1997
DOI: 10.1016/S0895-4356(97)00122-4
Abstract: Two dichotomous screening tests are often compared by performing both tests in a sampled population, and submitting positive results on either test to verification by the reference standard. Unbiased estimates of the true positive and false positive rates of each test cannot be estimated directly. However, unbiased estimates of the relative true positive and relative false positive rates may be obtained. When one test has a higher true positive rate at the expense of a higher false positive rate, the trade-off is represented by the ratio of extra false positives detected to extra true positives detected. A 95% confidence interval for this ratio is derived. This ratio is prevalence dependent and only applies to the sampled population. For target populations of different prevalence, estimates of the ratio may be obtained if one of the following applies: (i) the test characteristics of one test are known; (ii) the relative prevalence is known; or (iii) certain assumptions are made.
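The quantities described above (relative rates and the trade-off ratio) can be sketched from verified-positive counts. The function names and counts are hypothetical; only subjects positive on either test are verified, so the shared denominators cancel in the ratios, as the abstract explains.

```python
def relative_rates(tp_a, fp_a, tp_b, fp_b):
    """Relative true positive rate and relative false positive rate of
    test A versus test B, estimable from verified positives alone
    (the unknown population denominators cancel in each ratio)."""
    return tp_a / tp_b, fp_a / fp_b

def tradeoff_ratio(tp_a, fp_a, tp_b, fp_b):
    """Extra false positives detected per extra true positive detected,
    when test A finds more of both than test B. Prevalence dependent:
    applies only to the sampled population."""
    extra_tp = tp_a - tp_b
    if extra_tp <= 0:
        raise ValueError("test A must detect more true positives than test B")
    return (fp_a - fp_b) / extra_tp

# Hypothetical verified counts: test A finds 90 TPs / 40 FPs,
# test B finds 80 TPs / 20 FPs among the same subjects.
```

With these invented counts, test A detects 10 extra true positives at the cost of 20 extra false positives, a trade-off ratio of 2.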
Publisher: AMPCo
Date: 04-2001
DOI: 10.5694/J.1326-5377.2001.TB143313.X
Abstract: General practitioners wanting to practise evidence-based medicine (EBM) are constrained by time factors and the great diversity of clinical problems they deal with. They need experience in knowing what questions to ask, in locating and evaluating the evidence, and in applying it. Conventional searching for the best evidence can be achieved in daily general practice. Sometimes the search can be performed during the consultation, but more often it can be done later and the patient can return for the "result". Case-based journal clubs provide a supportive environment for GPs to work together to find the best evidence at regular meetings. An evidence-based literature search service is being piloted to enhance decision-making for individual patients. A central facility provides the search and interprets the evidence in relation to individual cases. A request form and a "results" format make the service akin to pathology testing or imaging. Using EBM in general practice appears feasible. Major difficulties still exist before it can be practised by all GPs, but it has the potential to change the way doctors update their knowledge.
Publisher: Springer Science and Business Media LLC
Date: 06-1997
Publisher: Elsevier BV
Date: 08-2003
DOI: 10.1016/S0009-9260(03)00258-7
Abstract: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in a study and to evaluate the generalisability of its results. The standards for reporting of diagnostic accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a 2 day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, by using evidence whenever available. A prototype of a flow diagram provides information about the method of recruitment of patients, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Publisher: BMJ
Date: 12-2008
DOI: 10.1136/EBM.13.6.164
Publisher: Wiley
Date: 02-11-2018
DOI: 10.1111/MEDU.13410
Abstract: Complete reporting of intervention details in trials of evidence-based practice (EBP) educational interventions is essential to enable clinical educators to translate research evidence about interventions that have been shown to be effective into practice. In turn, this will improve the quality of EBP education. This study was designed to examine the completeness of reporting of EBP educational interventions in published studies and to assess whether missing details of educational interventions could be retrieved by searching additional sources and contacting study authors. A systematic review of controlled trials that had evaluated EBP educational interventions was conducted using a citation analysis technique. Forward and backward citations of the index articles were tracked until March 2016. The TIDieR (template for intervention description and replication) checklist was used to assess the completeness of intervention reporting. Missing details were sought from: (i) the original publication; (ii) additional publicly available sources; and (iii) the study authors. Eighty-three articles were included; 45 (54%) were randomised controlled trials (RCTs) and 38 (46%) were non-RCTs. The majority of trials (n = 62, 75%) involved medical professionals. None of the studies completely reported all of the main items of the educational intervention within the original publication or in additional sources. However, details became complete for 17 (20%) interventions after contact with the respective authors. The item most frequently missing was 'intervention materials', which was missing in 80 (96%) of the original publications, in additional sources for 77 (93%) interventions, and in 59 (71%) studies after contact with the authors. Authors of 69 studies were contacted; 33 provided the details requested.
The reporting of EBP educational interventions is incomplete and remained so for the majority of studies, even after study authors had been contacted for missing information. Collaborative efforts involving authors and editors are required to improve the completeness of reporting of EBP educational interventions.
Publisher: Springer Science and Business Media LLC
Date: 17-09-2014
Publisher: American College of Physicians
Date: 17-03-2020
Publisher: BMJ
Date: 05-10-1996
Publisher: Wiley
Date: 12-02-2010
DOI: 10.1111/J.1753-6405.1993.TB00149.X
Abstract: Quality-adjusted life years or QALYs are used to combine, in a single measure, information about the quantity and quality of life produced by a health intervention. They have been used as outcome measures in clinical trials and in cost-effectiveness analyses. This paper describes how QALYs are assessed and how they are used. Methodological and theoretical problems are discussed as are ethical objections to the utilitarian ethos underlying their use. It is concluded that QALYs are part of a technology that is still in development but, because of the lack of alternatives, they will certainly continue to be used. It is important to resolve the outstanding methodological issues and reach an ethical consensus to ensure that QALYs truly reflect community goals.
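The core QALY calculation the abstract above describes (quantity of life weighted by quality) reduces to a simple sum. The sketch below is our own illustration; the utility weights in the example are invented.

```python
def qalys(states):
    """Quality-adjusted life years: sum of (years in state * utility weight),
    with utilities on the conventional 0 (death) to 1 (full health) scale."""
    for years, utility in states:
        if not 0.0 <= utility <= 1.0:
            raise ValueError("utility weights lie on the 0-1 scale")
    return sum(years * utility for years, utility in states)

# Hypothetical patient trajectory: 2 years at utility 0.9 after an
# intervention, then 3 years at utility 0.6.
```

Two years at 0.9 plus three years at 0.6 yields 3.6 QALYs, against 5.0 for the same span in full health.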
Publisher: BMJ
Date: 12-02-2021
DOI: 10.1136/BMJ.N435
Publisher: JMIR Publications Inc.
Date: 04-11-2020
DOI: 10.2196/23081
Abstract: Timely and effective contact tracing is an essential public health measure for curbing the transmission of COVID-19. App-based contact tracing has the potential to optimize the resources of overstretched public health departments. However, its efficiency is dependent on widespread adoption. This study aimed to investigate the uptake of the Australian Government’s COVIDSafe app among Australians and examine the reasons why some Australians have not downloaded the app. An online national survey, with representative quotas for age and gender, was conducted between May 8 and May 11, 2020. Participants were excluded if they were a health care professional or had been tested for COVID-19. Of the 1802 potential participants contacted, 289 (16.0%) were excluded prior to completing the survey, 13 (0.7%) declined, and 1500 (83.2%) participated in the survey. Of the 1500 survey participants, 37.3% (n=560) had downloaded the COVIDSafe app, 18.7% (n=280) intended to do so, 27.7% (n=416) refused to do so, and 16.3% (n=244) were undecided. Equally proportioned reasons for not downloading the app included privacy (165/660, 25.0%) and technical concerns (159/660, 24.1%). Other reasons included the belief that social distancing was sufficient and the app was unnecessary (111/660, 16.8%), distrust in the government (73/660, 11.1%), and other miscellaneous responses (eg, apathy and following the decisions of others) (73/660, 11.1%). In addition, knowledge about COVIDSafe varied among participants, as some were confused about its purpose and capabilities. For the COVIDSafe app to be accepted by the public and used correctly, public health messages need to address the concerns of citizens, specifically privacy, data storage, and technical capabilities. 
Understanding the specific barriers preventing the uptake of contact tracing apps provides the opportunity to design targeted communication strategies aimed at strengthening public health initiatives, such as downloading and correctly using contact tracing apps.
Publisher: Wiley
Date: 29-03-2022
DOI: 10.5694/MJA2.51479
Publisher: National Institute for Health and Care Research
Date: 06-2012
DOI: 10.3310/HTA16290
Publisher: Wiley
Date: 30-05-1996
DOI: 10.1002/(SICI)1097-0258(19960530)15:10<969::AID-SIM211>3.0.CO;2-9
Abstract: In the present matched-cohort study, we investigated the efficacy of olanexidine gluconate in comparison with chlorhexidine-alcohol as an antiseptic agent in thoracic esophagectomy. A total of 372 patients with esophageal cancer who were scheduled to undergo thoracic esophagectomy between 2016 and 2018 were assigned to one of two groups based on the preoperative antiseptic agent used in thoracic esophagectomy. We investigated the incidence of surgical site infectious complications in the propensity-matched cohort. Based on the propensity score, 116 patients prepared with 1.5% olanexidine gluconate and 114 patients prepared with 1.0% chlorhexidine-alcohol as surgical skin antisepsis were selected. No significant intergroup differences were observed with respect to incisional surgical site infection (0.8% in the olanexidine group versus 0.8% in the chlorhexidine group) and deep fascial/organ space surgical site infection (1.7%/10.3% in the olanexidine group versus 3.5%/15.7% in the chlorhexidine group, p = 0.39 and p = 0.03, respectively). Notably, the respective incidences of surgical site infection except anastomotic leakage were 1.7% and 7.0% in the olanexidine and chlorhexidine groups (p = 0.04). Olanexidine gluconate was well tolerated and significantly reduced the incidence of surgical site infection except anastomotic leakage in comparison with chlorhexidine-alcohol as an antiseptic agent in thoracic esophagectomy with three-field lymph node dissection.
Publisher: Springer Science and Business Media LLC
Date: 24-07-2023
DOI: 10.1186/S12916-023-02966-9
Abstract: Chronic disease management (CDM) through sustained knowledge translation (KT) interventions ensures long-term, high-quality care. We assessed implementation of KT interventions for supporting CDM and their efficacy when sustained in older adults. Design: Systematic review with meta-analysis engaging 17 knowledge users using integrated KT. Eligibility criteria: Randomized controlled trials (RCTs) including adults (≥65 years old) with chronic disease(s), their caregivers, health and/or policy-decision makers receiving a KT intervention to carry out a CDM intervention for at least 12 months (versus other KT interventions or usual care). Information sources: We searched MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials from each database’s inception to March 2020. Outcome measures: Sustainability, fidelity, adherence of KT interventions for CDM practice, quality of life (QOL) and quality of care (QOC). Data extraction, risk of bias (ROB) assessment: We screened, abstracted and appraised articles (Effective Practice and Organisation of Care ROB tool) independently and in duplicate. Data synthesis: We performed both random-effects and fixed-effect meta-analyses and estimated mean differences (MDs) for continuous and odds ratios (ORs) for dichotomous data. We included 158 RCTs (973,074 participants [961,745 patients, 5540 caregivers, 5789 providers]) and 39 companion reports comprising 329 KT interventions, involving patients (43.2%), healthcare providers (20.7%) or both (10.9%). We identified 16 studies described as assessing sustainability in 8.1% of interventions, 67 studies as assessing adherence in 35.6% of interventions and 20 studies as assessing fidelity in 8.7% of the interventions.
Most meta-analyses suggested that KT interventions improved QOL, but imprecisely (36-item Short-Form mental [SF-36 mental]: MD 1.11, 95% confidence interval [CI] [−1.25, 3.47], 14 RCTs, 5876 participants, I² = 96%; European QOL-5 dimensions: MD 0.01, 95% CI [−0.01, 0.02], 15 RCTs, 6628 participants, I² = 25%; St George’s Respiratory Questionnaire: MD −2.12, 95% CI [−3.72, −0.51], 12 RCTs, 2893 participants, I² = 44%). KT interventions improved QOC (OR 1.55, 95% CI [1.29, 1.85], 12 RCTs, 5271 participants, I² = 21%). KT intervention sustainability was infrequently defined and assessed. Sustained KT interventions have the potential to improve QOL and QOC in older adults with CDM. However, their overall efficacy remains uncertain and varies by effect modifiers, including intervention type, chronic disease number, comorbidities, and participant age. PROSPERO CRD42018084810.
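The fixed-effect pooling reported above (mean differences for continuous outcomes, odds ratios for dichotomous ones) is standard inverse-variance weighting. A minimal sketch, not the review's actual analysis code; dichotomous data would be pooled on the log odds ratio scale before exponentiating.

```python
import math

def fixed_effect_pool(estimates, standard_errors):
    """Inverse-variance fixed-effect meta-analysis of per-study estimates
    (mean differences, or log odds ratios for dichotomous data).
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical two-study example: mean differences 1.0 and 2.0,
# each with standard error 1.0 (equal weights).
```

With equal standard errors the pooled estimate is simply the mean of the study estimates, and its confidence interval is narrower than any single study's.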
Publisher: Wiley
Date: 12-02-2010
DOI: 10.1111/J.1753-6405.1993.TB00103.X
Abstract: The purpose of this research was to estimate the cost-effectiveness of mammographic screening to supplement the results of the National Evaluation of Breast Cancer Screening, which identified the mortality benefit as the most sensitive parameter. This appraisal used a different computer model, MISCAN, which models the effects of introducing a national screening program into a previously unscreened population, rather than basing estimates on the assumption of a fully established program. For the 40 to 49 age group a mortality reduction of 8 per cent was assumed, rather than the 30 per cent estimate utilised in the National Evaluation. The revised estimate is based on the two Swedish trials (Malmö and WE). New estimates for treatment costs were also incorporated into the MISCAN model. The policy recommended in the National Evaluation Report, two-yearly screening of women over 40, was costed there at $11,000 per life year saved; the MISCAN model estimates $20,300. These differences arise partly from the difference in mortality effects for the 40 to 49 age group, but also from differences inherent in the steady-state and dynamic population approaches to modelling premature deaths averted. The MISCAN results confirm that screening women over 50 is more cost-effective than screening women under 50. Screening all women aged 50 to 69 every two to three years is reasonable value for money. For women aged 40 to 49 the mortality benefit and cost-effectiveness are less clear, and it would be prudent to allow screening in this group until further evidence is available.
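The cost-effectiveness ratios quoted above reduce to incremental program cost divided by life-years saved. A minimal sketch, with purely illustrative inputs (not the MISCAN model's actual cost or outcome figures):

```python
# Cost-effectiveness ratio: incremental cost divided by life-years saved.
# The inputs below are illustrative only, not figures from the MISCAN study.
def cost_per_life_year(incremental_cost, life_years_saved):
    return incremental_cost / life_years_saved

# For example, an incremental program cost of $20.3M that saves 1000
# life-years yields a ratio of the same order as the $20,300 quoted above.
print(cost_per_life_year(20_300_000, 1000))  # 20300.0
```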
Publisher: Oxford University Press
Date: 28-07-2016
Publisher: Wiley
Date: 09-2012
Abstract: Diagnosis of heart failure in primary care is often inaccurate, and access to and use of echocardiography is suboptimal. This study aimed to develop and provisionally validate a clinical prediction rule to optimize referral for echocardiography of people identified in primary care with suspected heart failure. A systematic review identified studies of diagnosis of heart failure set in primary care. The individual patient data for five of these studies were obtained. Logistic regression models to predict heart failure were developed on one of the data sets and validated on the others using area under the receiver operating characteristic curve (AUROC), and goodness-of-fit calibration plots. A model based upon four simple clinical features (Male, history of myocardial Infarction, Crepitations, Edema: MICE) and natriuretic peptide had good validity when applied to other data sets, with AUROCs between 0.84 and 0.93, and reasonable calibration. The rule performed well across the data sets, with sensitivity between 81% and 96% and specificity between 57% and 74%. A simple clinical rule based upon gender, history of myocardial infarction, presence of ankle oedema, and presence of basal lung crepitations can discriminate between people with suspected heart failure who should be referred straight for echocardiography and people for whom referral should depend upon the result of a natriuretic peptide test. Prospective validation and an implementation evaluation of the rule are now warranted.
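A logistic prediction rule of the MICE form scores a patient by summing feature coefficients and passing the result through the logistic function. The coefficients below are hypothetical placeholders, not those of the published rule:

```python
import math

# Hypothetical coefficients for a MICE-style logistic model (Male, history of
# myocardial Infarction, Crepitations, Edema). The published rule's actual
# coefficients and intercept differ; these values are for illustration only.
COEFS = {"male": 0.6, "infarction": 0.9, "crepitations": 1.1, "edema": 0.8}
INTERCEPT = -2.0

def heart_failure_probability(male, infarction, crepitations, edema):
    """p = 1 / (1 + exp(-(intercept + weighted sum of binary features)))."""
    z = INTERCEPT
    z += COEFS["male"] * male
    z += COEFS["infarction"] * infarction
    z += COEFS["crepitations"] * crepitations
    z += COEFS["edema"] * edema
    return 1 / (1 + math.exp(-z))

# A patient with all four features scores much higher than one with none.
p_high = heart_failure_probability(1, 1, 1, 1)
p_low = heart_failure_probability(0, 0, 0, 0)
print(f"no features: {p_low:.3f}, all features: {p_high:.3f}")
```

In practice a referral threshold on this probability (or on the raw score) would decide between immediate echocardiography and a natriuretic peptide test first.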
Publisher: BMJ
Date: 04-2009
Publisher: BMJ
Date: 04-2200
Publisher: BMJ
Date: 05-2014
Publisher: BMJ
Date: 04-2008
Abstract: To determine the effectiveness and cost-effectiveness of height screening (of children aged 4 to 11) to identify height-related conditions. Systematic review and economic modelling. We included published and unpublished screening studies of any design, except case reports, conducted in any setting that measured children's height as part of a population-level assessment. Studies were identified by electronic database searches, contact with experts and from bibliographies of retrieved studies. Children aged between 4 and 11 years. Diagnostic yield of height-related conditions and change in quality of life, as measured by quality-adjusted life years (QALYs), for early versus late treatment of underlying conditions. Twelve studies described a height-screening programme and provided data on the diagnostic yield of newly diagnosed height-related conditions. Where reported, yield for growth-hormone deficiency (per 1000 children screened) ranged from 0.05 (1 in 20,000) to 0.62 (approximately 1 in 1500) and for Turner syndrome (per 1000 children screened) was between 0.02 (1 in 50,000) and 0.07 (approximately 1 in 14,000). As a secondary gain, children with other potentially treatable conditions were identified; diagnostic yields ranged from 0.22 to 1.84 per 1000 children screened. Three studies did not detect any new cases, but all of these studies had methodological limitations. Economic modelling suggested that height screening is associated with health improvements and is cost effective for a willingness-to-pay threshold of £30,000 per QALY. This review indicates the utility and acceptable cost-effectiveness of height screening arising from increased detection of height-related disorders and secondary pick-up of other undiagnosed conditions. Further research is needed to obtain more reliable data on quality of life gains and costs associated with early interventions for height-related conditions.
The exact role of height-screening programmes in improving child health remains to be determined.
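The "1 in N" rates quoted alongside the per-1000 yields above follow from a simple conversion, sketched here with the abstract's own figures:

```python
# Convert a diagnostic yield per 1000 children screened into a "1 in N" rate,
# mirroring the arithmetic in the abstract (e.g. 0.05 per 1000 = 1 in 20,000).
def one_in_n(yield_per_1000):
    return round(1000 / yield_per_1000)

print(one_in_n(0.05))  # growth-hormone deficiency, lower bound: 1 in 20000
print(one_in_n(0.62))  # upper bound: approximately 1 in 1600
print(one_in_n(0.02))  # Turner syndrome, lower bound: 1 in 50000
```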
Publisher: Georg Thieme Verlag KG
Date: 03-2016
Publisher: Oxford University Press (OUP)
Date: 28-03-2014
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 07-2020
End Date: 07-2024
Amount: $390,000.00
Funder: Australian Research Council