ORCID Profile
0000-0002-5594-9737
Current Organisations
Nara Institute of Science and Technology (NAIST)
Flinders University, Flinders Health and Medical Research Institute
Publisher: Walter de Gruyter GmbH
Date: 06-2022
Abstract: One approach to assessing reference material (RM) commutability and agreement with clinical samples (CS) is to use ordinary least squares or Deming regression with prediction intervals. This approach assumes constant variance, which may not be fulfilled by the measurement procedures. Flexible regression frameworks which relax this assumption, such as quantile regression or generalized additive models for location, scale, and shape (GAMLSS), have recently been implemented and can model the changing variance with measurand concentration. We simulated four imprecision profiles, ranging from simple constant variance to complex mixtures of constant and proportional variance, and examined the effects on commutability assessment outcomes with the above four regression frameworks, varying the number of CS, data transformations and RM location relative to CS concentration. Regression framework performance was determined by the proportion of false rejections of commutability from prediction intervals or centiles across relative RM concentrations and was compared with the expected nominal probability coverage. In simple variance profiles (constant or proportional variance), Deming regression, without or with logarithmic transformation respectively, is the most efficient approach. In mixed variance profiles, GAMLSS with smoothing techniques are more appropriate, with consideration given to increasing the number of CS and the relative location of RM. In the case where analytical coefficient of variation profiles are U-shaped, even the more flexible regression frameworks may not be entirely suitable. In commutability assessments, the variance profiles of measurement procedures and the location of RM with respect to clinical sample concentration significantly influence the false rejection rate of commutability.
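Of the regression frameworks compared in this abstract, ordinary (unweighted) Deming regression is simple enough to sketch directly. The snippet below is an illustrative implementation only, not the code used in the study; the error-variance ratio `lam` (defaulting to 1.0) and the example data are assumptions:

```python
import math

def deming(x, y, lam=1.0):
    """Unweighted Deming regression assuming error-variance ratio lam.

    Returns (slope, intercept) of the fitted structural relationship
    between two measurement procedures measuring the same samples.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # Closed-form Deming slope from the variance/covariance terms
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

On perfectly linear data y = 2x + 1 the fit recovers slope 2 and intercept 1; real commutability assessments would add prediction intervals around this fitted line.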
Publisher: Elsevier BV
Date: 04-2017
DOI: 10.1016/J.CLINBIOCHEM.2016.11.025
Abstract: The clinical catchment area for the Metabolic service at the Women's and Children's Hospital in Adelaide, South Australia, covers nearly 2.5 million km². Multiple aliquots of plasma from remainder EDTA samples for haematological investigations were frozen. Samples were then dispatched on dry ice to the laboratories being correlated. At an agreed date and time, correlation samples were thawed and plasma ammonia measured. Passing-Bablok regression analysis showed slopes ranging from 1.00 to 1.10 and y-intercepts ranging from -10 μmol/L to 1 μmol/L. Despite the absence of a reference method or reference material and troublesome pre-analytical effects in ammonia measurement, plasma ammonia results from the different platforms in general compare well. The study also demonstrates that samples for ammonia measurement can be transported over great distances and still correlate well. Furthermore, a common reference interval for plasma ammonia may be a possibility.
Publisher: Elsevier BV
Date: 07-10-2005
DOI: 10.1016/J.CYTO.2005.06.010
Abstract: Intracytoplasmic detection of leucocyte cytokines has become a powerful tool for the characterisation of cytokine-producing cells in heterogeneous cell populations; however, the effect of specimen storage conditions is unknown. The aim of this study was to determine the effect of whole blood stored at room temperature (RT) or 4 degrees C on intracellular cytokine production by T cells and monocytes. In cell cultures stored at RT or 4 degrees C for 24 h, significant changes in several leucocyte cytokines/chemokines were shown compared to blood cultures stimulated at time=0. There was a significant decrease in IL-2, IL-4 and TNFalpha production by CD4+ T cells in blood cultures stored at RT but an increase in IL-2 in cultures at 4 degrees C. There was a significant decrease in TGFbeta production by CD4+ and CD8+ T cells in cultures kept at RT or 4 degrees C. There was a significant increase in MCP-1 and MCP-3 production by monocytes in blood cultures kept at RT or 4 degrees C. There was a decrease in IL-12 production by monocytes in cultures kept at 4 degrees C, whereas IL-10 production was decreased at RT and increased in cultures kept at 4 degrees C. Blood stored at 4 degrees C showed fewer immunomodulatory changes than blood kept at RT, although overall there was a possible Th1 bias at 4 degrees C.
Publisher: Annals of Laboratory Medicine
Date: 23-10-2024
Publisher: Oxford University Press (OUP)
Date: 27-06-2020
DOI: 10.1093/AJCP/AQAA063
Publisher: Oxford University Press (OUP)
Date: 11-07-2020
DOI: 10.1093/AJCP/AQAA087
Publisher: Wiley
Date: 07-07-2020
DOI: 10.1111/VOX.12969
Publisher: Informa UK Limited
Date: 15-12-2022
Publisher: Walter de Gruyter GmbH
Date: 03-02-2022
Abstract: Within-subject biological variation (CVi) is a fundamental aspect of laboratory medicine, from the interpretation of serial results to the partitioning of reference intervals and the setting of analytical performance specifications. Four indirect (data mining) approaches to the determination of CVi were directly compared. Paired serial laboratory results for 5,000 patients were simulated using four parameters: d, the percentage difference in the means between the pathological and non-pathological populations; CVi, the within-subject coefficient of variation for non-pathological values; f, the fraction of pathological values; and e, the relative increase in CVi of the pathological distribution. These parameters resulted in a total of 128 permutations. The performance of the Expected Mean Squares method (EMS), the median method, a result ratio method with Tukey’s outlier exclusion method and a modified result ratio method with Tukey’s outlier exclusion were compared. Within the 128 permutations examined in this study, the EMS method performed the best, with 101/128 permutations falling within ±0.20 fractional error of the ‘true’ simulated CVi, followed by the result ratio method with Tukey’s exclusion method for 78/128 permutations. The median method grossly under-estimated the CVi. The modified result ratio with Tukey’s rule performed best overall, with 114/128 permutations within allowable error. This simulation study demonstrates that with careful selection of the statistical approach the influence of outliers from pathological populations can be minimised, and it is possible to recover CVi values close to the ‘true’ underlying non-pathological population. This finding provides further evidence for the use of routine laboratory databases in the derivation of biological variation components.
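One plausible reading of the result ratio method with Tukey's exclusion can be sketched as follows. This is illustrative only, not the simulation code used in the study; the quartile method, the √2 correction for two-result ratios, and the example data are all assumptions:

```python
import statistics

def cvi_result_ratio(pairs):
    """Estimate within-subject CV from paired serial results.

    Each pair is (first result, second result) for one patient; the
    spread of second/first ratios reflects two draws of within-subject
    variation, hence the sqrt(2) correction. Returns CVi as a fraction.
    """
    ratios = [b / a for a, b in pairs]
    q1, _, q3 = statistics.quantiles(ratios, n=4, method="inclusive")
    iqr = q3 - q1
    # Tukey's fences: drop ratios presumed to reflect pathological change
    kept = [r for r in ratios if q1 - 1.5 * iqr <= r <= q3 + 1.5 * iqr]
    return statistics.stdev(kept) / (2 ** 0.5)
```

For example pairs whose ratios cluster near 1.0 plus one extreme outlier, the outlier falls outside Tukey's fences and the estimate is driven by the remaining ratios.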
Publisher: Elsevier BV
Date: 04-2023
Publisher: Elsevier BV
Date: 05-2022
DOI: 10.1016/J.CLINBIOCHEM.2022.02.006
Abstract: Indirect reference intervals and biological variation studies rely heavily on statistical methods to separate pathological and non-pathological subpopulations within the same dataset. In recognition of this, we compare the performance of eight univariate statistical methods for the identification and exclusion of values originating from pathological subpopulations. The eight approaches examined were: Tukey's rule with and without Box-Cox transformation; median absolute deviation; double median absolute deviation; Gaussian mixture models; van der Loo (Vdl) methods 1 and 2; and the Kosmic approach. Four scenarios, including lognormal distributions, were used, varying the conditions through the number of pathological populations, central location, spread and proportion, for a total of 256 simulated mixed populations. A performance criterion of ±0.05 fractional error from the true underlying lower and upper reference interval was chosen. Overall, the Kosmic method was a standout, with the highest number of scenarios lying within the acceptable error, followed by Vdl method 1 and Tukey's rule. Kosmic and Vdl method 1 appear to discriminate the non-pathological reference population better in the case of log-normally distributed data. When the proportion and spread of pathological subpopulations are high, the performance of statistical exclusion deteriorates considerably. It is important that laboratories use a priori defined clinical criteria to minimise the proportion of the pathological subpopulation in a dataset prior to analysis. The curated dataset should then be carefully examined so that the appropriate statistical method can be applied.
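Tukey's rule, the first of the eight approaches listed, is simple to sketch. The fence multiplier k=1.5 below is the conventional choice and an assumption here; the study's exact configuration is not reproduced:

```python
import statistics

def tukey_exclude(values, k=1.5):
    """Drop values outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    return [v for v in values if q1 - k * iqr <= v <= q3 + k * iqr]
```

Applied to a mostly compact dataset with one extreme value, the extreme value is excluded and the bulk of the distribution is retained.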
Publisher: Elsevier BV
Date: 06-2014
Publisher: Elsevier BV
Date: 02-2021
Publisher: Walter de Gruyter GmbH
Date: 16-05-2022
Abstract: Detection of between-lot reagent bias is clinically important and can be assessed by the application of regression-based statistics on several paired measurements obtained from the existing and new candidate lots. Here, the bias detection capability of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland–Altman with regression approach, is compared. Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses and Bland–Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: differing result range ratios (upper:lower measurement limits), levels of significance (alpha), constant and proportional biases, analytical coefficients of variation (CV), and numbers of replicates and sample sizes. The sample concentrations simulated were drawn from a uniformly distributed concentration range. At a low range ratio (1:10, CV 3%), the BA-R performed the best, albeit with a higher false rejection rate, closely followed by the weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R performed poorly and the weighted regression approaches performed the best. At higher assay imprecision (CV 10%), all six approaches performed poorly, with low bias detection rates. A lower alpha reduced the false rejection rate, while greater sample numbers and replicates improved bias detection. When performing reagent lot verification, laboratories need to finely balance the false rejection rate (selecting an appropriate alpha) with the power of bias detection (an appropriate statistical approach to match assay performance characteristics) and operational considerations (number of clinical samples and replicates, not having an alternate reagent lot).
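The plain Bland–Altman component underlying the BA-R approach (without the regression extension studied) can be sketched as follows; the 1.96 multiplier and the example values are illustrative assumptions, not the study's settings:

```python
import statistics

def bland_altman_limits(existing_lot, candidate_lot, k=1.96):
    """Mean bias and limits of agreement on paired lot measurements."""
    diffs = [b - a for a, b in zip(existing_lot, candidate_lot)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias - k * sd, bias, bias + k * sd
```

A constant between-lot bias shows up as a mean difference away from zero; the width of the limits reflects the combined imprecision of the two lots.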
Publisher: Walter de Gruyter GmbH
Date: 24-11-2022
Abstract: Lot-to-lot verification is an integral component of monitoring the long-term stability of a measurement procedure. The practice is challenged by the resource requirements as well as uncertainty surrounding the experimental design and statistical analysis that are optimal for individual laboratories, although guidance is becoming increasingly available. Collaborative verification efforts as well as the application of patient-based monitoring are likely to further improve the identification of any differences in performance in a relatively timely manner. Appropriate follow-up actions for a failed lot-to-lot verification are required and must balance potential disruptions to clinical services provided by the laboratory. Manufacturers need to increase transparency surrounding release criteria and work more closely with laboratory professionals to ensure acceptable reagent lots are released to end users. A tripartite collaboration between regulatory bodies, manufacturers, and laboratory medicine professional bodies is key to developing a balanced system where the regulatory, manufacturing, and clinical requirements of laboratory testing are met, to minimize differences between reagent lots and ensure patient safety. Clinical Chemistry and Laboratory Medicine has served as a fertile platform for advancing the discussion and practice of lot-to-lot verification in the past 60 years and will continue to be an advocate of this important topic for many more years to come.
Publisher: SAGE Publications
Date: 13-02-2020
Abstract: The interpretation of delta check rules in a panel of tests should differ from that at the single analyte level, as the number of hypothesis tests conducted (i.e. the number of delta check rules) is greater and needs to be taken into account. De-identified paediatric laboratory results were extracted, and the first two serial results for each patient were used for analysis. Analytes were grouped into four common laboratory test panels consisting of renal, liver, bone and full blood count panels. The sensitivities and specificities of delta check limits as discrete panel tests were assessed by random permutation of the original data-set to simulate a wrong blood in tube situation. Generally, as the number of analytes included in a panel increases, the delta check rules deteriorate considerably due to the increased number of false positives, i.e. the increased number of hypothesis tests performed. To reduce high false-positive rates, patient results may be rejected from autovalidation only if the number of analytes failing the delta check limits exceeds a certain threshold of the total number of analytes in the panel (N). Our study found that the use of the [Formula: see text] rule for panel results had a high specificity and a sensitivity ranging from 25% to 45% across the four common laboratory panels. However, this did not achieve performance close to that of some analytes when considered in isolation. The simple [Formula: see text] rule reduces the false-positive rate and minimizes unnecessary, resource-intensive investigations of potentially erroneous results.
Publisher: Elsevier BV
Date: 11-2023
Publisher: Wiley
Date: 21-02-2019
DOI: 10.1111/BJH.15127
Publisher: Informa UK Limited
Date: 14-08-2020
Publisher: Wiley
Date: 24-10-2021
DOI: 10.1111/AJO.13260
Publisher: Annals of Laboratory Medicine
Date: 09-2022
Publisher: Walter de Gruyter GmbH
Date: 04-11-2022
Abstract: Method evaluation is one of the critical components of the quality system that ensures the ongoing quality of a clinical laboratory. As part of implementing new methods or reviewing best practices, the peer-reviewed published literature is often searched for guidance. From the outset, Clinical Chemistry and Laboratory Medicine (CCLM) has had a rich history of publishing methods relevant to clinical laboratory medicine. An insight into submissions, from editors’ and reviewers’ experiences, shows that authors still struggle with method evaluation, particularly the appropriate requirements for validation in clinical laboratory medicine. Here, we consider through a series of discussion points an overview of the status, challenges, and needs of method evaluation from the perspective of clinical laboratory medicine. We identify six key high-level aspects of clinical laboratory method evaluation that potentially lead to inconsistency: 1. Standardisation of terminology, 2. Selection of analytical performance specifications, 3. Experimental design of method evaluation, 4. Sample requirements of method evaluation, 5. Statistical assessment and interpretation of method evaluation data, and 6. Reporting of method evaluation data. Each of these areas requires considerable work to harmonise the practice of method evaluation in laboratory medicine, including more empirical studies to be incorporated into guidance documents that are relevant to clinical laboratories and are freely and widely available. To further close the loop, educational activities and fostering professional collaborations are essential to promote and improve the practice of method evaluation procedures.
Publisher: Elsevier BV
Date: 06-2020
Publisher: Elsevier BV
Date: 04-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2021
Publisher: Elsevier BV
Date: 05-2022
Publisher: Elsevier BV
Date: 10-2006
Publisher: Wiley
Date: 09-05-2003
DOI: 10.1046/J.1440-1843.2003.00459.X
Abstract: Infants with Bordetella pertussis infection (whooping cough) have an unexplained lymphocytosis and leucocytosis characterized by an increase in small lymphocytes with convoluted and cleaved nuclei. To characterize these cells, immunophenotyping using multiparameter flow cytometry was performed on leucocytes from a group of 11 infants aged 3-6 months with proven pertussis and from uninfected control subjects. The panel of monoclonal antibodies used to elucidate leucocyte subtypes included activation, adhesion, costimulatory, memory, T-helper (Th) 1 and Th2 markers. Patients with pertussis showed an increase in absolute numbers of neutrophils, monocytes, T lymphocytes (both CD4 and CD8), B lymphocytes (including CD10+/CD19+ haematogones) and natural killer (NK) cells. All leucocyte subgroups showed a marked decrease in L-selectin (CD62L) expression. The expression of other adhesion molecules CD11a, CD44 and CD54 on all leucocyte subgroups was unchanged. Expression of costimulatory molecules, CD49D and CD28 on T cells and CD80 and CD86 on monocytes, was unchanged. Lymphocyte activation markers CD69, CD25 and HLA-DR were unchanged. There was an increase in CD45RA+/CD45RO+/CD4+ cells (activated) and CD62L-/CD45RO+/CD4+ cells (Th1-like) but no increase in CD7-/CD4+ T cells (Th2-like). L-Selectin expression mediates extravasation of leucocytes into tissues and is important for homing of peripheral blood lymphocytes to lymph nodes. The significant down-regulation of L-selectin on leucocytes in pertussis infection may prevent leucocyte migration to areas of infection and homing and adhesion of T and B cells to peripheral lymphoid tissues. The increase in lymphocytes with Th1 phenotype may be required for effective immune response to the infective organism. These data provide a possible explanation for the absolute leucocytosis observed in this disease.
Publisher: Walter de Gruyter GmbH
Date: 06-08-2021
Abstract: Multicentre international trials relying on diagnoses derived from biochemical results may overlook the importance of assay standardisation across the participating laboratories. Here we describe a study protocol aimed at harmonising results from total bile acid determinations within the context of an international randomised controlled Trial of two treatments, URsodeoxycholic acid and RIFampicin, for women with severe early onset Intrahepatic Cholestasis of pregnancy (TURRIFIC), referred to as the Bile Acid Comparison and Harmonisation (BACH) study, with the aim of reducing inter-laboratory heterogeneity in total bile acid assays. We have simulated laboratory data to determine the feasibility of total bile acid recalibration using a reference set of patient samples with a consensus value approach, and subsequently used regression-based techniques to transform the data. From these simulations, we have demonstrated that mathematical recalibration of total bile acid results is plausible, with a high probability of successfully harmonising results across participating laboratories. Standardisation of bile acid results facilitates the commutability of laboratory results and their collation for statistical analysis. It may provide the momentum for broader application of the described techniques in the setting of large-scale multinational clinical trials dependent on results from non-standardised assays.
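The regression-based transformation described above can be illustrated with ordinary least squares. This is a sketch under assumed data; the BACH study's actual recalibration models may be more elaborate:

```python
def fit_recalibration(local, consensus):
    """OLS fit mapping one laboratory's bile acid results onto the
    consensus scale: consensus ≈ a + b * local."""
    n = len(local)
    mx, my = sum(local) / n, sum(consensus) / n
    b = sum((x - mx) * (y - my) for x, y in zip(local, consensus)) \
        / sum((x - mx) ** 2 for x in local)
    a = my - b * mx
    return a, b

def recalibrate(a, b, results):
    """Transform new local results onto the harmonised scale."""
    return [a + b * r for r in results]
```

Once fitted against a reference set of shared patient samples, the same (a, b) pair can be applied to that laboratory's future results before pooling for trial analysis.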
Publisher: Oxford University Press (OUP)
Date: 10-06-2021
DOI: 10.1093/AJCP/AQAB049
Abstract: We examined the false acceptance rate (FAR) and false rejection rate (FRR) of varying precision verification experimental designs. Analysis of variance was applied to derive the subcomponents of imprecision (ie, repeatability, between-run and between-day imprecision) for complex matrix experimental designs (day × run × replicate; day × run). For simple nonmatrix designs (1 day × multiple replicates or multiday × 1 replicate), ordinary standard deviations were calculated. The FAR and FRR in these different scenarios were estimated. The FRR increased as more samples were included in the precision experiment. The application of an upper verification limit, which seeks to cap the FRR at 5% for multiple experiments, significantly increased the FAR. The FRR decreases as the observed imprecision increases relative to the claimed imprecision and when a greater number of days, runs, or replicates are included in the verification design. Increasing the number of days, runs, or replicates also reduces the FAR for between-day imprecision and repeatability. The design of verification experiments should incorporate the local availability of resources and analytical expertise. The largest imprecision component should be targeted with a greater number of measurements. Consideration of both the FAR and FRR should be given when committing a platform into service.
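The analysis of variance step can be sketched for the simplest balanced one-way case (day groups with replicates). This is an illustrative expected-mean-squares decomposition, not the study's full day × run × replicate model:

```python
def variance_components(groups):
    """Balanced one-way ANOVA: returns (within-group, between-group)
    variance components via expected mean squares. Within-group
    variance corresponds to repeatability; between-group to, e.g.,
    between-day imprecision."""
    k = len(groups)
    n = len(groups[0])  # balanced design assumed: equal replicates/day
    grand = sum(sum(g) for g in groups) / (k * n)
    ss_within = sum(sum((x - sum(g) / n) ** 2 for x in g) for g in groups)
    ss_between = n * sum((sum(g) / n - grand) ** 2 for g in groups)
    ms_within = ss_within / (k * (n - 1))
    ms_between = ss_between / (k - 1)
    # EMS: E[MS_between] = var_within + n * var_between
    var_between = max(0.0, (ms_between - ms_within) / n)
    return ms_within, var_between
```

Total imprecision is then the square root of the summed components, and each component can be compared against the manufacturer's claim.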
Publisher: Oxford University Press (OUP)
Date: 31-12-2019
DOI: 10.1093/AJCP/AQZ201
Abstract: Preanalytical processes in pediatric patients are generally manual and associated with a higher risk of error. Optimized delta check rules for detecting misidentified children's samples are examined. Relative difference and absolute difference delta check limits were applied to original and reshuffled (to simulate sample mislabeling/mix-up) paired deidentified pediatric results of 57 laboratory tests. The sensitivity, specificity, and accuracy of a range of delta check limits were determined. The delta check limit associated with the highest accuracy was considered optimal. In general, the delta check limits had poor to moderate accuracy (0.50-0.81) in detecting misidentified patient samples. The sensitivity (rule out misidentified sample) quickly deteriorated at increasing delta check limits. At the same time, the specificity (rule in misidentified sample) of the delta check limit was also low. The performance of the relative difference and absolute difference delta check rules was similar. Our findings showed poor delta check performance in the pediatric population. The high false-positive flag rate may lead to wasteful resource-intensive investigations and delays in result reporting. In addition, we observed that the optimized pediatric delta check correlated strongly with within-subject biologic variation, whereas delta check accuracy correlated poorly with the index of individuality.
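The evaluation scheme described — original pairs should pass the delta check, reshuffled (misidentified) pairs should fail it — can be sketched as below. The relative-difference form and the example data are assumptions for illustration:

```python
def delta_check_performance(true_pairs, shuffled_pairs, limit_pct):
    """Sensitivity/specificity of a relative-difference delta check:
    shuffled (misidentified) pairs should flag, true pairs should not."""
    def flag(a, b):
        return abs(b - a) / a * 100.0 > limit_pct
    tp = sum(flag(a, b) for a, b in shuffled_pairs)  # correctly flagged
    fp = sum(flag(a, b) for a, b in true_pairs)      # falsely flagged
    sensitivity = tp / len(shuffled_pairs)
    specificity = 1 - fp / len(true_pairs)
    return sensitivity, specificity
```

Sweeping `limit_pct` over a range and keeping the limit with the highest accuracy mirrors the optimization described in the abstract.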
Publisher: Walter de Gruyter GmbH
Date: 05-03-2021
Abstract: Reference intervals depend on the distribution of results within a reference population and can be influenced by subclinical disease. Functional reference limits present an opportunity to derive clinically relevant reference limits from routinely collected data sources, which consist of mixed populations of unhealthy and healthy groups. Serum ferritin is a good example of the utility of functional reference limits. Several studies have identified clinically relevant reference limits through examining the relationship between serum ferritin and erythrocyte parameters. These ferritin functional limits often represent the inflection point at which erythrocyte parameters change significantly. Comparison of ferritin functional reference limits with those based on population distributional reference limits reveals that the lower reference limit may fall below the point at which patients become clinically unwell. Functional reference limits may be considered for any biomarker that exhibits a correlated relationship with other biomarkers.
Publisher: Walter de Gruyter GmbH
Date: 13-10-2023
Publisher: Annals of Laboratory Medicine
Date: 09-2022
Publisher: Springer Science and Business Media LLC
Date: 12-01-2021
DOI: 10.1186/S12884-020-03481-Y
Abstract: Severe early onset (less than 34 weeks gestation) intrahepatic cholestasis of pregnancy (ICP) affects 0.1% of pregnant women in Australia and is associated with a 3-fold increased risk of stillbirth, fetal hypoxia and compromise, and spontaneous preterm birth, as well as increased frequencies of pre-eclampsia and gestational diabetes. ICP is often familial and overlaps with other cholestatic disorders. Treatment options for ICP are not well established, although there are limited data to support the use of ursodeoxycholic acid (UDCA) to relieve pruritus, the main symptom. Rifampicin, a widely used antibiotic, including in pregnant women, is effective in reducing pruritus in non-pregnancy cholestasis and has been used as a supplement to UDCA in severe ICP. Many women with ICP are electively delivered preterm, although there are no randomised data to support this approach. We have initiated an international multicentre randomised clinical trial to compare the clinical efficacy of rifampicin tablets (300 mg bd) with that of UDCA tablets (up to 2000 mg daily) in reducing pruritus in women with ICP, using visual pruritus scores as a measuring tool. Our study will be the first to examine the outcomes of treatment specifically in the severe early onset form of ICP, comparing “standard” UDCA therapy with rifampicin, and so will be able to provide for the first time high-quality evidence for the use of rifampicin in severe ICP. It will also allow an assessment of the feasibility of a future trial to test whether elective early delivery in severe ICP is beneficial. Australian New Zealand Clinical Trials Registration Number (ANZCTR): 12618000332224p (29/08/2018). HREC No: HREC/18/WCHN/36. EudraCT number: 2018–004011-44. IRAS: 272398. NHMRC registration: APP1152418 and APP117853.
Publisher: Wiley
Date: 06-08-2019
DOI: 10.1111/BJH.16149
Publisher: Elsevier BV
Date: 12-2021
DOI: 10.1016/J.CLINBIOCHEM.2021.09.007
Abstract: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules or 'Westgard rules'. These include the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for the detection of systematic errors. In this proof-of-concept study, we directly compare the performance of three moving average algorithms with Westgard rules for the detection of systematic error. In this simulation study, 'error-free' IQC data (control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA) and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Following this, systematic errors were introduced to the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed). From the power function graphs, in comparison to Westgard rules, all three moving average algorithms showed a better probability of error detection. Additionally, they also had lower ANEed compared to Westgard rules. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (i.e. WMA and EWMA). Application of an SMA algorithm to IQC data improves systematic error detection compared to Westgard rules. Application of SMA algorithms can simplify laboratories' IQC strategy.
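Of the three algorithms compared, the exponentially weighted moving average is representative and easy to sketch. The smoothing constant `lam` and the ±3SD limit below are illustrative settings, not necessarily those used in the study:

```python
def ewma(values, lam=0.2, start=None):
    """Exponentially weighted moving average of serial IQC results."""
    s = values[0] if start is None else start
    out = []
    for v in values:
        s = lam * v + (1 - lam) * s
        out.append(s)
    return out

def ewma_flags(values, mean, sd, lam=0.2, k=3.0):
    """Flag points where the EWMA drifts beyond mean ± k*SD,
    signalling a sustained systematic error rather than random noise."""
    return [abs(s - mean) > k * sd for s in ewma(values, lam, start=mean)]
```

Because each point carries only a fraction of the new observation, a sustained shift accumulates in the average and eventually breaches the limit, while isolated random errors are damped.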
Publisher: Walter de Gruyter GmbH
Date: 16-11-2019
Abstract: The delta check time interval limit is the maximum time window within which two sequential results of a patient will be evaluated by the delta check rule. The impact of the time interval on delta check performance is not well studied. De-identified historical laboratory data were extracted from the laboratory information system and divided into children (≤18 years) and adults (>18 years). The relative and absolute differences of the original pair of results from each patient were compared against the delta check limits associated with 90% specificity. The data were then randomly reshuffled to simulate a switched (misidentified) sample scenario. The data were divided into 1-day, 3-day, 7-day, 14-day, 1-month, 3-month, 6-month and 1-year time interval bins. The true-positive and false-positive rates at different intervals were examined. Overall, 24 biochemical and 20 haematological tests were analysed. For nearly all the analytes, there was no statistical evidence of any difference in the true- or false-positive rates of the delta check rules at different time intervals when compared to the overall data. The only exceptions to this were mean corpuscular volume (using both relative- and absolute-difference delta checks) and mean corpuscular haemoglobin (only the absolute-difference delta check) in the children population, where the false-positive rates became significantly lower at the 1-year interval. This study showed that there is no optimal delta check time interval. This fills an important evidence gap for future guidance development.
Publisher: Oxford University Press (OUP)
Date: 21-02-2018
DOI: 10.1093/AJCP/AQX165
Abstract: There is currently a lack of an outcomes-based definition of critical values for the pediatric population. This has contributed to a highly heterogeneous critical value reporting practice between laboratories. Anonymized results were extracted from a laboratory information system for 10 biochemistry tests. The probability of high-dependency/intensive care unit admission (as a proxy for adverse outcomes) for each individual laboratory concentration was calculated and adjusted to fit using a polynomial function to model the probability trend. The laboratory value that intersected the 90% probability trend line was considered the critical value threshold. The critical value thresholds for the serum analytes were sodium (mmol/L: 148), potassium (mmol/L: 6.4), bicarbonate (mmol/L: 37), chloride (mmol/L: 115), urea (mmol/L: >12), creatinine (μmol/L: >129), glucose (mmol/L: >17.2), total calcium (mmol/L: <1.9), magnesium (mmol/L: 1.2), and phosphate (mmol/L: 2.6). This study described an approach to derive contemporary pediatric critical value thresholds.
Publisher: Informa UK Limited
Date: 14-10-2023
Publisher: Annals of Laboratory Medicine
Date: 21-04-2023
Publisher: Walter de Gruyter GmbH
Date: 16-01-2023
Location: Australia
No related grants have been discovered for Corey Markus.