ORCID Profile
0000-0002-6581-094X
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Central Nervous System | Pattern Recognition and Data Mining | Medical Devices | Artificial Intelligence and Image Processing
Expanding Knowledge in the Information and Computing Sciences | Health and Support Services not elsewhere classified
Publisher: JMIR Publications Inc.
Date: 12-12-2021
Publisher: Oxford University Press (OUP)
Date: 30-06-2014
Publisher: Oxford University Press (OUP)
Date: 10-02-2014
Publisher: JMIR Publications Inc.
Date: 13-03-2023
DOI: 10.2196/35568
Abstract: Assessment of the quality of medical evidence available on the web is a critical step in the preparation of systematic reviews. Existing tools that automate parts of this task validate the quality of individual studies but not of entire bodies of evidence and focus on a restricted set of quality criteria. We proposed a quality assessment task that provides an overall quality rating for each body of evidence (BoE), as well as finer-grained justification for different quality criteria according to the Grading of Recommendation, Assessment, Development, and Evaluation formalization framework. For this purpose, we constructed a new data set and developed a machine learning baseline system (EvidenceGRADEr). We algorithmically extracted quality-related data from all summaries of findings found in the Cochrane Database of Systematic Reviews. Each BoE was defined by a set of population, intervention, comparison, and outcome criteria and assigned a quality grade (high, moderate, low, or very low) together with quality criteria (justification) that influenced that decision. Different statistical data, metadata about the review, and parts of the review text were extracted as support for grading each BoE. After pruning the resulting data set with various quality checks, we used it to train several neural-model variants. The predictions were compared against the labels originally assigned by the authors of the systematic reviews. Our quality assessment data set, Cochrane Database of Systematic Reviews Quality of Evidence, contains 13,440 instances, or BoEs labeled for quality, originating from 2252 systematic reviews published on the internet from 2002 to 2020. On the basis of a 10-fold cross-validation, the best neural binary classifiers for quality criteria detected risk of bias at 0.78 F1 (P=.68, R=0.92) and imprecision at 0.75 F1 (P=.66, R=0.86), while the performance on inconsistency, indirectness, and publication bias criteria was lower (F1 in the range of 0.3-0.4).
The prediction of the overall quality grade into 1 of the 4 levels resulted in 0.5 F1. When casting the task as a binary problem by merging the Grading of Recommendation, Assessment, Development, and Evaluation classes (high+moderate vs low+very low-quality evidence), we attained 0.74 F1. We also found that the results varied depending on the supporting information provided as input to the models. Different factors affect the quality of evidence in the context of systematic reviews of medical evidence. Some of these (risk of bias and imprecision) can be automated with reasonable accuracy. Other quality dimensions such as indirectness, inconsistency, and publication bias prove more challenging for machine learning, largely because they are much rarer. This technology could substantially reduce reviewer workload in the future and expedite quality assessment as part of evidence synthesis.
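The precision and recall figures quoted above determine the reported F1 scores, since F1 is the harmonic mean of precision and recall. A minimal illustrative check (not code from the paper):

```python
# F1 as the harmonic mean of precision (P) and recall (R).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Reproducing the figures reported in the abstract.
print(round(f1(0.68, 0.92), 2))  # risk of bias: 0.78
print(round(f1(0.66, 0.86), 2))  # imprecision: 0.75
```

Note how a high recall (0.92) offsets a more modest precision (0.68) to yield the 0.78 risk-of-bias score.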
Publisher: Springer International Publishing
Date: 2021
Publisher: Cold Spring Harbor Laboratory
Date: 06-07-2018
DOI: 10.1101/363473
Abstract: As the cost of DNA sequencing continues to fall, an increasing amount of information on human genetic variation is being produced that could help progress precision medicine. However, information about such mutations is typically first made available in the scientific literature, and is then later manually curated into more standardized genomic databases. This curation process is expensive and time-consuming, and many variants do not end up being fully curated, if at all. Detecting mutations in the literature is the first key step towards automating this process. However, most of the current methods have focused on identifying mutations that follow existing nomenclatures. In this work, we show that a large number of mutations are missed by this standard approach. Furthermore, we implement the first mutation annotator to cover an extended mutation landscape, and we show that its F1 performance matches that of human annotation (F1 78.29 for manual annotation vs F1 79.56 for automatic annotation).
Publisher: PeerJ
Date: 23-10-2014
DOI: 10.7717/PEERJ.639
Publisher: F1000 Research Ltd
Date: 10-06-2014
DOI: 10.12688/F1000RESEARCH.3-18.V2
Abstract: As the cost of genomic sequencing continues to fall, the amount of data being collected and studied for the purpose of understanding the genetic basis of disease is increasing dramatically. Much of the source information relevant to such efforts is available only from unstructured sources such as the scientific literature, and significant resources are expended in manually curating and structuring the information in the literature. As such, there have been a number of systems developed to target automatic extraction of mutations and other genetic variation from the literature using text mining tools. We have performed a broad survey of the existing publicly available tools for extraction of genetic variants from the scientific literature. We consider not just one tool but a number of different tools, individually and in combination, and apply the tools in two scenarios. First, they are compared in an intrinsic evaluation context, where the tools are tested for their ability to identify specific mentions of genetic variants in a corpus of manually annotated papers, the Variome corpus. Second, they are compared in an extrinsic evaluation context based on our previous study of text mining support for curation of the COSMIC and InSiGHT databases. Our results demonstrate that no single tool covers the full range of genetic variants mentioned in the literature. Rather, several tools have complementary coverage and can be used together effectively. In the intrinsic evaluation on the Variome corpus, the combined performance is above 0.95 in F-measure, while in the extrinsic evaluation the combined recall performance is above 0.71 for COSMIC and above 0.62 for InSiGHT, a substantial improvement over the performance of any individual tool. Based on the analysis of these results, we suggest several directions for the improvement of text mining tools for genetic variant extraction from the literature.
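The central finding above, that tools with complementary coverage can be combined effectively, can be sketched as a set union over extracted variant mentions. The tool outputs and gold annotations below are hypothetical, invented for illustration, not data from the Variome corpus:

```python
# Each mention is a (variant text, document offset) pair; values are made up.
tool_a = {("p.V600E", 12), ("c.35G>A", 40)}
tool_b = {("c.35G>A", 40), ("delta F508", 88)}
gold = {("p.V600E", 12), ("c.35G>A", 40), ("delta F508", 88)}

def recall(found: set, gold: set) -> float:
    # Fraction of gold-standard mentions recovered.
    return len(found & gold) / len(gold)

print(round(recall(tool_a, gold), 2))           # one tool alone: 0.67
print(round(recall(tool_a | tool_b, gold), 2))  # union of both tools: 1.0
```

Taking the union can only raise recall; the trade-off, not shown here, is that each added tool may also contribute false positives and lower precision.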
Publisher: Oxford University Press (OUP)
Date: 12-04-2013
Publisher: Springer International Publishing
Date: 2020
Publisher: Elsevier BV
Date: 09-2018
DOI: 10.1016/J.CLCC.2018.05.008
Abstract: Multiple studies have defined the prognostic and potential predictive significance of the primary tumor side in metastatic colorectal cancer (CRC). However, the currently available data for early-stage disease are limited and inconsistent. We explored the clinicopathologic, treatment, and outcome data from a multisite Australian CRC registry from 2003 to 2016. Tumors at and distal to the splenic flexure were considered a left primary (LP). For the 6547 patients identified, the median age at diagnosis was 69 years, 55% were men, and most (63%) had a LP. Comparing the outcomes for right primary (RP) versus LP, time-to-recurrence was similar for stage I and III disease, but longer for those with a stage II RP (hazard ratio [HR], 0.68; 95% confidence interval [CI], 0.52-0.90; P < .01). Adjuvant chemotherapy provided a consistent benefit in stage III disease, regardless of the tumor side. Overall survival (OS) was similar for those with stage I and II disease between LP and RP patients; however, those with stage III RP disease had poorer OS (HR, 1.30; 95% CI, 1.04-1.62; P < .05) and cancer-specific survival (HR, 1.55; 95% CI, 1.19-2.03; P < .01). Patients with stage IV RP, whether de novo metastatic (HR, 1.15; 95% CI, 0.95-1.39) or relapsed post-early-stage disease (HR, 1.35; 95% CI, 1.11-1.65; P < .01), had poorer OS. In early-stage CRC, the association of tumor side and effect on the time-to-recurrence and OS varies by stage. In stage III patients with an RP, poorer OS and cancer-specific survival outcomes are, in part, driven by inferior survival after recurrence, and tumor side did not influence adjuvant chemotherapy benefit.
Publisher: SAGE Publications
Date: 2021
DOI: 10.1177/23312165211066174
Abstract: While cochlear implants have helped hundreds of thousands of individuals, it remains difficult to predict the extent to which an individual's hearing will benefit from implantation. Several publications indicate that machine learning may improve predictive accuracy of cochlear implant outcomes compared to classical statistical methods. However, existing studies are limited in terms of model validation and evaluating factors like sample size on predictive performance. We conduct a thorough examination of machine learning approaches to predict word recognition scores (WRS) measured approximately 12 months after implantation in adults with post-lingual hearing loss. This is the largest retrospective study of cochlear implant outcomes to date, evaluating 2,489 cochlear implant recipients from three clinics. We demonstrate that while machine learning models significantly outperform linear models in prediction of WRS, their overall accuracy remains limited (mean absolute error: 17.9-21.8). The models are robust across clinical cohorts, with predictive error increasing by at most 16% when evaluated on a clinic excluded from the training set. We show that predictive performance is unlikely to be improved by increasing sample size alone, with a doubling of sample size estimated to increase performance by only 3% on the combined dataset. Finally, we demonstrate how the current models could support clinical decision making, highlighting that subsets of individuals can be identified who have a 94% chance of improving WRS by at least 10 percentage points after implantation, which is likely to be clinically meaningful. We discuss several implications of this analysis, focusing on the need to improve and standardize data collection.
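The accuracy figure quoted above is a mean absolute error (MAE) over predicted versus observed word recognition scores. A minimal sketch of the metric itself; the WRS values below are invented for illustration, not data from the study:

```python
# MAE: average of the absolute differences between predictions and observations.
def mean_absolute_error(predicted, observed) -> float:
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Hypothetical WRS values (percent correct) for three recipients.
pred = [62.0, 45.0, 80.0]
obs = [58.0, 60.0, 74.0]
print(round(mean_absolute_error(pred, obs), 1))  # 8.3
```

On this scale, the study's reported MAE of 17.9-21.8 means predictions miss the observed score by roughly 18-22 percentage points on average, which motivates the authors' caution about clinical use.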
Publisher: SAGE Publications
Date: 2021
DOI: 10.1177/23312165211037525
Abstract: While the majority of cochlear implant recipients benefit from the device, it remains difficult to estimate the degree of benefit for a specific patient prior to implantation. Using data from 2,735 cochlear-implant recipients from across three clinics, the largest retrospective study of cochlear-implant outcomes to date, we investigate the association between 21 preoperative factors and speech recognition approximately one year after implantation and explore the consistency of their effects across the three constituent datasets. We provide evidence of 17 statistically significant associations, in either univariate or multivariate analysis, including confirmation of associations for several predictive factors, which have only been examined in prior smaller studies. Despite the large sample size, a multivariate analysis shows that the variance explained by our models remains modest across the datasets ([Formula: see text]–0.21). Finally, we report a novel statistical interaction indicating that the duration of deafness in the implanted ear has a stronger impact on hearing outcome when considered relative to a candidate's age. Our multicenter study highlights several real-world complexities that impact the clinical translation of predictive factors for cochlear implantation outcome. We suggest several directions to overcome these challenges and further improve our ability to model patient outcomes with increased accuracy.
No related organisations have been discovered for Antonio José Jimeno Yepes.
Start Date: 06-2018
End Date: 12-2024
Amount: $4,133,659.00
Funder: Australian Research Council
View Funded Activity