ORCID Profile
0000-0002-5571-6220
Current Organisation
University of Surrey
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Artificial Intelligence and Image Processing | Computer Vision | Pattern Recognition and Data Mining | Adaptive Agents and Intelligent Robotics | Analytical Chemistry | Sensor Technology (Chemical aspects) | Immunological and Bioassay Methods | Biomedical Engineering not elsewhere classified | Nanobiotechnology | Image Processing
Expanding Knowledge in the Information and Computing Sciences | Expanding Knowledge in Engineering | Computer Software and Services not elsewhere classified | Plant Production and Plant Primary Products not elsewhere classified | Expanding Knowledge in the Biological Sciences | Manufacturing not elsewhere classified | Expanding Knowledge in Technology
Publisher: arXiv
Date: 2022
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer International Publishing
Date: 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2017
Publisher: Springer International Publishing
Date: 2017
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 09-2017
Publisher: IEEE
Date: 2023
Publisher: Elsevier BV
Date: 02-2020
DOI: 10.1016/J.ULTRASMEDBIO.2019.10.027
Abstract: Ultrasound guidance is not in widespread use in prostate cancer radiotherapy workflows. This can be partially attributed to the need for image interpretation by a trained operator during ultrasound image acquisition. In this work, a one-class regressor, based on DenseNet and Gaussian processes, was implemented to automatically assess the quality of transperineal ultrasound images of the male pelvic region. The implemented deep learning approach was tested on 300 transperineal ultrasound images and it achieved a scoring accuracy of 94%, a specificity of 95% and a sensitivity of 92% with respect to the majority vote of 3 experts, which was comparable with the results of these experts. This is the first step toward a fully automatic workflow, which could potentially remove the need for ultrasound image interpretation and make real-time volumetric organ tracking in the radiotherapy environment using ultrasound more appealing.
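The evaluation in the abstract above scores the network against the majority vote of three experts. A minimal sketch of that protocol, with invented votes and predictions rather than the study's data, could look like this:

```python
# Hypothetical sketch: scoring a binary image-quality classifier against
# the majority vote of three expert raters. Votes/predictions are invented.

def majority_vote(votes):
    """Return 1 if at least two of the three raters marked the image usable."""
    return 1 if sum(votes) >= 2 else 0

def binary_metrics(preds, labels):
    """Accuracy, sensitivity (recall on positives), specificity."""
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    tn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 0)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

expert_votes = [(1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
labels = [majority_vote(v) for v in expert_votes]   # -> [1, 0, 1, 0]
preds = [1, 0, 1, 1]                                # model decisions
acc, sens, spec = binary_metrics(preds, labels)
```

The same three numbers (accuracy, sensitivity, specificity) are what the paper reports as 94%, 92% and 95% over its 300 test images.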
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Elsevier
Date: 2022
Publisher: Elsevier BV
Date: 02-2020
DOI: 10.1016/J.MEDIA.2019.101631
Abstract: The tracking of the knee femoral condyle cartilage during ultrasound-guided minimally invasive procedures is important to avoid damaging this structure during such interventions. In this study, we propose a new deep learning method to track, accurately and efficiently, the femoral condyle cartilage in ultrasound sequences, which were acquired under several clinical conditions mimicking realistic surgical setups. Our solution, which we name Siam-U-Net, requires minimal user initialization and combines a deep learning segmentation method with a siamese framework for tracking the cartilage in temporal and spatio-temporal sequences of 2D ultrasound images. Through extensive performance validation based on the Dice Similarity Coefficient, we demonstrate that our algorithm is able to track the femoral condyle cartilage with an accuracy comparable to that of experienced surgeons. It is additionally shown that the proposed method outperforms state-of-the-art segmentation models and trackers in the localization of the cartilage. We claim that the proposed solution has the potential for ultrasound guidance in minimally invasive knee procedures.
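The Dice Similarity Coefficient used for validation above is 2|A ∩ B| / (|A| + |B|) over two binary masks. A minimal sketch (the masks are illustrative, not the paper's data):

```python
# Illustrative Dice Similarity Coefficient (DSC) between two binary masks.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[0, 1, 1], [0, 1, 0]])   # predicted cartilage pixels
truth = np.array([[0, 1, 0], [0, 1, 1]])  # annotated cartilage pixels
score = dice_coefficient(pred, truth)     # 2*2 / (3+3) ≈ 0.667
```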
Publisher: Springer International Publishing
Date: 2021
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Elsevier BV
Date: 05-2020
Publisher: Springer Berlin Heidelberg
Date: 2013
DOI: 10.1007/978-3-642-40811-3_57
Abstract: In this paper we describe an algorithm for accurately segmenting the individual cytoplasm and nuclei from a clump of overlapping cervical cells. Current methods cannot undertake such a complete segmentation due to the challenges involved in delineating cells with severe overlap and poor contrast. Our approach initially performs a scene segmentation to highlight free-lying cells, cell clumps and their nuclei. Then cell segmentation is performed using a joint level set optimisation on all detected nuclei and cytoplasm pairs. This optimisation is constrained by the length and area of each cell, a prior on cell shape, the amount of cell overlap and the expected gray values within the overlapping regions. We present quantitative nuclei detection and cell segmentation results on a database of synthetically overlapped cell images constructed from real images of free-lying cervical cells. We also perform a qualitative assessment of complete fields of view containing multiple cells and cell clumps.
Publisher: IEEE
Date: 10-2018
Publisher: Springer Berlin Heidelberg
Date: 2009
DOI: 10.1007/978-3-642-04271-3_70
Abstract: We present a novel method for the automatic detection and segmentation of (sub-)cortical gray matter structures in 3-D magnetic resonance images of the human brain. Essentially, the method is a top-down segmentation approach based on the recently introduced concept of Marginal Space Learning (MSL). We show that MSL naturally decomposes the parameter space of anatomy shapes along decreasing levels of geometrical abstraction into subspaces of increasing dimensionality by exploiting parameter invariance. At each level of abstraction, i.e., in each subspace, we build strong discriminative models from annotated training data, and use these models to narrow the range of possible solutions until a final shape can be inferred. Contextual information is introduced into the system by representing candidate shape parameters with high-dimensional vectors of 3-D generalized Haar features and steerable features derived from the observed volume intensities. Our system allows us to detect and segment 8 (sub-)cortical gray matter structures in T1-weighted 3-D MR brain scans from a variety of different scanners in 13.9 seconds on average, which is faster than most of the approaches in the literature. In order to ensure comparability of the achieved results and to validate robustness, we evaluate our method on two publicly available gold standard databases consisting of several T1-weighted 3-D brain MR scans from different scanners and sites. The proposed method achieves an accuracy better than most state-of-the-art approaches using standardized distance and overlap metrics.
Publisher: Informa UK Limited
Date: 05-04-2018
Publisher: Springer Science and Business Media LLC
Date: 18-01-2021
DOI: 10.1038/S41598-020-80441-8
Abstract: The increased diversity and scale of published biological data have led to a growing appreciation for the applications of machine learning and statistical methodologies to gain new insights. Key to achieving this aim is solving the Relationship Extraction problem, which specifies the semantic interaction between two or more biological entities in a published study. Here, we employed two deep neural network natural language processing (NLP) methods, namely: the continuous bag of words (CBOW), and the bi-directional long short-term memory (bi-LSTM). These methods were employed to predict relations between entities that describe protein subcellular localisation in plants. We applied our system to 1700 published Arabidopsis protein subcellular studies from the SUBA manually curated dataset. The system combines pre-processing of full-text articles in a machine-readable format with relevant sentence extraction for downstream NLP analysis. Using the SUBA corpus, the neural network classifier predicted interactions between protein name, subcellular localisation and experimental methodology with average precision, recall, accuracy and F1 scores of 95.1%, 82.8%, 89.3% and 88.4%, respectively (n = 30). Comparable scoring metrics were obtained using the CropPAL database as an independent testing dataset that stores protein subcellular localisation in crop species, demonstrating the wide applicability of the prediction model. We provide a framework for extracting protein functional features from unstructured text in the literature with high accuracy, improving data dissemination and unlocking the potential of big data text analytics for generating new hypotheses.
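The abstract above reports precision, recall, accuracy and F1 for the relation classifier. These follow directly from the confusion-matrix counts; a sketch with invented counts:

```python
# Hedged sketch of the reported evaluation metrics for a binary
# relation-extraction classifier. The counts below are illustrative only.

def prf_metrics(tp, fp, tn, fn):
    """Precision, recall, accuracy and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

p, r, a, f1 = prf_metrics(tp=80, fp=5, tn=90, fn=25)
```

F1 is the harmonic mean of precision and recall, so it penalizes a classifier that trades one for the other, which is why the paper reports all four numbers together.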
Publisher: IEEE
Date: 11-2013
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 04-2017
Publisher: IEEE
Date: 12-2015
Publisher: IEEE
Date: 06-2022
Publisher: Elsevier BV
Date: 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: IEEE
Date: 09-2013
Publisher: IEEE
Date: 06-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2013
Publisher: IEEE
Date: 04-2010
Publisher: IEEE
Date: 13-04-2021
Publisher: IEEE
Date: 10-2021
Publisher: Elsevier BV
Date: 05-2022
Publisher: Springer Berlin Heidelberg
Date: 2008
DOI: 10.1007/978-3-540-85988-8_9
Abstract: In this paper we present a fully automated approach to the segmentation of pediatric brain tumors in multi-spectral 3-D magnetic resonance images. It is a top-down segmentation approach based on a Markov random field (MRF) model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts. The PBT algorithm provides a strong discriminative observation model that classifies tumor appearance while a spatial prior takes into account the pair-wise homogeneity in terms of classification labels and multi-spectral voxel intensities. The discriminative model relies not only on observed local intensities but also on surrounding context for detecting candidate regions for pathology. A mathematically sound formulation for integrating the two approaches into a unified statistical framework is given. The proposed method is applied to the challenging task of detection and delineation of pediatric brain tumors. This segmentation task is characterized by a high non-uniformity of both the pathology and the surrounding non-pathologic brain tissue. A quantitative evaluation illustrates the robustness of the proposed method. Despite dealing with more complicated cases of pediatric brain tumors, the results obtained are mostly better than those reported for current state-of-the-art approaches to 3-D MR brain tumor segmentation in adult patients. The entire processing of one multi-spectral data set does not require any user interaction, and takes less time than previously proposed methods.
Publisher: IEEE
Date: 03-2016
Publisher: Elsevier BV
Date: 10-2020
Publisher: IEEE
Date: 2002
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: IEEE
Date: 09-2015
Publisher: IEEE
Date: 09-2013
Publisher: arXiv
Date: 2022
Publisher: Elsevier BV
Date: 02-2020
DOI: 10.1016/J.ULTRASMEDBIO.2019.10.015
Abstract: Knee arthroscopy is a minimally invasive surgery used in the treatment of intra-articular knee pathology; it may cause unintended damage to femoral cartilage. An ultrasound (US)-guided autonomous robotic platform for knee arthroscopy can be envisioned to minimise these risks and possibly to improve surgical outcomes. The first necessary tool for reliable guidance during robotic surgeries is an automatic segmentation algorithm to outline the regions at risk. In this work, we studied the feasibility of using a state-of-the-art deep neural network (UNet) to automatically segment femoral cartilage imaged with dynamic volumetric US (at the refresh rate of 1 Hz), under simulated surgical conditions. Six volunteers were scanned, which resulted in the extraction of 18278 2-D US images from 35 dynamic 3-D US scans, and these were manually labelled. The UNet was evaluated using a five-fold cross-validation with an average of 15531 training and 3124 testing labelled images per fold. An intra-observer study was performed to assess intra-observer variability due to inherent US physical properties. To account for this variability, a novel metric concept named Dice coefficient with boundary uncertainty (DSC
Publisher: Springer International Publishing
Date: 2016
Publisher: IEEE
Date: 2006
Publisher: IEEE
Date: 06-2014
Publisher: IEEE
Date: 06-2010
Publisher: Springer International Publishing
Date: 2016
Publisher: Elsevier BV
Date: 12-2023
Publisher: Cold Spring Harbor Laboratory
Date: 28-11-2022
DOI: 10.1101/2022.11.23.22282646
Abstract: Artificial intelligence (AI) readers, derived from applying deep learning models to medical image analysis, hold great promise for improving population breast cancer screening. However, previous evaluations of AI readers for breast cancer screening have mostly been conducted using cancer-enriched cohorts and have lacked assessment of the potential use of AI readers alongside radiologists in multi-reader screening programs. Here, we present a new AI reader for detecting breast cancer from mammograms in a large-scale population screening setting, and a novel analysis of the potential for human-AI reader collaboration in a well-established, high-performing population screening program. We evaluated the performance of our AI reader and AI-integrated screening scenarios using a two-year, real-world, population dataset from Victoria, Australia, a screening program in which two radiologists independently assess each episode and disagreements are arbitrated by a third radiologist. We used a retrospective full-field digital mammography image and non-image dataset comprising 808,318 episodes, 577,576 clients and 3,404,326 images in the period 2013 to 2019. Screening episodes from 2016, 2017 and 2018 were sequential population cohorts containing 752,609 episodes, 565,087 clients and 3,169,322 images. The dataset was split by screening date into training, development, and testing sets. All episodes from 2017 and 2018 were allocated to the testing set (509,109 episodes, 3,651 screen-detected cancer episodes). Eight distinct AI models were trained on subsets of the training set (which included a validation set) and combined into our ensemble AI reader. Operating points were selected using the development set. We evaluated our AI reader on our testing set and on external datasets previously unseen by our models.
The AI reader outperformed the mean individual radiologist on this large retrospective testing dataset with an area under the receiver operating characteristic curve of 0.92 (95% CI 0.91, 0.92). The AI reader generalised well across screening round, client demographics, device manufacturer and cancer type, and achieved state-of-the-art performance on external datasets compared to recently published AI readers. Our simulations of AI-integrated screening scenarios demonstrated that a reader-replacement human-AI collaborative system could achieve better sensitivity and specificity (82.6%, 96.1%) compared to the current two-reader consensus system (79.9%, 96.0%), with reduced human reading workload and cost. Our band-pass AI-integrated scenario also enabled both higher sensitivity and specificity (80.6%, 96.2%) with larger reductions in human reading workload and cost. This study demonstrated that human-AI collaboration in a population breast cancer screening program has the potential to improve accuracy and lower radiologist workload and costs in real-world screening programs. The next stage of validation is to undertake prospective studies that can also assess the effects of the AI systems on human performance and behaviour.
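The AUC-ROC quoted above has a simple rank interpretation: the probability that a randomly chosen cancer episode receives a higher model score than a randomly chosen normal episode, with ties counted half. A brute-force sketch on invented scores:

```python
# Illustrative AUC-ROC as the pairwise rank statistic (Mann-Whitney form).
# Scores and labels are invented; 1 = cancer episode, 0 = normal episode.

def auc_roc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0   # cancer episode ranked above normal one
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
auc = auc_roc(scores, labels)  # 5 of 6 positive/negative pairs ordered correctly
```

The quadratic pairwise loop is fine for a sketch; production code would sort once and use ranks.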
Publisher: IEEE
Date: 12-2019
Publisher: Elsevier BV
Date: 02-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2015
Publisher: IEEE
Date: 04-2019
Publisher: SPIE
Date: 10-02-2011
DOI: 10.1117/12.872026
Publisher: Elsevier BV
Date: 12-2019
DOI: 10.1016/J.MEDIA.2019.101562
Abstract: We propose a new method for breast cancer screening from DCE-MRI based on a post-hoc approach that is trained using weakly annotated data (i.e., labels are available only at the image level without any lesion delineation). Our proposed post-hoc method automatically diagnoses the whole volume and, for positive cases, localizes the malignant lesions that led to such diagnosis. Conversely, traditional approaches follow a pre-hoc approach that initially localises suspicious areas that are subsequently classified to establish the breast malignancy - this approach is trained using strongly annotated data (i.e., it needs a delineation and classification of all lesions in an image). We also aim to establish the advantages and disadvantages of both approaches when applied to breast screening from DCE-MRI. Relying on experiments on a breast DCE-MRI dataset that contains scans of 117 patients, our results show that the post-hoc method is more accurate for diagnosing the whole volume per patient, achieving an AUC of 0.91, while the pre-hoc method achieves an AUC of 0.81. However, the performance for localising the malignant lesions remains challenging for the post-hoc method due to the weakly labelled dataset employed during training.
Publisher: IEEE
Date: 09-2015
Publisher: IEEE
Date: 08-2010
Publisher: IEEE
Date: 06-2019
Publisher: Springer International Publishing
Date: 2021
Publisher: Frontiers Media SA
Date: 22-06-2022
Abstract: Artificial Intelligence (AI) is rapidly evolving in gastrointestinal (GI) endoscopy. We undertook a systematic review and meta-analysis to assess the performance of AI at detecting early Barrett's neoplasia. We searched the Medline, EMBASE and Cochrane Central Register of Controlled Trials databases from inception to the 28th Jan 2022 to identify studies on the detection of early Barrett's neoplasia using AI. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies – 2 (QUADAS-2) tool. A random-effects model was used to calculate pooled sensitivity, specificity, and diagnostic odds ratio (DOR). Forest plots and summary receiver operating characteristic (SROC) curves displayed the outcomes. Heterogeneity was determined by I², Tau² statistics and p-value. Funnel plots and Deeks' test were used to assess publication bias. Twelve studies comprising 1,361 patients (utilizing 532,328 images on which the various AI models were trained) were included. The SROC was 0.94 (95% CI: 0.92–0.96). Pooled sensitivity, specificity and diagnostic odds ratio were 90.3% (95% CI: 87.1–92.7%), 84.4% (95% CI: 80.2–87.9%) and 48.1 (95% CI: 28.4–81.5), respectively. Subgroup analysis of AI models trained only on white light endoscopy was similar, with pooled sensitivity and specificity of 91.2% (95% CI: 85.7–94.7%) and 85.1% (95% CI: 81.6%−88.1%), respectively. AI is highly accurate at detecting early Barrett's neoplasia and validated for patients with at least high-grade dysplasia and above. Further well-designed prospective randomized controlled studies of all histopathological subtypes of early Barrett's neoplasia are needed to confirm these findings.
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/11744078_3
Publisher: IEEE
Date: 04-2019
Publisher: IEEE
Date: 04-2020
Publisher: Springer International Publishing
Date: 2015
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: IEEE
Date: 03-2011
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2014
Publisher: Springer International Publishing
Date: 2021
Publisher: IEEE
Date: 12-2019
Publisher: IEEE
Date: 12-2015
DOI: 10.1109/ICCV.2015.81
Publisher: IEEE
Date: 06-2008
Publisher: CRC Press
Date: 17-11-2016
Publisher: IEEE
Date: 2021
Publisher: Elsevier BV
Date: 10-2011
Publisher: Informa UK Limited
Date: 07-07-2014
DOI: 10.1185/03007995.2014.936932
Abstract: This review of the literature aims to explore two research questions: (1) what is the evidence that patients benefit from sound communication between primary care practitioners (PCPs) and nephrologists, and (2) what information is required in primary care to meet the needs of patients who have attended a renal unit? Fifty-seven citations were independently reviewed by four authors. The inclusion criteria were: (1) the article focused on information flow from nephrologists and/or specialists to general practitioners; (2) it included the involvement of PCPs in nephrology, including registrars and PCPs with special interests or specialists in any medical field; (3) it was published from 1990 onwards (inclusive); and (4) the study was conducted in the United Kingdom, Canada, The Netherlands, Australia, United States or New Zealand. Selected articles were then reviewed by the fifth author as a measure of inter-rater reliability. Eighteen papers in four categories were identified: six audits or observational studies, one meta-analysis, one randomized controlled trial, six qualitative studies and four position statements or quality improvement tools. Published audits involving feedback to clinicians using validated tools demonstrate the scope for substantial improvement in the amount of information relayed to PCPs. Specialists may not prioritize the letter to the PCP, but there is some evidence of a direct impact from limited or inadequate communication on patient outcomes. Only two studies focused on patients attending nephrology clinics. There is some evidence that improving the quality of letters from specialists to PCPs may benefit patient care. This review suggests a need for research on communication from nephrologists about patients who have received care at a renal unit, regardless of whether or not the patient continues to attend.
Publisher: IEEE
Date: 06-2012
Publisher: Springer International Publishing
Date: 2021
Publisher: IEEE
Date: 09-2012
Publisher: Informa UK Limited
Date: 06-09-2013
DOI: 10.1185/03007995.2013.838154
Abstract: Primary health services are well placed to reinforce prevention, early intervention, and connected care. Despite this important role, primary care providers (PCPs) have a limited capacity to meet the varied needs of people with cancer and their carers; furthermore, the reasons for this largely remain unexplored. This review aimed to identify: (1) the knowledge, attitudes, and beliefs held by health professionals and patients that can influence the engagement of PCPs with the early detection of cancer and follow-up care; (2) evidence that attitudes and beliefs can be modified with measurable impact on the engagement of PCPs with cancer care; and (3) potential targets for intervention. This was achieved through a review of English publications from 2000 onwards, sourced from six academic databases and complemented with a search for grey literature. A total of 4212 articles were reviewed to identify studies conducted in the UK, Canada, The Netherlands, Australia, or New Zealand, given the comparable role of PCPs. Several factors hinder PCP participation in cancer care, all of which are related to knowledge, attitudes, and beliefs. Patients and specialists are uncertain about the role that primary care could play and whether their primary care team has the necessary expertise. PCPs have varied opinions about the ideal content of follow-up programs. Study limitations include: the absence of well-accepted definitions of key terms; the indexing systems used by databases to code publications, which may have obscured relevant publications; the paucity of robust research; and possible researcher bias, which was minimized through independent review by trained reviewers and the implementation of rigorous inter-rater reliability measures. Knowledge, attitudes, and beliefs influence PCP engagement in cancer care.
It is important to develop shared understandings of these terms because the knowledge, attitudes, and beliefs of PCPs, specialists, patients, and their families can influence the effectiveness of treatment plans.
Publisher: IEEE
Date: 11-07-2022
Publisher: IEEE
Date: 2007
Publisher: IEEE
Date: 11-2011
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 10-2017
Publisher: EDUFU - Editora da Universidade Federal de Uberlandia
Date: 30-03-2020
Abstract: It is challenging to map the spatial distribution of natural and planted forests based on satellite images because of the high correlation among them. This investigation aims to increase accuracy in the classification of natural forests and eucalyptus plantations by combining remote sensing data from multiple sources. We defined four vegetation classes: natural forest (NF), planted eucalyptus forest (PF), agriculture (A) and pasture (P), and sampled 410,251 pixels from 100 polygons of each class. Classification experiments were performed by using a random forest algorithm with images from Landsat-8, Sentinel-1, and SRTM. We considered four texture features (energy, contrast, correlation, and entropy) and NDVI. We used F1-score, overall accuracy and total disagreement metrics to assess the classification performance, and the Jeffries–Matusita (JM) distance to measure spectral separability. Overall accuracy for Landsat-8 bands alone was 88.29%. A combination of Landsat-8 with Sentinel-1 bands resulted in a 3% increase in overall accuracy, and this band combination also improved the F1-scores of NF, PF, P and A by 2.22%, 2.9%, 3.71%, and 8.01%, respectively. The total disagreement decreased from 11.71% to 8.71%. The increase in statistical separability corroborates this improvement and is mainly observed between NF-PF (11.98%) and A-P (45.12%). We conclude that combining optical and radar remote sensing data increased the classification accuracy of natural and planted forests and may serve as a basis for large-scale semi-automatic mapping of forest resources.
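The NDVI feature used alongside the texture features above is the standard normalized difference of near-infrared and red reflectance, computed per pixel. A sketch with invented reflectance values (not the study's Landsat-8 data):

```python
# Illustrative NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Per-pixel NDVI; eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

nir = np.array([0.6, 0.5, 0.2])   # invented near-infrared reflectances
red = np.array([0.2, 0.1, 0.2])   # invented red reflectances
values = ndvi(nir, red)           # dense vegetation gives high NDVI
```

NDVI ranges over [-1, 1]; healthy vegetation reflects strongly in NIR and absorbs red, pushing the index toward 1, which is why it helps separate forest classes from pasture and agriculture.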
Publisher: Informa UK Limited
Date: 02-09-2020
Publisher: IEEE
Date: 04-2020
Publisher: Elsevier BV
Date: 2023
Publisher: IEEE
Date: 11-2021
Publisher: IEEE
Date: 06-2020
Publisher: Springer International Publishing
Date: 2021
Publisher: Informa UK Limited
Date: 04-05-2019
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: IEEE
Date: 11-2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: ACM
Date: 18-04-2011
Publisher: Elsevier BV
Date: 11-2017
Publisher: IEEE
Date: 06-2013
Publisher: Springer International Publishing
Date: 2021
Publisher: IEEE
Date: 2005
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-2017
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: MDPI AG
Date: 25-07-2021
DOI: 10.3390/APP11156828
Abstract: This work presents an algorithm based on weak supervision to automatically localize an arthroscope on 3D ultrasound (US). The ultimate goal of this application is to combine 3D US with the 2D arthroscope view during knee arthroscopy, to provide the surgeon with a comprehensive view of the surgical site. The implemented algorithm consisted of a weakly supervised neural network, which was trained on 2D US images of different phantoms mimicking the imaging conditions during knee arthroscopy. Image-based classification was performed and the resulting class activation maps were used to localize the arthroscope. The localization performance was evaluated visually by three expert reviewers and by the calculation of objective metrics. Finally, the algorithm was also tested on a human cadaver knee. The algorithm achieved an average classification accuracy of 88.6% on phantom data and 83.3% on cadaver data. The localization of the arthroscope based on the class activation maps was correct in 92–100% of all true positive classifications for both phantom and cadaver data. These results are relevant because they show feasibility of automatic arthroscope localization in 3D US volumes, which is paramount to combining multiple image modalities that are available during knee arthroscopies.
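The weak-supervision idea in the abstract above rests on class activation maps: weight the final convolutional feature maps by the classifier weights for the predicted class and take the peak as the localization. A minimal numpy sketch with invented shapes and values:

```python
# Hedged sketch of class-activation-map (CAM) localization. The feature
# maps, weights and shapes below are invented for illustration only.
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W); class_weights: (C,) -> CAM of shape (H, W)."""
    return np.tensordot(class_weights, feature_maps, axes=1)

def localize(cam):
    """Return (row, col) of the CAM's maximum activation."""
    return np.unravel_index(np.argmax(cam), cam.shape)

fmaps = np.zeros((2, 4, 4))
fmaps[0, 1, 2] = 1.0            # channel 0 activates at (1, 2)
fmaps[1, 3, 0] = 1.0            # channel 1 activates at (3, 0)
weights = np.array([0.9, 0.1])  # predicted class relies mostly on channel 0
cam = class_activation_map(fmaps, weights)
peak = localize(cam)            # -> (1, 2)
```

Because only an image-level class label is needed to train the classifier, the CAM peak gives a localization "for free", which is the essence of the weakly supervised approach described.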
Publisher: IEEE
Date: 11-2017
Publisher: IEEE
Date: 11-2016
Publisher: IEEE
Date: 04-2017
Publisher: Springer Science and Business Media LLC
Date: 26-09-2021
DOI: 10.1186/S12885-021-08773-W
Abstract: Artificial intelligence (AI) is increasingly being used in medical imaging analysis. We aimed to evaluate the diagnostic accuracy of AI models used for detection of lymph node metastasis on pre-operative staging imaging for colorectal cancer. A systematic review was conducted according to PRISMA guidelines using a literature search of PubMed (MEDLINE), EMBASE, IEEE Xplore and the Cochrane Library for studies published from January 2010 to October 2020. Studies reporting on the accuracy of radiomics models and/or deep learning for the detection of lymph node metastasis in colorectal cancer by CT/MRI were included. Conference abstracts and studies reporting accuracy of image segmentation rather than nodal classification were excluded. The quality of the studies was assessed using a modified questionnaire of the QUADAS-2 criteria. Characteristics and diagnostic measures from each study were extracted. Pooling of area under the receiver operating characteristic curve (AUROC) was calculated in a meta-analysis. Seventeen eligible studies were identified for inclusion in the systematic review, of which 12 used radiomics models and five used deep learning models. High risk of bias was found in two studies and there was significant heterogeneity among radiomics papers (73.0%). In rectal cancer, there was a per-patient AUROC of 0.808 (0.739–0.876) and 0.917 (0.882–0.952) for radiomics and deep learning models, respectively. Both models performed better than the radiologists, who had an AUROC of 0.688 (0.603 to 0.772). Similarly, in colorectal cancer, radiomics models with a per-patient AUROC of 0.727 (0.633–0.821) outperformed the radiologist, who had an AUROC of 0.676 (0.627–0.725). AI models have the potential to predict lymph node metastasis more accurately in rectal and colorectal cancer; however, radiomics studies are heterogeneous and deep learning studies are scarce. PROSPERO CRD42020218004.
Publisher: Springer Berlin Heidelberg
Date: 2002
Publisher: Springer International Publishing
Date: 2019
Publisher: IEEE
Date: 04-2017
Publisher: Bioscientifica
Date: 19-10-2021
DOI: 10.1530/RAF-21-0031
Abstract: Pouch of Douglas (POD) obliteration is a severe consequence of inflammation in the pelvis, often seen in patients with endometriosis. The sliding sign is a dynamic transvaginal ultrasound (TVS) test that can diagnose POD obliteration. We aimed to develop a deep learning (DL) model to automatically classify the state of the POD using recorded videos depicting the sliding sign test. Two expert sonologists performed, interpreted, and recorded videos of consecutive patients from September 2018 to April 2020. The sliding sign was classified as positive (i.e. normal) or negative (i.e. abnormal POD obliteration). A DL model based on a temporal residual network was prospectively trained with a dataset of TVS videos. The model was tested on an independent test set, and its diagnostic accuracy, including area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and positive and negative predictive value (PPV/NPV), was compared to the reference standard sonologist classification (positive or negative sliding sign). In a dataset consisting of 749 videos, a positive sliding sign was depicted in 646 (86.2%) videos, whereas 103 (13.8%) videos depicted a negative sliding sign. The dataset was split into training (414 videos), validation (139), and testing (196), maintaining similar positive/negative proportions. When applied to the test dataset using a threshold of 0.9, the model achieved: AUC 96.5% (95% CI: 90.8–100.0%), an accuracy of 88.8% (95% CI: 83.5–92.8%), sensitivity of 88.6% (95% CI: 83.0–92.9%), specificity of 90.0% (95% CI: 68.3–98.8%), a PPV of 98.7% (95% CI: 95.4–99.7%), and an NPV of 47.7% (95% CI: 36.8–58.2%). We have developed an accurate DL model for the prediction of the TVS-based sliding sign classification. Endometriosis is a disease that affects females. It can cause very severe scarring inside the body, especially in the pelvis, in a region called the pouch of Douglas (POD).
An ultrasound test called the 'sliding sign' can diagnose POD scarring. In our study, we taught a computer to interpret the sliding sign and determine whether POD scarring was present. This is a type of artificial intelligence called deep learning (DL). For this purpose, two expert ultrasound specialists recorded 749 videos of the sliding sign. Most of them (646) were normal, while 103 showed POD scarring. For the computer to learn, both normal and abnormal videos were required. After training, the DL model was very accurate (almost nine out of every ten videos were correctly classified). In conclusion, we have developed an artificial intelligence model that interprets ultrasound videos of the sliding sign for POD scarring almost as accurately as ultrasound specialists. We believe this could help increase knowledge of POD scarring in people with endometriosis.
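As a side note on the metrics quoted in the abstract above: accuracy, sensitivity, specificity, PPV and NPV all derive from a single 2×2 confusion matrix. A minimal sketch follows; the counts are invented for illustration (chosen only to roughly echo the reported test-set class balance) and are not the study's data.

```python
# Hedged sketch: standard binary diagnostic metrics from confusion-matrix counts.
# The example counts below are illustrative, not taken from the study.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return the standard binary diagnostic metrics as fractions."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Made-up counts with positives heavily over-represented, as in the test set:
m = diagnostic_metrics(tp=156, fp=2, tn=18, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```

Note how the strong class imbalance lets PPV be high while NPV stays low, exactly the pattern reported in the abstract.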
Publisher: Springer Berlin Heidelberg
Date: 2002
Publisher: IEEE
Date: 09-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: ACM
Date: 22-10-2007
Publisher: Informa UK Limited
Date: 2009
DOI: 10.1080/14767050802415736
Abstract: We compared the performance between sonographers and automated fetal biometry measurements (Auto OB) with respect to the following measurements: biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC) and femur length (FL). The first set of experiments assessed the performance of Auto OB relative to the five sonographers, using 240 images for each user. Each sonographer made measurements in 80 images per anatomy. The second set of experiments compared the performance of Auto OB against the data generated by the five sonographers for inter-observer variability (i.e., sonographers and clinicians), using a set of 10 images per anatomy. Auto OB correlated well with manual measurements for BPD, HC, AC and FL (r > 0.98, p < 0.001 for all measurements). The errors produced by Auto OB were 1.46% for BPD (sigma = 1.74%, where sigma denotes standard deviation), 1.25% for HC (sigma = 1.34%), 3% for AC (sigma = 6.16%) and 3.52% for FL (sigma = 3.72%). In general, these errors represent deviations of less than 3 days for fetuses younger than 30 weeks, and less than 7 days for fetuses between 30 and 40 weeks of age. The measurements produced by Auto OB are comparable to those made by sonographers.
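The percent errors quoted for Auto OB are consistent with the usual relative-error definition, sketched below. The measurement values here are invented for illustration and are not the study's data.

```python
# Hedged sketch: absolute percent error of automatic vs. manual measurements,
# plus the mean and standard deviation (sigma) over a set of cases.
# All numbers below are invented, not from the study.

def percent_error(automatic, manual):
    """Absolute percent error of an automatic measurement vs. the manual one."""
    return abs(automatic - manual) / manual * 100.0

def mean_and_sigma(errors):
    """Mean and population standard deviation of a list of errors."""
    mu = sum(errors) / len(errors)
    sigma = (sum((e - mu) ** 2 for e in errors) / len(errors)) ** 0.5
    return mu, sigma

# Example: hypothetical BPD measurements in millimetres.
auto = [50.2, 61.8, 45.1, 70.5]
manual = [49.8, 62.4, 45.0, 71.2]
errs = [percent_error(a, m) for a, m in zip(auto, manual)]
print(mean_and_sigma(errs))
```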
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: IEEE
Date: 04-2016
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.MEDIA.2016.05.009
Abstract: We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2007
Publisher: IEEE
Date: 28-03-2022
Publisher: Springer International Publishing
Date: 2023
Publisher: Springer Nature Switzerland
Date: 2022
Publisher: SPIE
Date: 10-02-2011
DOI: 10.1117/12.872256
Publisher: Elsevier BV
Date: 2009
Publisher: IEEE
Date: 09-2015
Publisher: IEEE
Date: 10-01-2021
Publisher: IEEE
Date: 03-2020
Publisher: Springer International Publishing
Date: 2017
Publisher: IEEE
Date: 04-2015
Publisher: Elsevier
Date: 2017
Publisher: arXiv
Date: 2022
Publisher: IEEE
Date: 04-2019
Publisher: IEEE
Date: 08-2010
Publisher: SPIE
Date: 10-02-2011
DOI: 10.1117/12.872130
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 11-2013
Publisher: Elsevier BV
Date: 04-2017
DOI: 10.1016/J.MEDIA.2017.01.009
Abstract: We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with sensitivity (Se) of 0.98 and specificity (Sp) of 0.7.
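The Dice index used above to report segmentation accuracy measures the overlap between two binary masks. A minimal sketch with made-up masks (not the study's data):

```python
# Hedged sketch: Dice similarity coefficient between two binary masks,
# given here as flat sequences of 0/1. Masks are invented for illustration.

def dice_index(mask_a, mask_b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 when both masks are empty."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice_index(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```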
Publisher: IEEE
Date: 11-2020
Publisher: Elsevier BV
Date: 02-2021
Publisher: ACM
Date: 15-08-2005
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2012
Publisher: Springer International Publishing
Date: 2016
Publisher: Springer Berlin Heidelberg
Date: 2007
DOI: 10.1007/978-3-540-75759-7_69
Abstract: Automatic delineation and robust measurement of fetal anatomical structures in 2D ultrasound images is a challenging task due to the complexity of the object appearance, noise, shadows, and the quantity of information to be processed. Previous solutions rely on explicit encoding of prior knowledge and formulate the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are known to be limited by the validity of the underlying assumptions and cannot capture complex structure appearances. We propose a novel system for fast automatic obstetric measurements that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns to distinguish between the appearance of the object of interest and the background by training a discriminative constrained probabilistic boosting tree classifier. This system is able to handle previously unsolved problems in this domain, such as the effective segmentation of fetal abdomens. We show results on fully automatic measurement of head circumference, biparietal diameter, abdominal circumference and femur length. Extensive experiments show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, the system runs in under half a second on a standard dual-core PC.
Publisher: Elsevier BV
Date: 07-2009
Publisher: IEEE
Date: 10-2018
Publisher: Springer International Publishing
Date: 2015
Publisher: Radiological Society of North America (RSNA)
Date: 03-2023
DOI: 10.1148/RYAI.220072
Publisher: Springer International Publishing
Date: 2016
Publisher: Springer International Publishing
Date: 2022
Publisher: IEEE
Date: 11-2021
Publisher: IEEE
Date: 06-2013
Publisher: IEEE
Date: 10-2008
Publisher: IEEE
Date: 24-10-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2017
Publisher: IEEE
Date: 11-2020
Publisher: IEEE
Date: 2005
DOI: 10.1109/CRV.2005.53
Publisher: Optica Publishing Group
Date: 26-03-2021
DOI: 10.1364/PRJ.415902
Abstract: A new approach to optical fiber sensing is proposed and demonstrated that allows for specific measurement even in the presence of strong noise from undesired environmental perturbations. A deep neural network model is trained to statistically learn the relation of the complex optical interference output from a multimode optical fiber (MMF) with respect to a measurand of interest while discriminating the noise. This technique negates the need to carefully shield against, or compensate for, undesired perturbations, as is often the case for traditional optical fiber sensors. This is achieved entirely in software without any fiber postprocessing fabrication steps or specific packaging required, such as fiber Bragg gratings or specialized coatings. The technique is highly generalizable, whereby the model can be trained to identify any measurand of interest within any noisy environment provided the measurand affects the optical path length of the MMF’s guided modes. We demonstrate the approach using a sapphire crystal optical fiber for temperature sensing under strong noise induced by mechanical vibrations, showing the power of the technique not only to extract sensing information buried in strong noise but to also enable sensing using traditionally challenging exotic materials.
Publisher: Wiley
Date: 2021
DOI: 10.1111/JGH.15344
Publisher: IEEE
Date: 06-2022
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer International Publishing
Date: 2018
Publisher: IEEE
Date: 03-2020
Publisher: ACM
Date: 02-04-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2004
Publisher: IEEE
Date: 09-2015
Publisher: IEEE
Date: 2023
Publisher: Springer Science and Business Media LLC
Date: 28-03-2020
Publisher: IEEE
Date: 04-2020
Publisher: Cold Spring Harbor Laboratory
Date: 06-2021
DOI: 10.1101/2021.05.28.21257892
Abstract: To assess the generalisability of a deep learning (DL) system for screening mammography developed at New York University (NYU), USA (1, 2) in a South Australian (SA) dataset. Clients with pathology-proven lesions (n=3,160) and age-matched controls (n=3,240) were selected from women screened at BreastScreen SA from January 2010 to December 2016 (n=207,691 clients) and split into training, validation and test subsets (70%, 15% and 15%, respectively). The primary outcome was area under the curve (AUC) in the SA Test Set 1 (SATS1), differentiating invasive breast cancer or ductal carcinoma in situ (n=469) from age-matched controls (n=490) and benign lesions (n=44). The NYU system was tested statically, after training without transfer learning (TL), and after retraining with TL, both without (NYU1) and with (NYU2) heatmaps. The static NYU1 model AUCs in the NYU test set (NYTS) and SATS1 were 83.0% (95% CI: 82.4%–83.6%) (2) and 75.8% (95% CI: 72.6%–78.8%), respectively. Static NYU2 AUCs in the NYTS and SATS1 were 88.6% (95% CI: 88.3%–88.9%) (2) and 84.5% (95% CI: 81.9%–86.8%), respectively. Training of NYU1 and NYU2 without TL achieved AUCs in the SATS1 of 65.8% (95% CI: 62.2%–69.1%) and 85.9% (95% CI: 83.5%–88.2%), respectively. Retraining of NYU1 and NYU2 with TL resulted in AUCs of 82.4% (95% CI: 79.7%–84.9%) and 86.3% (95% CI: 84.0%–88.5%), respectively. We did not fully reproduce the reported performance of the NYU system on a local dataset; local retraining with TL approximated this level of performance. Optimising models for local clinical environments may improve performance. The generalisation of DL systems to new environments may be challenging. In this study, the original performance of deep learning models for screening mammography was reduced in an independent clinical population. Deep learning (DL) systems for mammography require local testing and may benefit from local retraining. An openly available DL system approximates human performance in an independent dataset.
There are multiple potential sources of reduced deep learning system performance when deployed to a new dataset and population.
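The AUC values compared throughout the abstract above can be computed without plotting an ROC curve, via the rank (Mann–Whitney) formulation: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting half. A minimal sketch with invented scores (not the study's data):

```python
# Hedged sketch: AUC via the Mann–Whitney rank formulation.
# Scores below are invented for illustration.

def auc(pos_scores, neg_scores):
    """Probability that a random positive outscores a random negative (ties = 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.5, 0.3]))  # 5 of 6 pairs correctly ordered ≈ 0.833
```

This pairwise definition is exactly why AUC is insensitive to class imbalance, which matters when comparing systems across populations with different cancer prevalence.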
Publisher: BMJ
Date: 08-2020
DOI: 10.1136/BMJOPEN-2019-035446
Abstract: Despite global concerns about the safety and quality of health care, population-wide studies of hospital outcomes are uncommon. The SAFety, Effectiveness of care and Resource use among Australian Hospitals (SAFER Hospitals) study seeks to estimate the incidence of serious adverse events, mortality, unplanned rehospitalisations and direct costs following hospital encounters using nationwide data, and to assess the variation and trends in these outcomes. SAFER Hospitals is a cohort study with retrospective and prospective components. The retrospective component uses data from 2012 to 2018 on all hospitalised patients age ≥18 years included in each State and Territories’ Admitted Patient Collections. These routinely collected datasets record every hospital encounter from all public and most private hospitals using a standardised set of variables including patient demographics, primary and secondary diagnoses, procedures and patient status at discharge. The study outcomes are deaths, adverse events, readmissions and emergency care visits. Hospitalisation data will be linked to subsequent hospitalisations and each region’s Emergency Department Data Collections and Death Registries to assess readmissions, emergency care encounters and deaths after discharge. Direct hospital costs associated with adverse outcomes will be estimated using data from the National Cost Data Collection. Variation in these outcomes among hospitals will be assessed adjusting for differences in hospitals’ case-mix. The prospective component of the study will evaluate the temporal change in outcomes every 4 years from 2019 until 2030. Human Research Ethics Committees of the respective Australian states and territories provided ethical approval to conduct this study. A waiver of informed consent was granted for the use of de-identified patient data. Study findings will be disseminated via presentations at conferences and publications in peer-reviewed journals.
Publisher: Springer Science and Business Media LLC
Date: 10-05-2017
DOI: 10.1038/S41598-017-01931-W
Abstract: Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous ‘manual’ clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research – mortality, and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.
Publisher: Springer International Publishing
Date: 2018
Publisher: IEEE
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2023
Publisher: IEEE
Date: 09-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2007
Publisher: Cold Spring Harbor Laboratory
Date: 10-09-2020
DOI: 10.1101/2020.09.09.290577
Abstract: With the advent of increased diversity and scale of molecular data, there has been a growing appreciation for the applications of machine learning and statistical methodologies to gain new biological insights. An important step in achieving this aim is the Relation Extraction process, which specifies whether an interaction exists between two or more biological entities in a published study. Here, we employed natural-language processing (CBOW) and a deep Recurrent Neural Network (bi-directional LSTM) to predict relations between biological entities that describe protein subcellular localisation in plants. We applied our system to 1700 published Arabidopsis protein subcellular studies from the SUBA manually curated dataset. The system was able to extract relevant text, and the classifier predicted interactions between protein name, subcellular localisation and experimental methodology. It obtained a final precision, recall, accuracy and F1 score of 0.951, 0.828, 0.893 and 0.884, respectively. The classifier was subsequently tested on a similar problem in crop species (CropPAL) and demonstrated a comparable accuracy (0.897). Consequently, our approach can be used to extract protein functional features from unstructured text in the literature with high accuracy. The developed system will improve dissemination of protein functional data to the scientific community and unlock the potential of big-data text analytics for generating new hypotheses from diverse datasets.
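The F1 score reported in the abstract above follows from the reported precision and recall as their harmonic mean; a quick check:

```python
# Hedged sketch: F1 as the harmonic mean of precision and recall,
# using the values reported in the abstract.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.951, 0.828), 3))  # 0.885, consistent with the reported
                                         # 0.884 given rounding of the inputs
```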
Publisher: Springer International Publishing
Date: 2021
Publisher: Elsevier BV
Date: 03-2021
Publisher: Springer International Publishing
Date: 2015
Publisher: Springer International Publishing
Date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 04-2019
Publisher: Springer International Publishing
Date: 2021
Publisher: IEEE
Date: 09-2015
Publisher: SPIE
Date: 15-11-1999
DOI: 10.1117/12.369267
Publisher: IEEE
Date: 16-10-2022
Publisher: IEEE
Date: 06-2014
DOI: 10.1109/CVPR.2014.44
Publisher: Springer International Publishing
Date: 2021
Publisher: Springer International Publishing
Date: 2015
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: IEEE
Date: 29-11-2020
Publisher: Wiley
Date: 24-02-2012
DOI: 10.1002/ETT.2508
Publisher: Springer Science and Business Media LLC
Date: 04-07-2021
Publisher: Springer Science and Business Media LLC
Date: 25-08-2007
Location: United Kingdom of Great Britain and Northern Ireland
Location: United States of America
Location: United States of America
Location: United States of America
Start Date: 2022
End Date: 2026
Funder: Australian Research Council
Start Date: 2019
End Date: 2022
Funder: Australian Research Council
Start Date: 2014
End Date: 2016
Funder: Australian Research Council
Start Date: 2018
End Date: 2020
Funder: Australian Research Council
Start Date: 2019
End Date: 2019
Funder: Australian Research Council
Start Date: 2010
End Date: 2013
Funder: Fundação para a Ciência e a Tecnologia, I.P.
Start Date: 2010
End Date: 2013
Funder: Fundação para a Ciência e a Tecnologia, I.P.
Start Date: 2020
End Date: 2023
Funder: Department of Health, Australian Government
Start Date: 2014
End Date: 2015
Funder: Alexander von Humboldt-Stiftung
Start Date: 2014
End Date: 2020
Funder: Australian Research Council
Start Date: 2016
End Date: 2016
Funder: Australian Research Council
Start Date: 04-2016
End Date: 12-2017
Amount: $250,000.00
Funder: Australian Research Council
Start Date: 2019
End Date: 12-2019
Amount: $726,921.00
Funder: Australian Research Council
Start Date: 03-2020
End Date: 02-2024
Amount: $988,200.00
Funder: Australian Research Council
Start Date: 07-2014
End Date: 03-2021
Amount: $19,000,000.00
Funder: Australian Research Council
Start Date: 03-2023
End Date: 03-2027
Amount: $3,975,864.00
Funder: Australian Research Council
Start Date: 12-2014
End Date: 12-2018
Amount: $295,000.00
Funder: Australian Research Council
Start Date: 10-2018
End Date: 12-2022
Amount: $387,884.00
Funder: Australian Research Council