ORCID Profile
0000-0003-2268-2092
Current Organisations
UNSW Sydney, Northeastern University
Publisher: MDPI AG
Date: 21-07-2022
DOI: 10.3390/APP12147314
Abstract: This paper proposes a novel pixel interval down-sampling network (PID-Net) for dense tiny object (yeast cell) counting tasks with higher accuracy. PID-Net is an end-to-end convolutional neural network (CNN) with an encoder–decoder architecture. Pixel interval down-sampling operations are concatenated with max-pooling operations to combine sparse and dense features, which addresses the contour conglutination of dense objects during counting. The evaluation used classical segmentation metrics (Dice, Jaccard and Hausdorff distance) as well as counting metrics. The experimental results show that the proposed PID-Net achieved the best performance, with 96.97% counting accuracy on a dataset of 2448 yeast cell images. Compared with state-of-the-art approaches such as Attention U-Net, Swin U-Net and Trans U-Net, PID-Net segments dense tiny objects with clearer boundaries and less incorrect debris, which shows its great potential for accurate counting tasks.
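The paper's exact operator may differ, but one plausible reading of "pixel interval down-sampling" is a space-to-depth rearrangement: pixels at alternating row/column offsets form four half-resolution sub-maps that are stacked channel-wise and then combined with an ordinary max-pooled branch. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def pixel_interval_downsample(x):
    """Split an (H, W, C) feature map into four half-resolution sub-maps
    by sampling pixels at alternating row/column offsets, then stack them
    along the channel axis -> (H/2, W/2, 4*C). No information is lost,
    unlike max pooling."""
    return np.concatenate(
        [x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2]],
        axis=-1,
    )

def max_pool_2x2(x):
    """Plain 2x2 max pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

# Concatenate the sparse (interval) and dense (pooled) features, as the
# abstract describes the encoder doing.
x = np.random.rand(8, 8, 3)
combined = np.concatenate([pixel_interval_downsample(x), max_pool_2x2(x)], axis=-1)
```

Here an 8×8×3 input yields a 4×4×15 map: twelve lossless interval channels plus three pooled channels.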
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Springer Science and Business Media LLC
Date: 27-04-2021
Publisher: ACM
Date: 26-02-2021
Publisher: Elsevier BV
Date: 10-2022
Publisher: Elsevier BV
Date: 09-2021
DOI: 10.1016/J.COMPBIOMED.2021.104649
Abstract: Cervical cancer, one of the most common fatal cancers among women, can be prevented by regular screening to detect precancerous lesions at early stages and treat them. The Pap smear test is a widely performed screening technique for early detection of cervical cancer, but this manual screening method suffers from a high false-positive rate because of human error. To improve manual screening practice, machine learning (ML) and deep learning (DL) based computer-aided diagnostic (CAD) systems have been widely investigated for classifying cervical Pap cells. Most existing studies require pre-segmented images to obtain good classification results, yet accurate cervical cell segmentation is challenging because of cell clustering. Some studies rely on handcrafted features, which cannot guarantee the optimality of the classification stage. Moreover, DL performs poorly on multiclass classification when the data distribution is uneven, as is common in cervical cell datasets. This investigation addresses those limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF) technique based on DL, to classify cervical cells accurately. Our proposed method uses various DL models to capture more potential information and enhance classification performance. The proposed HDFF method is tested on the publicly available SIPaKMeD dataset and compared with base DL models and the late fusion (LF) method. On the SIPaKMeD dataset, we obtain state-of-the-art classification accuracies of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and 5-class classification. The method is also tested on the Herlev dataset and achieves accuracies of 98.32% for 2-class and 90.32% for 7-class classification. The source code of the DeepCervix model is available at: github.com/Mamunur-20/DeepCervix.
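The core idea of hybrid deep feature fusion, as the abstract describes it, is to combine feature vectors from several models before a single final classifier, rather than averaging their predictions (late fusion). The sketch below uses trivial hand-made "extractors" in place of the CNN backbones; the function names and statistics are illustrative, not the authors' API:

```python
import numpy as np

# Stand-ins for deep backbones (e.g. VGG, ResNet), each mapping an
# (H, W, C) image to one feature vector per image.
def features_model_a(img): return img.mean(axis=(0, 1))   # shape (C,)
def features_model_b(img): return img.max(axis=(0, 1))    # shape (C,)
def features_model_c(img): return img.std(axis=(0, 1))    # shape (C,)

def hybrid_feature_fusion(img):
    """Concatenate per-model feature vectors into one fused vector,
    which would then feed a single final classifier."""
    extractors = (features_model_a, features_model_b, features_model_c)
    return np.concatenate([f(img) for f in extractors])

img = np.random.rand(224, 224, 3)
fused = hybrid_feature_fusion(img)   # shape (9,) here; far larger with real CNN features
```

Late fusion, by contrast, would run a separate classifier per model and merge only the class scores.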
Publisher: Elsevier BV
Date: 2022
Publisher: Springer Science and Business Media LLC
Date: 31-01-2022
Publisher: WIP
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 08-01-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Elsevier BV
Date: 10-2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2020
Publisher: Hindawi Limited
Date: 26-06-2021
DOI: 10.1155/2021/6671417
Abstract: Gastric cancer is one of the most common and deadly cancers worldwide. The gold standard for its detection is histological examination by pathologists, where Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. Histopathological images of gastric cancer contain rich characterization information, which plays a crucial role in diagnosis and treatment. To improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely applied to histological image analysis of gastric cancer. This review summarizes CAD techniques for pathological images of gastric cancer: it first covers image preprocessing methods, then introduces feature extraction methods, and finally surveys existing segmentation and classification techniques, analysing them systematically for the convenience of future researchers.
Publisher: Elsevier BV
Date: 10-2022
Publisher: Springer Science and Business Media LLC
Date: 29-09-2021
Publisher: IEEE
Date: 06-2018
Publisher: Elsevier BV
Date: 03-2022
DOI: 10.1016/J.COMPBIOMED.2021.105207
Abstract: Gastric cancer is the fifth most common cancer globally, and early detection is essential to save lives. Histopathological examination is the gold standard for diagnosing gastric cancer. However, computer-aided diagnostic techniques are difficult to evaluate because publicly available gastric histopathology image datasets are scarce. In this paper, a novel publicly available Gastric Histopathology Sub-size Image Database (GasHisSDB) is published to benchmark classifier performance. Specifically, two types of data are included, normal and abnormal, with a total of 245,196 tissue case images. To show that image classification methods from different periods behave differently on GasHisSDB, we select a variety of classifiers for evaluation: seven classical machine learning classifiers, three convolutional neural network classifiers, and a novel transformer-based classifier are tested on image classification tasks. Traditional machine learning achieved a best accuracy of 86.08% and a minimum of just 41.12%; deep learning reached a best accuracy of 96.47% and a lowest of 86.21%. Accuracy rates vary significantly across classifiers. To the best of our knowledge, this is the first publicly available gastric cancer histopathology dataset containing a large number of images for weakly supervised learning. We believe that GasHisSDB can attract researchers to explore new algorithms for the automated diagnosis of gastric cancer, helping physicians and patients in the clinical setting.
Publisher: Royal Society of Chemistry (RSC)
Date: 2023
DOI: 10.1039/D3NJ02844E
Abstract: An efficient heterogeneous bi-functional Cu-tuned phosphotungstic acid reusable catalyst is developed for the selective carbonylation of biomass-derived glycerol, giving boosted stable activity and higher selectivity towards glycerol carbonate.
Publisher: Elsevier BV
Date: 05-2023
Publisher: Elsevier BV
Date: 03-2023
Publisher: Elsevier BV
Date: 04-2022
DOI: 10.1016/J.COMPBIOMED.2022.105265
Abstract: In recent years, colorectal cancer has become one of the most significant diseases that endanger human health. Deep learning methods are increasingly important for the classification of colorectal histopathology images. However, existing approaches focus more on end-to-end automatic classification using computers rather than human-computer interaction. In this paper, we propose an IL-MCAM framework. It is based on attention mechanisms and interactive learning. The proposed IL-MCAM framework includes two stages: automatic learning (AL) and interactivity learning (IL). In the AL stage, a multi-channel attention mechanism model containing three different attention mechanism channels and convolutional neural networks is used to extract multi-channel features for classification. In the IL stage, the proposed IL-MCAM framework continuously adds misclassified images to the training set in an interactive approach, which improves the classification ability of the MCAM model. We carried out a comparison experiment on our dataset and an extended experiment on the HE-NCT-CRC-100K dataset to verify the performance of the proposed IL-MCAM framework, achieving classification accuracies of 98.98% and 99.77%, respectively. In addition, we conducted an ablation experiment and an interchangeability experiment to verify the ability and interchangeability of the three channels. The experimental results show that the proposed IL-MCAM framework has excellent performance in the colorectal histopathological image classification tasks.
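The interactivity-learning (IL) stage described above is essentially a retraining loop: after each round, images the model misclassifies are appended to the training set and the model is refit. The sketch below uses a toy nearest-centroid classifier purely as a stand-in for the paper's multi-channel attention CNN (MCAM); all names are illustrative:

```python
class NearestCentroid:
    """Toy stand-in classifier; the paper's MCAM model is a CNN with
    three attention-mechanism channels."""
    def fit(self, data):
        sums, counts = {}, {}
        for x, y in data:
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: sums[y] / counts[y] for y in sums}

    def predict(self, x):
        return min(self.centroids, key=lambda y: abs(x - self.centroids[y]))

def interactive_learning(model, train_set, review_set, rounds=3):
    """IL stage: retrain, collect misclassified review samples, add them
    to the training set, and repeat until nothing is misclassified."""
    for _ in range(rounds):
        model.fit(train_set)
        missed = [(x, y) for x, y in review_set if model.predict(x) != y]
        if not missed:
            break
        train_set = train_set + missed
    return model

model = interactive_learning(
    NearestCentroid(),
    train_set=[(0.0, "normal"), (1.0, "tumour")],
    review_set=[(0.4, "normal"), (0.6, "tumour")],
)
```

In the paper the "review" step is interactive, with misclassified images chosen with a human in the loop rather than scored automatically.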
Publisher: Public Library of Science (PLoS)
Date: 12-05-2021
DOI: 10.1371/JOURNAL.PONE.0250631
Abstract: The Environmental Microorganism Data Set Fifth Version (EMDS-5) is a microscopic image dataset including original Environmental Microorganism (EM) images and two sets of Ground Truth (GT) images: a single-object GT image set and a multi-object GT image set. EMDS-5 has 21 types of EMs, each of which contains 20 original EM images, 20 single-object GT images and 20 multi-object GT images. EMDS-5 can be used to evaluate image preprocessing, image segmentation, feature extraction, image classification and image retrieval functions. To prove the effectiveness of EMDS-5, for each function we select the most representative algorithms and evaluation indicators for testing. The image preprocessing functions cover two parts: image denoising and image edge detection. Image denoising uses nine kinds of filters to denoise 13 kinds of noise, respectively. For edge detection, six edge detection operators are used to detect the edges of the images, and two evaluation indicators, peak signal-to-noise ratio and mean structural similarity, are used for evaluation. Image segmentation includes single-object and multi-object image segmentation: six methods are used for single-object segmentation, while k-means and U-Net are used for multi-object segmentation. We extract nine features from the images in EMDS-5 and use a Support Vector Machine (SVM) classifier for testing. For image classification, we use VGG16 features to test SVM, k-Nearest Neighbors and Random Forests. We test two types of retrieval approaches, texture feature retrieval and deep learning feature retrieval, selecting the last-layer features of the VGG16 and ResNet50 networks as feature vectors and using mean average precision as the evaluation index. EMDS-5 is available at the URL: github.com/NEUZihan/EMDS-5.git
Publisher: Elsevier BV
Date: 02-2022
DOI: 10.1016/J.COMPBIOMED.2021.105026
Abstract: Cervical cancer is a very common and fatal type of cancer in women. Cytopathology images are often used to screen for this cancer. Because many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image dimension, but the dimensions of clinical medical images are inconsistent, and resizing images directly distorts their aspect ratios. Clinically, the aspect ratios of cells in cytopathological images provide important information for diagnosing cancer, so direct resizing is problematic. Nevertheless, many existing studies have resized images directly and still obtained highly robust classification results. To find a reasonable interpretation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
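For contrast with the direct resizing the abstract discusses, an aspect-ratio-preserving alternative pads the image to a square first, so a subsequent resize to 224 × 224 scales both axes equally. This is a generic preprocessing sketch, not the paper's pipeline; the function name and fill value are assumptions:

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Centre an (H, W, C) image on a square canvas of side max(H, W).
    Resizing the result to 224x224 then scales both axes by the same
    factor, so cell aspect ratios are preserved."""
    h, w, c = img.shape
    side = max(h, w)
    canvas = np.full((side, side, c), fill, dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas

img = np.zeros((100, 60, 3), dtype=np.uint8)   # a tall, narrow crop
square = pad_to_square(img)                    # (100, 100, 3), ready to resize
```

The paper's finding is that this extra care may be unnecessary for classification: models were robust to the distortion introduced by resizing directly.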
No related grants have been discovered for Md Mamunur Rahaman.