ORCID Profile
0000-0002-5276-3793
Current Organisation
Charles Darwin University
Publisher: MDPI AG
Date: 07-11-2022
DOI: 10.3390/BIOMEDICINES10112835
Abstract: Heart disease can be life-threatening if not detected and treated at an early stage. The electrocardiogram (ECG) plays a vital role in classifying cardiovascular diseases, and physicians and medical researchers often examine paper-based ECG images for cardiac diagnosis. An automated heart disease prediction system might help to classify heart diseases accurately at an early stage. This study aims to classify cardiac diseases into five classes with paper-based ECG images using a deep learning approach with the highest possible accuracy and the lowest possible time complexity. This research consists of two approaches. In the first approach, five deep learning models, InceptionV3, ResNet50, MobileNetV2, VGG19, and DenseNet201, are employed. In the second approach, an integrated deep learning model (InRes-106) is introduced, combining InceptionV3 and ResNet50. This model is developed as a deep convolutional neural network capable of extracting hidden and high-level features from images. An ablation study is conducted on the proposed model, altering several components and hyperparameters, which improves the performance even further. Before training the model, several image pre-processing techniques are employed to remove artifacts and enhance the image quality. Our proposed hybrid InRes-106 model performed best with a testing accuracy of 98.34%. The InceptionV3 model acquired a testing accuracy of 90.56%, the ResNet50 89.63%, the DenseNet201 88.94%, the VGG19 87.87%, and the MobileNetV2 80.56%. The model is trained with a k-fold cross-validation technique with different k values to further evaluate its robustness. Although the dataset contains a limited number of complex ECG images, our proposed approach, based on various image pre-processing techniques, model fine-tuning, and ablation studies, can effectively diagnose cardiac diseases.
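The hybrid InRes-106 model fuses features from two pretrained backbones before classification. Below is a minimal NumPy sketch of that fusion idea only, with random projections standing in for the real InceptionV3/ResNet50 extractors; all names, dimensions, and projections here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_a(images):
    """Stand-in for a pretrained feature extractor (e.g. a pooled InceptionV3 output)."""
    flat = images.reshape(images.shape[0], -1)
    w = rng.standard_normal((flat.shape[1], 2048)) * 0.01
    return np.maximum(flat @ w, 0.0)  # ReLU

def backbone_b(images):
    """Stand-in for a second extractor (e.g. a pooled ResNet50 output)."""
    flat = images.reshape(images.shape[0], -1)
    w = rng.standard_normal((flat.shape[1], 2048)) * 0.01
    return np.maximum(flat @ w, 0.0)

def fused_logits(images, n_classes=5):
    """Concatenate both feature vectors, then apply one dense classification layer."""
    feats = np.concatenate([backbone_a(images), backbone_b(images)], axis=1)
    w = rng.standard_normal((feats.shape[1], n_classes)) * 0.01
    return feats @ w

batch = rng.random((4, 32, 32, 3))   # four toy "ECG images"
print(fused_logits(batch).shape)     # (4, 5): one score per cardiac class
```

In the actual model the backbones are pretrained networks trained end-to-end with the classifier; the sketch only shows how concatenation widens the fused feature vector.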
Publisher: MDPI AG
Date: 17-12-2021
Abstract: Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, an erroneous mammogram-based interpretation may result in a false diagnosis, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as the foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase the image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, a k-fold cross-validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness. Results were compared with previous studies. Results: The proposed BreastNet18 model performed best with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach, based on image processing, transfer learning, fine-tuning, and an ablation study, demonstrated high breast cancer classification accuracy while dealing with a limited number of complex medical images.
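The abstract relies on k-fold cross-validation to probe for overfitting. A minimal pure-Python sketch of how a dataset is partitioned into k train/validation splits (the helper name and interface are illustrative, not the paper's code):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, val_idx) pairs partitioning range(n_samples) into k folds."""
    indices = list(range(n_samples))
    # Distribute any remainder over the first folds so sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# Every sample appears in exactly one validation fold across the k splits.
folds = list(k_fold_splits(10, 5))
print([len(v) for _, v in folds])  # [2, 2, 2, 2, 2]
```

Training the model once per split and averaging the validation scores gives the cross-validated estimate the abstract refers to.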
Publisher: MDPI AG
Date: 30-06-2023
DOI: 10.3390/BIOMEDICINES11071874
Abstract: Bronchiectasis in children can progress to a severe lung condition if not diagnosed and treated early. The radiological diagnostic criterion for bronchiectasis is an increased broncho-arterial (BA) ratio. From high-resolution computed tomography (HRCT) scans, the BA pairs must be detected first to derive the BA ratio. This study aims to identify potential BA pairs from HRCT scans of children undertaken to evaluate suppurative lung disease through an automated approach. After segmenting the lung regions, the HRCT scans are cleaned using a histogram analysis-based approach, followed by a potential arteries identification process comprising four conditions based on imaging features. Potential arteries and their connected components are extracted, and potential bronchi are identified. Finally, the coordinates of potential arteries and potential bronchi are matched as the last step of BA pair extraction. A total of 8–50 BA pairs are detected for each patient. Additionally, the area and several diameters of the bronchi and arteries are measured, and BA ratios based on these are calculated. Through this approach, the BA pairs of a CT scan dataset are detected, and utilizing a deep learning model, a high classification test accuracy of 98.53% is achieved, validating the robustness of the proposed BA detection approach. The results show that visible BA pairs can be identified and segmented automatically, and the calculated BA ratio may help diagnose bronchiectasis with less effort and time.
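The BA ratio is derived from the measured bronchus and artery sizes of each detected pair. A minimal sketch, assuming cross-sectional areas are available and converting them to equivalent-circle diameters; the function name and inputs are illustrative, not the paper's exact measurement protocol:

```python
import math

def ba_ratio(bronchus_area_mm2, artery_area_mm2):
    """Broncho-arterial ratio from cross-sectional areas,
    via equivalent-circle diameters d = 2 * sqrt(area / pi)."""
    d_bronchus = 2.0 * math.sqrt(bronchus_area_mm2 / math.pi)
    d_artery = 2.0 * math.sqrt(artery_area_mm2 / math.pi)
    return d_bronchus / d_artery

# A ratio clearly above ~1 is the radiological flag for bronchiectasis.
print(round(ba_ratio(12.5, 7.0), 2))  # 1.34
```

Because both diameters share the same 2/sqrt(pi) factor, the ratio reduces to the square root of the area ratio, so area-based and diameter-based definitions agree for circular cross-sections.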
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023
Publisher: Frontiers Media SA
Date: 16-08-2022
Abstract: Interpretation of medical images with a computer-aided diagnosis (CAD) system is arduous because of the complex structure of cancerous lesions in different imaging modalities, the high degree of resemblance between inter-classes, the presence of dissimilar characteristics in intra-classes, the scarcity of medical data, and the presence of artifacts and noise. In this study, these challenges are addressed by developing a shallow convolutional neural network (CNN) model with an optimal configuration, performing an ablation study by altering the layer structure and hyper-parameters, and utilizing a suitable augmentation technique. Eight medical datasets with different modalities are investigated, where the proposed model, named MNet-10, with low computational complexity is able to yield optimal performance across all datasets. The impact of photometric and geometric augmentation techniques on different datasets is also evaluated. We selected the mammogram dataset to proceed with the ablation study, as it is one of the most challenging imaging modalities. Before generating the model, the dataset is augmented using the two approaches. A base CNN model is constructed first and applied to both the augmented and non-augmented mammogram datasets, where the highest accuracy is obtained with the photometric dataset. Therefore, the architecture and hyper-parameters of the model are determined by performing an ablation study on the base model using the mammogram photometric dataset. Afterward, the robustness of the network and the impact of different augmentation techniques are assessed by training the model with the rest of the seven datasets.
We obtain a test accuracy of 97.34% on the mammogram, 98.43% on the skin cancer, 99.54% on the brain tumor magnetic resonance imaging (MRI), 97.29% on the COVID chest X-ray, 96.31% on the tympanic membrane, 99.82% on the chest computed tomography (CT) scan, and 98.75% on the breast cancer ultrasound datasets by photometric augmentation and 96.76% on the breast cancer microscopic biopsy dataset by geometric augmentation. Moreover, some elastic deformation augmentation methods are explored with the proposed model using all the datasets to evaluate their effectiveness. Finally, VGG16, InceptionV3, and ResNet50 were trained on the best-performing augmented datasets, and their performance consistency was compared with that of the MNet-10 model. The findings may aid future researchers in medical data analysis involving ablation studies and augmentation techniques.
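The study's central contrast is between photometric augmentation (changing pixel intensities) and geometric augmentation (moving pixels around). A minimal NumPy sketch of that distinction; the specific transforms and parameters are illustrative, not the paper's exact augmentation set:

```python
import numpy as np

def photometric_augment(img, brightness=0.1, contrast=1.2):
    """Photometric changes alter pixel intensities only; geometry is untouched."""
    out = img * contrast + brightness
    return np.clip(out, 0.0, 1.0)

def geometric_augment(img, flip=True, rot90=1):
    """Geometric changes move pixels around; intensities are untouched."""
    out = np.fliplr(img) if flip else img
    return np.rot90(out, rot90)

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
photo = photometric_augment(img)
geo = geometric_augment(img)
# Photometric keeps the spatial layout; geometric preserves the set of pixel values.
print(photo.shape, sorted(geo.ravel()) == sorted(img.ravel()))  # (4, 4) True
```

Which family helps depends on the modality, which is why the abstract evaluates both per dataset rather than fixing one globally.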
Publisher: Elsevier BV
Date: 11-2022
Publisher: Public Library of Science (PLoS)
Date: 04-08-2022
DOI: 10.1371/JOURNAL.PONE.0269826
Abstract: The complex feature characteristics and low contrast of cancer lesions, a high degree of inter-class resemblance between malignant and benign lesions, and the presence of various artifacts including hairs make automated melanoma recognition in dermoscopy images quite challenging. To date, various computer-aided solutions have been proposed to identify and classify skin cancer. In this paper, a deep learning model with a shallow architecture is proposed to classify lesions into benign and malignant. To achieve effective training while limiting overfitting problems due to limited training data, image preprocessing and data augmentation processes are introduced. After this, the 'box blur' down-scaling method is employed, which adds efficiency to our study by significantly reducing the overall training time and space complexity. Our proposed shallow convolutional neural network (SCNN_12) model is trained and evaluated on the Kaggle skin cancer ISIC archive dataset, which was augmented to 16,485 images by implementing different augmentation techniques. The model was able to achieve an accuracy of 98.87% with the Adam optimizer and a learning rate of 0.001. In this regard, the parameters and hyper-parameters of the model are determined by performing ablation studies. To assert that no overfitting occurs, experiments are carried out exploring k-fold cross-validation and different dataset split ratios. Furthermore, to affirm the robustness, the model is evaluated on noisy data to examine its performance when the image quality is corrupted. This research corroborates that effective training for medical image analysis, addressing training time and space complexity, is possible even with a lightweight network using a limited amount of training data.
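The 'box blur' down-scaling step averages local neighbourhoods while resizing, so the smaller image keeps the mean intensity of each region. A minimal NumPy sketch, assuming non-overlapping blocks; the implementation details are illustrative, not the paper's exact code:

```python
import numpy as np

def box_blur_downscale(img, factor):
    """Down-scale by averaging each non-overlapping factor x factor block (box filter)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]      # crop to a multiple of the factor
    blocks = img.reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))            # one averaged pixel per block

img = np.arange(64, dtype=float).reshape(8, 8)
small = box_blur_downscale(img, 2)
print(small.shape)   # (4, 4)
print(small[0, 0])   # mean of [[0, 1], [8, 9]] = 4.5
```

Quartering the pixel count this way shrinks every downstream feature map, which is where the training-time and memory savings come from.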
Publisher: MDPI AG
Date: 05-01-2023
DOI: 10.3390/BIOMEDICINES11010133
Abstract: Current research indicates that for the identification of lung disorders, comprising pneumonia and COVID-19, structural distortions of bronchi and arteries (BA) should be taken into account. CT scans are an effective modality to detect lung anomalies. However, anomalies in bronchi and arteries can be difficult to detect. Therefore, in this study, alterations of bronchi and arteries are considered in the classification of lung diseases. Four approaches to highlight these are introduced: (a) a Hessian-based approach, (b) a region-growing algorithm, (c) a clustering-based approach, and (d) a color-coding-based approach. Prior to this, the lungs are segmented, employing several image preprocessing algorithms. The utilized COVID-19 Lung CT scan dataset contains three classes named Non-COVID, COVID, and community-acquired pneumonia, having 6983, 7593, and 2618 samples, respectively. To classify the CT scans into three classes, two deep learning architectures, (a) a convolutional neural network (CNN) and (b) a CNN with long short-term memory (LSTM) and an attention mechanism, are considered. Both these models are trained with the four datasets obtained from the four approaches. Results show that the CNN model achieved test accuracies of 88.52%, 87.14%, 92.36%, and 95.84% for the Hessian, the region-growing, the color-coding, and the clustering-based approaches, respectively. The CNN with LSTM and an attention mechanism results in an increase in overall accuracy for all approaches, with test accuracies of 89.61%, 88.28%, 94.61%, and 97.12% for the Hessian, region-growing, color-coding, and clustering-based approaches, respectively. To assess overfitting, the accuracy and loss curves and the k-fold cross-validation technique are employed. The Hessian-based and region-growing algorithm-based approaches produced nearly equivalent outcomes.
Our proposed method outperforms state-of-the-art studies, indicating that it may be worthwhile to pay more attention to BA features in lung disease classification based on CT images.
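The Hessian-based approach highlights tubular structures such as vessels through second-derivative analysis: on a bright line, one eigenvalue of the per-pixel 2x2 Hessian is strongly negative. A heavily simplified NumPy sketch of that idea (the real pipeline involves multi-scale smoothing and further steps; all names here are illustrative):

```python
import numpy as np

def hessian_line_response(img):
    """Simplified Hessian filter: bright tubular structures give one strongly
    negative eigenvalue, so -min(eigenvalue) serves as the line response."""
    gy, gx = np.gradient(img)          # first derivatives along rows, columns
    hyy, hyx = np.gradient(gy)         # second derivatives of gy
    hxy, hxx = np.gradient(gx)         # second derivatives of gx
    # Eigenvalues of the symmetric 2x2 Hessian [[hxx, hxy], [hxy, hyy]] per pixel.
    trace = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum(trace * trace / 4.0 - det, 0.0))
    lam_min = trace / 2.0 - disc
    return np.maximum(-lam_min, 0.0)   # keep only bright-line responses

# A bright vertical line on a dark background lights up; flat regions stay at zero.
img = np.zeros((16, 16))
img[:, 8] = 1.0
resp = hessian_line_response(img)
print(resp.shape)  # (16, 16)
```

Thresholding this response map is one simple way to produce the highlighted BA structures that are then fed to the classifier.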
No related grants have been discovered for Sidratul Montaha.