ORCID Profile
0000-0001-6973-019X
Current Organisation
James Cook University
Publisher: Springer Science and Business Media LLC
Date: 2021-08-30
DOI: 10.1038/S41598-021-96610-2
Abstract: Estimating fish body measurements like length, width, and mass has received considerable research attention due to its potential to boost productivity in marine and aquaculture applications. Some methods are based on manual collection of these measurements using tools such as rulers, which is time-consuming and labour-intensive. Others rely on fully-supervised segmentation models to automatically acquire these measurements, but these require collecting per-pixel labels, which is also time-consuming: it can take up to 2 minutes per fish to acquire accurate segmentation labels. To address this problem, we propose a segmentation model that can efficiently train on images labeled with point-level supervision, where each fish is annotated with a single click. This labeling scheme takes an average of only 1 second per fish. Our model uses a fully convolutional neural network with one branch that outputs per-pixel scores and another that outputs an affinity matrix. These two outputs are aggregated using a random walk to get the final, refined per-pixel output. The whole model is trained end-to-end using the localization-based counting fully convolutional neural network (LCFCN) loss, and thus we call our method Affinity-LCFCN (A-LCFCN). We conduct experiments on the DeepFish dataset, which contains several fish habitats from north-eastern Australia. The results show that A-LCFCN outperforms a fully-supervised segmentation model when the annotation budget is fixed. They also show that A-LCFCN achieves better segmentation results than LCFCN and a standard baseline.
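The random-walk aggregation step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the common formulation in which the affinity matrix is row-normalised into a transition matrix and the per-pixel scores are propagated along it for a few steps. The toy scores, affinities, and step count below are illustrative only.

```python
import numpy as np

def random_walk_refine(scores, affinity, steps=3):
    """Refine per-pixel scores by propagating them along a
    row-normalised affinity (transition) matrix.

    scores:   (n_pixels, n_classes) raw per-pixel scores
    affinity: (n_pixels, n_pixels)  non-negative pairwise affinities
    """
    # Row-normalise affinities into a stochastic transition matrix.
    transition = affinity / affinity.sum(axis=1, keepdims=True)
    refined = scores
    for _ in range(steps):
        # Each step mixes a pixel's scores with those of similar pixels.
        refined = transition @ refined
    return refined

# Toy example: 4 pixels, 2 classes (background / fish).
scores = np.array([[0.9, 0.1],
                   [0.6, 0.4],
                   [0.4, 0.6],
                   [0.1, 0.9]])
# Pixels 0-1 and pixels 2-3 are mutually similar.
affinity = np.array([[1.0, 0.9, 0.1, 0.1],
                     [0.9, 1.0, 0.1, 0.1],
                     [0.1, 0.1, 1.0, 0.9],
                     [0.1, 0.1, 0.9, 1.0]])
refined = random_walk_refine(scores, affinity)
print(refined)
```

The effect is that noisy scores are smoothed toward the scores of pixels the affinity branch deems similar, which is why the refined output gives cleaner segmentation boundaries than the raw per-pixel scores alone.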
Publisher: James Cook University
Date: 2020
DOI: 10.25903/TRB0-S150
Publisher: Wiley
Date: 2022-04-15
DOI: 10.1111/FAF.12666
Abstract: Marine scientists use remote underwater image and video recording to survey fish species in their natural habitats. This helps them get a step closer towards understanding and predicting how fish respond to climate change, habitat degradation and fishing pressure. This information is essential for developing sustainable fisheries for human consumption, and for preserving the environment. However, the enormous volume of collected videos makes extracting useful information a daunting and time-consuming task for a human being. A promising method to address this problem is the cutting-edge deep learning (DL) technology. DL can help marine scientists parse large volumes of video promptly and efficiently, unlocking niche information that cannot be obtained using conventional manual monitoring methods. In this paper, we first provide a survey of computer vision (CV) and DL studies conducted between 2003 and 2021 on fish classification in underwater habitats. We then give an overview of the key concepts of DL, while analysing and synthesizing DL studies. We also discuss the main challenges faced when developing DL for underwater image processing and propose approaches to address them. Finally, we provide insights into the marine habitat monitoring research domain and shed light on what the future of DL for underwater image processing may hold. This paper aims to inform marine scientists who would like to gain a high-level understanding of essential DL concepts and survey state-of-the-art DL-based fish classification in their underwater habitat.
Publisher: Scientific Research Publishing, Inc.
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 2020-09-04
DOI: 10.1038/S41598-020-71639-X
Abstract: Visual analysis of complex fish habitats is an important step towards sustainable fisheries for human consumption and environmental protection. Deep learning methods have shown great promise for scene analysis when trained on large-scale datasets. However, current datasets for fish analysis tend to focus on the classification task within constrained, plain environments which do not capture the complexity of underwater fish habitats. To address this limitation, we present DeepFish as a benchmark suite with a large-scale dataset to train and test methods for several computer vision tasks. The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia. The dataset originally contained only classification labels. Thus, we collected point-level and segmentation labels to provide a more comprehensive fish analysis benchmark. These labels enable models to learn to automatically monitor fish count, identify their locations, and estimate their sizes. Our experiments provide an in-depth analysis of the dataset characteristics, and the performance evaluation of several state-of-the-art approaches based on our benchmark. Although models pre-trained on ImageNet have performed successfully on this benchmark, there is still room for improvement. Therefore, this benchmark serves as a testbed to motivate further development in this challenging domain of underwater computer vision.
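The point-level labels mentioned in the abstract make one of the listed tasks, fish counting, nearly free: with one click per fish, the count is the number of annotated points. The record format below is hypothetical (the field names, path, and coordinates are illustrative, not DeepFish's actual schema); it only sketches the idea.

```python
# Hypothetical point-level annotation record: one (x, y) click per fish.
annotation = {
    "image": "habitat_07/frame_0042.jpg",        # illustrative path
    "points": [(120, 88), (340, 210), (305, 198)],  # one click per fish
}

def fish_count(record):
    """With point-level supervision, the fish count in an image
    is simply the number of annotated clicks."""
    return len(record["points"])

print(fish_count(annotation))  # 3
```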
Publisher: Elsevier BV
Date: 2024-03
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021-10
Publisher: IEEE
Date: 2019-07
Publisher: IEEE
Date: 2019-12
No related grants have been discovered for Alzayat Saleh.