Article

Effects of image data quality on a convolutional neural network trained in-tank fish detection model for recirculating aquaculture systems

Journal

Computers and Electronics in Agriculture

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.compag.2023.107644

Keywords

Underwater imaging; Artificial intelligence; Machine learning; RAS; Precision aquaculture


Artificial intelligence can answer fish production-related questions and assist growers with important management decisions in recirculating aquaculture systems (RAS). However, convolutional neural network-aided machine learning approaches are data-intensive, and model accuracy depends on input image quality. Underwater image acquisition, relatively high fish density, and water turbidity pose major challenges to acquiring high-quality imagery data. This study investigated the effects of sensor selection, image quality, dataset size, imaging conditions, and pre-processing operations on machine learning model accuracy for fish detection under RAS production conditions. An imaging platform (RASense1.0) was developed with four off-the-shelf sensors customized for underwater image acquisition. Data acquired from the imaging sensors under two light conditions (Ambient and Supplemental) were arranged in sets of 100 images and annotated as partial and whole fish. The annotated images were augmented and used to train a one-stage YOLOv5 model. Mean average precision (mAP) and F1 score improved substantially as the dataset size increased up to 700 images and training extended to 80 epochs; beyond this, mAP did not improve further (~86 %). Similarly, image augmentation substantially improved accuracy for models trained on smaller datasets of fewer than 700 images. Sensor selection significantly affected model precision, recall, and mAP; however, light conditions did not have a considerable effect on model accuracy. When the one-stage YOLOv5 was compared against a two-stage Faster R-CNN, both models performed similarly in terms of mAP scores; however, training time for the former was 6-14 times lower than for the latter.
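The abstract describes the general pipeline (YOLO-format annotations of partial/whole fish, augmentation, one-stage YOLOv5 training, evaluation by precision, recall, and mAP) but no code. As a rough illustration only, the sketch below shows how such a run might be configured with the public Ultralytics YOLOv5 repository; the dataset layout, file names, class names, and hyperparameter values are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative sketch only: approximates the kind of training/evaluation run
# described in the abstract, using the public Ultralytics YOLOv5 repository
# (github.com/ultralytics/yolov5). Paths, class names, and hyperparameters
# below are hypothetical, not the authors' settings.
import subprocess
from pathlib import Path

import yaml  # PyYAML

# Hypothetical dataset layout with YOLO-format labels for the two annotation
# classes mentioned in the abstract (partial fish / whole fish).
dataset = {
    "path": "datasets/ras_fish",   # root containing images/ and labels/ subfolders
    "train": "images/train",       # e.g. up to 700 annotated frames
    "val": "images/val",
    "names": {0: "partial_fish", 1: "whole_fish"},
}
Path("fish.yaml").write_text(yaml.safe_dump(dataset))

# Train a one-stage YOLOv5s detector for 80 epochs (the point beyond which the
# paper reports no further mAP gain, ~86 %). Assumes a cloned yolov5/ checkout.
subprocess.run(
    [
        "python", "yolov5/train.py",
        "--img", "640",
        "--batch", "16",
        "--epochs", "80",
        "--data", "fish.yaml",
        "--weights", "yolov5s.pt",   # COCO-pretrained weights as a starting point
        "--name", "ras_fish_detector",
    ],
    check=True,
)

# Report precision, recall, and mAP@0.5 on the validation split.
subprocess.run(
    [
        "python", "yolov5/val.py",
        "--data", "fish.yaml",
        "--weights", "yolov5/runs/train/ras_fish_detector/weights/best.pt",
        "--img", "640",
    ],
    check=True,
)
```

Note that YOLOv5's training script also applies its own built-in augmentation hyperparameters; the offline augmentation of annotated images examined in the study would be a separate pre-processing step applied to the dataset before training.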
