Article

Recognition and Positioning of Fresh Tea Buds Using YOLOv4-lighted

Journal

AGRICULTURE-BASEL, Volume 13, Issue 3

Publisher

MDPI
DOI: 10.3390/agriculture13030518

Keywords

tea buds; YOLOv4; attention mechanism; intelligent recognition; depth filter; picking point


This paper proposes a deep learning method based on YOLOv4 for detecting tea buds and their picking points. A lightweight, attention-augmented variant of YOLOv4 improves detection accuracy, while segmentation based on color and depth data from a stereo camera localizes the picking point of each detected bud. The proposed method shows promising results in both tea bud detection and picking-point prediction.
To overcome the low recognition accuracy, slow speed, and difficulty in locating the picking points of tea buds, this paper develops a deep learning method, based on the You Only Look Once version 4 (YOLOv4) object detection algorithm, for detecting tea buds and their picking points with tea-picking machines. A segmentation method based on color and depth data from a stereo vision camera is proposed to detect the shapes of tea buds in 2D and 3D space more accurately than is possible from 2D images alone. The YOLOv4 object detection model was modified into a lightweight model with a shorter inference time, called YOLOv4-lighted. Then, Squeeze-and-Excitation Networks (SENet), Efficient Channel Attention (ECA), the Convolutional Block Attention Module (CBAM), and an improved CBAM (ICBAM) were each added to the output layer of the feature extraction network to improve the detection accuracy of tea bud features. Finally, the Path Aggregation Network (PANet) in the neck was simplified to a Feature Pyramid Network (FPN). The lightweight YOLOv4 with ICBAM, called YOLOv4-lighted + ICBAM, was determined to be the optimal recognition model for tea bud detection in terms of accuracy (94.19%), recall (93.50%), F1 score (0.94), and average precision (97.29%). Compared with the baseline YOLOv4 model, the size of the YOLOv4-lighted + ICBAM model decreased by 75.18% and the frame rate increased by 7.21%. In addition, a method for predicting the picking point of each detected tea bud was developed by segmenting the tea buds in each detected bounding box and filtering each segment by its depth from the camera. Test results showed an average positioning success rate of 87.10% and an average positioning time of 0.12 s. In conclusion, the recognition and positioning method proposed in this paper provides a theoretical basis and a practical method for the automatic picking of tea buds.
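Since the abstract names CBAM as the starting point for the paper's ICBAM module, the following is a minimal sketch of the standard CBAM block, assuming PyTorch. The reduction ratio and kernel size are illustrative defaults, and the ICBAM modifications are not specified in this abstract.

# Minimal sketch of the standard CBAM block in PyTorch. The abstract does not
# describe the ICBAM changes, so this shows only the baseline CBAM that ICBAM
# improves on; layer sizes and the reduction ratio are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in the original CBAM."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

Per the abstract, such attention blocks are attached to the output layers of the feature extraction network before the neck; applying channel attention before spatial attention is the design choice of the original CBAM.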
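The picking-point step is described only at a high level: segment the bud within each detected bounding box, then filter the segment by its depth from the camera. Below is a hypothetical sketch, assuming OpenCV, NumPy, and a depth map aligned to the RGB image; the HSV thresholds, the 30 mm depth tolerance, the function name picking_point, and the lowest-pixel rule are illustrative assumptions rather than the paper's actual parameters.

# Hypothetical sketch of the picking-point step: segment the bud inside a
# detected bounding box by color, keep only pixels near the bud's depth, and
# take the lowest remaining pixel as the picking point. Thresholds and the
# lowest-point rule are illustrative assumptions, not the paper's parameters.
import cv2
import numpy as np

def picking_point(bgr, depth, box, depth_tol_mm=30):
    """bgr: HxWx3 color image; depth: HxW depth map (mm) aligned to bgr;
    box: (x1, y1, x2, y2) from the detector. Returns (x, y) or None."""
    x1, y1, x2, y2 = box
    roi = bgr[y1:y2, x1:x2]
    roi_depth = depth[y1:y2, x1:x2]

    # Color segmentation: young tea buds are a lighter green than mature leaves.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (25, 40, 40), (90, 255, 255))  # assumed HSV range

    # Depth filter: drop background pixels far from the bud's median depth.
    bud_depths = roi_depth[mask > 0]
    if bud_depths.size == 0:
        return None
    median_d = np.median(bud_depths)
    mask[np.abs(roi_depth - median_d) > depth_tol_mm] = 0

    # Picking point: the lowest (largest-y) pixel of the segment, i.e. the
    # base of the bud, mapped back to full-image coordinates.
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    i = np.argmax(ys)
    return (x1 + int(xs[i]), y1 + int(ys[i]))

The depth filter is what distinguishes this step from plain 2D color segmentation: pixels of overlapping leaves behind or in front of the bud share its color range but not its depth, so rejecting them is what makes the positioning reliable.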
