4.6 Article

Identification and Classification of Mechanical Damage During Continuous Harvesting of Root Crops Using Computer Vision Methods

Journal

IEEE ACCESS
Volume 10, Pages 28885-28894

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3157619

Keywords

Convolutional neural networks; blurred image classification; defect identification; fast detection; machine learning; YOLOv4-tiny

Abstract

Detecting sugar beetroot crops with mechanical damage using machine learning methods is necessary for fine-tuning beet harvester units. The Agrifac HEXX TRAXX harvester with an installed computer vision system was investigated. A video camera (24 fps), connected to a single-board computer (SBC), was installed above the turbine that receives the dug-out beets from the digger. At the preprocessing stage, static and insignificant image details were identified using the Canny edge detector and the excess green minus excess red (ExGR) index, and the identified areas were excluded from the image. The remaining areas were stitched together with similar areas of another image; as a result, the number of images entering the second preprocessing stage was halved. Otsu's binarization was then applied. The main image-processing stage is divided into two sub-stages: detection and classification. The improved YOLOv4-tiny method was chosen for root crop detection on the SBC; it processes up to 14 images of 416 × 416 pixels with 86% precision and 91% recall. To classify root crop damage, two candidate algorithms were considered: (1) a bag of visual words (BoVW) with a support vector machine (SVM) classifier using histogram of oriented gradients (HOG) and scale-invariant feature transform (SIFT) descriptors, and (2) a convolutional neural network (CNN). Under normal lighting conditions, the CNN showed the best accuracy, 99%. The implemented methods were also used to detect and classify blurred images of sugar beetroots that had previously been rejected: for the improved YOLOv4-tiny, precision was 74% and recall was 70%, and the CNN classification accuracy was 92.6%.
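
The abstract names three standard building blocks for the preprocessing stage (the Canny edge detector, the ExGR vegetation index, and Otsu's binarization) without implementation details. The following is a minimal Python/OpenCV sketch of how such a step could look; the thresholds, the input file name, and the ExGR > 0 plant-pixel rule are illustrative assumptions, and the frame-stitching step described in the paper is not reproduced here.

import cv2
import numpy as np

def exgr_mask(bgr):
    """Excess green minus excess red (ExGR) index on chromaticity-normalised channels.

    ExG = 2g - r - b, ExR = 1.4r - g; pixels with ExGR > 0 are treated as plant material
    (assumed decision rule, not taken from the paper).
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    total = b + g + r + 1e-6                      # avoid division by zero
    b, g, r = b / total, g / total, r / total
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0.0

def preprocess(bgr):
    """Return Canny edge map, ExGR mask, and Otsu-binarised image for one frame."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # placeholder thresholds
    mask = exgr_mask(bgr)
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges, mask, otsu

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")               # hypothetical camera frame
    edges, mask, otsu = preprocess(frame)
    print(edges.shape, float(mask.mean()), otsu.shape)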
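
For the detection sub-stage, the paper uses an improved YOLOv4-tiny network whose modifications are not specified in the abstract. As a rough illustration of running a stock YOLOv4-tiny at the stated 416 × 416 input size, a sketch with OpenCV's DNN module is given below; the configuration and weights file names are assumptions, and this is not the authors' modified network.

import cv2

# Hypothetical file names for a stock YOLOv4-tiny model in Darknet format.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frame.jpg")                   # hypothetical input frame
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
for cid, conf, (x, y, w, h) in zip(class_ids, confidences, boxes):
    # Draw each detected root crop; thresholds above are placeholders.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)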
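
The first classification candidate combines a bag of visual words with an SVM over HOG and SIFT descriptors. A hedged sketch of the SIFT variant using OpenCV and scikit-learn follows; the vocabulary size, SVM kernel, class labels (intact vs. damaged), and helper names are assumptions for illustration, not the authors' configuration, and the HOG channel is omitted.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 200                                  # assumed visual-vocabulary size
sift = cv2.SIFT_create()

def sift_descriptors(gray):
    """Return SIFT descriptors of a grayscale crop (None if no keypoints are found)."""
    _, desc = sift.detectAndCompute(gray, None)
    return desc

def bovw_histogram(desc, kmeans):
    """Quantise descriptors against the vocabulary and return a normalised word histogram."""
    if desc is None:                              # featureless crop: empty histogram
        return np.zeros(VOCAB_SIZE, dtype=np.float32)
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(np.float32)
    return hist / (hist.sum() + 1e-6)

def train(images, labels):
    """images: grayscale root-crop crops; labels: 0 = intact, 1 = damaged (assumed classes)."""
    all_desc = [d for img in images if (d := sift_descriptors(img)) is not None]
    kmeans = KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(np.vstack(all_desc))
    X = np.array([bovw_histogram(sift_descriptors(img), kmeans) for img in images])
    svm = SVC(kernel="rbf").fit(X, labels)
    return kmeans, svm

def classify(image_gray, kmeans, svm):
    """Predict the damage class of a single crop."""
    return svm.predict([bovw_histogram(sift_descriptors(image_gray), kmeans)])[0]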
