Article

Beet seedling and weed recognition based on convolutional neural network and multi-modality images

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 81, Issue 4, Pages 5239-5258

Publisher

SPRINGER
DOI: 10.1007/s11042-021-11764-5

Keywords

Object detection; Beets and weeds; Multi-modality images; Deformable convolution; Deep learning

Funding

  1. Priority Academic Program Development of Jiangsu Higher Education Institutions [PAPD-2018-87]
  2. Synergistic Innovation Center of Jiangsu Modern Agricultural Equipment and Technology [4091600002]
  3. Project of Faculty of Agricultural Equipment of Jiangsu University [4121680001]


This study proposed a novel depth fusion algorithm based on visible and near-infrared imagery to improve the recognition of beet seedlings and weeds. An improved region-based fully convolutional network (R-FCN) model incorporating deformable convolution and online hard example mining significantly enhanced the average precision of the optimal model. The study can serve as a theoretical basis for the development of intelligent weed control robots under weak light conditions.
Difficulties in the recognition of beet seedlings and weeds can arise from a complex background in the natural environment and a lack of light at night. In the current study, a novel depth fusion algorithm was proposed based on visible and near-infrared imagery. In particular, visible (RGB) and near-infrared images were superimposed at the pixel level via a depth fusion algorithm and subsequently fused into three-channel multi-modality images in order to characterize the edge details of beets and weeds. Moreover, an improved region-based fully convolutional network (R-FCN) model was applied in order to overcome the geometric modeling restriction of traditional convolutional kernels. More specifically, in the convolutional feature extraction layers, deformable convolution was adopted to replace the traditional convolutional kernel, allowing the entire network to extract more precise features. In addition, online hard example mining was introduced to excavate hard negative samples during detection so that misidentified samples could be retrained. A total of four models were established via the aforementioned improved methods. Results demonstrate that the average precisions of the improved optimal model for beets and weeds were 84.8% and 93.2%, respectively, while the mean average precision was improved to 89.0%. Compared with the classical R-FCN model, the optimal model not only achieved greatly improved performance but also did not significantly increase the number of parameters. Our study can provide a theoretical basis for the subsequent development of intelligent weed control robots under weak light conditions.
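
The pixel-level superposition of RGB and near-infrared (NIR) imagery described above can be illustrated with a minimal sketch. This is not the paper's depth fusion algorithm: the weighted-sum scheme, the alpha value, and the file names are assumptions used purely for demonstration.

    # Minimal sketch (assumption): weighted pixel-level fusion of an RGB image
    # and a single-channel NIR image into one three-channel multi-modality image.
    import cv2
    import numpy as np

    def fuse_rgb_nir(rgb_path: str, nir_path: str, alpha: float = 0.6) -> np.ndarray:
        rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)         # H x W x 3, uint8
        nir = cv2.imread(nir_path, cv2.IMREAD_GRAYSCALE)     # H x W, uint8
        nir = cv2.resize(nir, (rgb.shape[1], rgb.shape[0]))  # align resolutions
        nir3 = cv2.merge([nir, nir, nir])                    # replicate NIR to 3 channels
        # Weighted superposition; the result remains a 3-channel image
        return cv2.addWeighted(rgb, alpha, nir3, 1.0 - alpha, 0)

    if __name__ == "__main__":
        fused = fuse_rgb_nir("beet_rgb.png", "beet_nir.png")  # hypothetical file names
        cv2.imwrite("beet_fused.png", fused)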
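
The replacement of standard convolutional kernels with deformable convolution in the feature extraction layers can likewise be sketched. The block below uses torchvision's DeformConv2d; the channel counts, feature-map size, and placement of the offset-predicting convolution are assumptions and do not reflect the authors' exact network.

    # Minimal sketch (assumption): a 3x3 deformable convolution block in which a
    # regular convolution predicts 2 offsets (dx, dy) per kernel sampling point.
    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class DeformableBlock(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, k: int = 3):
            super().__init__()
            self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
            self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            offsets = self.offset_conv(x)        # sampling offsets per location
            return self.deform_conv(x, offsets)  # convolve at the shifted positions

    # Example: a hypothetical 512-channel backbone feature map
    feat = torch.randn(1, 512, 38, 38)
    out = DeformableBlock(512, 512)(feat)        # shape: (1, 512, 38, 38)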
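
Online hard example mining, which feeds the misidentified (highest-loss) proposals back into training, can be sketched as a classification loss that keeps only the hardest region proposals. The keep count and the class layout (background, beet, weed) are illustrative assumptions.

    # Minimal sketch (assumption): per-ROI cross-entropy with no reduction, keeping
    # only the top-k hardest proposals so that only they contribute to the gradient.
    import torch
    import torch.nn.functional as F

    def ohem_loss(cls_scores: torch.Tensor, labels: torch.Tensor, keep_num: int = 128) -> torch.Tensor:
        per_roi_loss = F.cross_entropy(cls_scores, labels, reduction="none")
        keep_num = min(keep_num, per_roi_loss.numel())
        hard_loss, _ = torch.topk(per_roi_loss, keep_num)   # hardest examples
        return hard_loss.mean()

    # Example with 256 proposals and 3 classes (background, beet, weed)
    scores = torch.randn(256, 3, requires_grad=True)
    labels = torch.randint(0, 3, (256,))
    ohem_loss(scores, labels).backward()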
