Article

Performance of deep learning models for classifying and detecting common weeds in corn and soybean production systems

Journal

Computers and Electronics in Agriculture

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.compag.2021.106081

Keywords

Site-Specific Weed Management; Weed identification; Image classification; Object detection

Funding

  1. Purdue University's SURF program
  2. Foundation of Food and Agricultural Research [534662]
  3. USDA National Institute of Food and Agriculture Hatch project [1012501]
  4. Department of Agricultural and Biological Engineering at Purdue University
  5. Wabash Heartland Innovation Network grant [18024589]

Abstract

This study evaluates the performance of three pre-trained image classification models for classifying early season weeds and assesses an object detection model for locating and identifying weed species. The results show VGG16 to be the best-performing image classification model, with PyTorch yielding faster training times and higher accuracies than Keras. The object detection model can locate and identify multiple weeds within a single image.
Knowing the precise location of weeds and having accurate information about weed species are prerequisites for developing an effective site-specific weed management (SSWM) system. Due to the effectiveness of deep learning techniques for vision-based tasks such as image classification and object detection, their use for discriminating between weeds and crops is gaining acceptance in the agricultural research community. However, few studies have used deep learning to identify multiple weeds in a single image, and most have not compared the effectiveness of deep learning based image classification and object detection using a common, annotated imagery dataset of early season weeds under field conditions. This study addresses that gap by evaluating the comparative performance of three pre-trained image classification models for classifying weed species and by assessing the performance of an object detection model for locating and identifying weed species. The image classification models were trained on two commonly used deep learning frameworks, Keras and PyTorch, to assess any performance differential due to the choice of framework. An annotated dataset comprising RGB images of four early season weeds found in corn and soybean production systems in the Midwest US, namely cocklebur (Xanthium strumarium), foxtail (Setaria viridis), redroot pigweed (Amaranthus retroflexus), and giant ragweed (Ambrosia trifida), was used in this study. VGG16, ResNet50, and InceptionV3 pre-trained models were used for image classification. The object detection model, based on the You Only Look Once (YOLOv3) library, was trained to locate and identify different weed species within an image. The performance of the image classification models was assessed using testing accuracy and F1-score metrics. Average precision (AP) and mean average precision (mAP) were used to assess the performance of the object detection model. The best-performing image classification model was VGG16, with an accuracy of 98.90% and an F1-score of 99%. Faster training times and higher accuracies were observed with PyTorch. The detection model located and identified multiple weeds within an image, with AP scores of 43.28%, 26.30%, 89.89%, and 57.80% for cocklebur, foxtail, redroot pigweed, and giant ragweed, respectively, and an overall mAP score of 54.3%. The results suggest that, under field conditions, the use of pre-trained models for image classification and YOLOv3 for object detection is promising for identifying single and multiple weeds, respectively, given that sufficient data are available. Additionally, unlike image classification, the localization capability of object detection is desirable for developing a system for SSWM.
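As a concrete illustration of the classification workflow described above, the sketch below fine-tunes an ImageNet-pretrained VGG16 in PyTorch for the four weed classes. It is a minimal example under assumed settings (a hypothetical dataset path data/weeds/train, 224 x 224 inputs, SGD hyperparameters, 10 epochs), not the authors' implementation.

# A minimal sketch (not the authors' code) of the transfer-learning setup
# described in the abstract: fine-tuning an ImageNet-pretrained VGG16 in
# PyTorch for the four weed classes. The dataset path, input size, and
# hyperparameters below are illustrative assumptions.
# Requires torchvision >= 0.13 for the `weights=` argument.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

WEED_CLASSES = ["cocklebur", "foxtail", "redroot_pigweed", "giant_ragweed"]

# Standard ImageNet preprocessing; the paper's exact augmentation pipeline
# is not reproduced here.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/weeds/train/<species>/*.jpg
train_ds = datasets.ImageFolder("data/weeds/train", transform=train_tf)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Load ImageNet weights and replace the final classifier layer with a
# four-way output for the weed species.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features,
                                len(WEED_CLASSES))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Short fine-tuning loop; the epoch count is illustrative only.
for epoch in range(10):
    model.train()
    running_loss, correct, total = 0.0, 0, 0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch + 1}: loss={running_loss / total:.4f} "
          f"acc={correct / total:.4f}")

ResNet50 and InceptionV3 can be swapped in the same way by replacing their final fully connected layer (model.fc in torchvision); whether earlier layers are frozen or fine-tuned end to end is a design choice the abstract does not specify.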
