Article

Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 7, Issue 4, Pages 10017-10024

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3188105

Keywords

Agricultural robots; computer vision; deep learning; smart agriculture

Funding

  1. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy [EXC 2070 -390732324]

Abstract

In agriculture, the majority of vision systems perform still image classification. Yet, recent work has highlighted the potential of spatial and temporal cues as a rich source of information to improve the classification performance. In this letter, we propose novel approaches to explicitly capture both spatial and temporal information to improve the classification of deep convolutional neural networks. We leverage available RGB-D images and robot odometry to perform inter-frame feature map spatial registration. This information is then fused within recurrent deep learnt models, to improve their accuracy and robustness. We demonstrate that this can considerably improve the classification performance with our best performing spatial-temporal model (ST-Atte) achieving absolute performance improvements for intersection-over-union (IoU[%]) of 4.7 for crop-weed segmentation and 2.6 for fruit (sweet pepper) segmentation. Furthermore, we show that these approaches are robust to variable framerates and odometry errors, which are frequently observed in real-world applications.
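The inter-frame feature map spatial registration described in the abstract can be pictured as a depth- and odometry-driven warp: each pixel of the previous frame is back-projected to 3D using its depth, moved by the relative camera pose from odometry, and re-projected into the current frame. The sketch below is not the authors' implementation; the function name, the NumPy nearest-neighbour warp, and the pinhole intrinsics `K` are illustrative assumptions.

```python
import numpy as np

def register_feature_map(feat_prev, depth_prev, K, T_rel):
    """Warp a previous-frame feature map into the current frame.

    feat_prev:  (H, W, C) feature map from frame t-1
    depth_prev: (H, W) depth image aligned with feat_prev
    K:          3x3 pinhole camera intrinsics
    T_rel:      4x4 pose of frame t-1 expressed in frame t (from odometry)
    """
    H, W, C = feat_prev.shape
    # Build the homogeneous pixel grid of the previous frame
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, HW)
    # Back-project to 3D camera coordinates using the depth image
    pts = np.linalg.inv(K) @ pix * depth_prev.reshape(1, -1)
    # Apply the relative pose from odometry
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_cur = (T_rel @ pts_h)[:3]
    # Re-project into the current frame (nearest-neighbour rounding)
    proj = K @ pts_cur
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    # Keep only points that land inside the image and in front of the camera
    valid = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H) & (pts_cur[2] > 0)
    warped = np.zeros_like(feat_prev)
    warped[vv[valid], uu[valid]] = feat_prev.reshape(-1, C)[valid]
    return warped
```

Once registered this way, the warped map and the current-frame features are spatially aligned, so a recurrent model (as in the paper's ST-Atte variant) can fuse them pixel-wise; with an identity pose and uniform depth the warp reduces to a pass-through.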

