Article

Insect recognition based on complementary features from multiple views

Journal

SCIENTIFIC REPORTS
Volume 13, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41598-023-29600-1

Keywords

-


Insect pest recognition is an important task in agriculture and ecology, but the slight variations in appearance among different insect species make them difficult for human experts to recognize. The use of machine learning methods for insect recognition is therefore becoming increasingly important. In this study, we proposed a feature fusion network that combines feature representations from different backbone models. We used a CNN-based ResNet backbone and two attention-based backbones, Vision Transformer and Swin Transformer, to localize important regions of insect images. We also developed an attention-selection mechanism that integrates these important regions to reconstruct the attention area and improve insect recognition.
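As a rough illustration of the multi-backbone fusion described above, the sketch below concatenates pooled features from ResNet, Vision Transformer, and Swin Transformer backbones and classifies the result. It is not the authors' code: the timm model names, the pooled-feature extraction (num_classes=0), and the simple concatenate-then-classify head are all illustrative assumptions.

```python
# Minimal sketch of fusing features from a CNN backbone and two
# attention-based backbones (assumed setup, not the published model).
import torch
import torch.nn as nn
import timm


class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 102):
        super().__init__()
        # num_classes=0 makes timm models return pooled features, not logits.
        self.resnet = timm.create_model("resnet50", pretrained=True, num_classes=0)
        self.vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
        self.swin = timm.create_model("swin_base_patch4_window7_224", pretrained=True, num_classes=0)
        fused_dim = (self.resnet.num_features
                     + self.vit.num_features
                     + self.swin.num_features)
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the three backbone embeddings and classify.
        feats = torch.cat([self.resnet(x), self.vit(x), self.swin(x)], dim=1)
        return self.head(feats)


# Example: logits for one 224x224 RGB image over the 102 IP102 classes.
# model = FusionClassifier(num_classes=102)
# logits = model(torch.randn(1, 3, 224, 224))
```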
Insect pest recognition has long been an important branch of agriculture and ecology. The slight variation in appearance among different insect species makes them hard for human experts to recognize, so it is increasingly important to recognize specific insects with machine learning methods. In this study, we proposed a feature fusion network that synthesizes the feature representations of different backbone models. First, we employed one CNN-based backbone, ResNet, and two attention-based backbones, Vision Transformer and Swin Transformer, to localize the important regions of insect images with Grad-CAM. In this process, we designed new architectures for the two Transformers so that Grad-CAM can be applied to such attention-based models. We then proposed an attention-selection mechanism that reconstructs the attention area by carefully integrating the important regions, enabling these partial but key representations to complement one another. Only the part of the image that carries the most crucial decision-making information is needed for recognition. We randomly selected 20 insect species from the IP102 dataset and then used all 102 classes to test classification performance. Experimental results show that the proposed approach outperforms other advanced CNN-based models. More importantly, the attention-selection mechanism demonstrates good robustness to augmented images.
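The Grad-CAM localization step for the CNN branch can be sketched as below. This is a generic Grad-CAM implementation on torchvision's ResNet-50 using forward/backward hooks, not the paper's code; the final thresholding comment is only an assumed stand-in for the attention-selection mechanism described in the abstract.

```python
# Minimal Grad-CAM sketch on torchvision's ResNet-50 (assumed setup).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
activations, gradients = {}, {}

# Hook the last convolutional stage (layer4) to capture features and gradients.
model.layer4.register_forward_hook(lambda m, i, o: activations.update(feat=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(feat=go[0].detach()))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def grad_cam_mask(image_path: str) -> torch.Tensor:
    """Return a 224x224 Grad-CAM heat map in [0, 1] for the top-scoring class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    logits = model(x)
    model.zero_grad()
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()

    # Channel weights = gradients global-average-pooled over space.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)               # [1, C, 1, 1]
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))   # [1, 1, h, w]
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


# Keeping only the most salient pixels (e.g. heat map > 0.8) is a crude
# stand-in for selecting the key region, not the paper's mechanism:
# region_mask = grad_cam_mask("insect.jpg") > 0.8
```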

