Journal
COMPUTERS AND ELECTRONICS IN AGRICULTURE
Volume 194, Issue -, Pages -
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.compag.2021.106679
Keywords
Chrysanthemum phenotype; Transfer learning; Fine-grained classification; Bilinear; Convolutional neural network
Funding
- Jiangsu Agriculture Science and Technology Innovation Fund (JASTIF) [SCX (21) 3059]
- Shanghai Big Data Management System Engineering Research Centre Open Fund [HYSY21022]
- College Students Entrepreneurship Training Program [S20190025]
Abstract
Chrysanthemum, part of one of the largest families of flowering plants, exhibits wide variety and variation. Phenotypic differences between classes are small, so classifying chrysanthemum cultivars is a challenging fine-grained image classification problem. Fine-grained image classification based on deep learning typically involves a large number of parameters and requires a long training time. To tackle these issues, this paper proposes a chrysanthemum image phenotype classification framework based on transfer learning and a bilinear Convolutional Neural Network (CNN). After the images are pre-processed, a symmetric VGG16 network is adopted as the feature extractor. The pre-trained parameters are transferred to the proposed framework, which is trained in two stages: first the fully connected layer is trained, then all layers are fine-tuned. The phenotypic features output by the two networks are transposed and multiplied, and the resulting global features are fed into the classification layer. A total of eight methods, including other bilinear network models as well as non-transfer-learning and non-bilinear models, are compared with our proposal. The experimental results show that our proposal achieves better performance and lower loss than the other network models, reaching an accuracy of 0.9815 and a recall of 0.9800.
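The transpose-and-multiply step described above is the standard bilinear (outer-product) pooling used in bilinear CNNs: the two extractors' feature maps are combined into second-order statistics, then normalized before classification. A minimal NumPy sketch of that pooling step is given below; the feature-map shapes, the signed square root, and the L2 normalization are common B-CNN conventions assumed here for illustration, not details taken from the paper.

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two (c, h, w) feature maps into a (c*c,) vector.

    fa, fb: feature maps from the two (here symmetric) extractors.
    """
    c, h, w = fa.shape
    a = fa.reshape(c, h * w)            # flatten spatial dims
    b = fb.reshape(c, h * w)
    phi = (a @ b.T) / (h * w)           # transpose-multiply: (c, c) statistics
    phi = phi.flatten()
    phi = np.sign(phi) * np.sqrt(np.abs(phi))  # signed square root
    norm = np.linalg.norm(phi)
    return phi / norm if norm > 0 else phi     # L2 normalization

# Toy example: two 8-channel 4x4 feature maps -> 64-dim global feature.
fa = np.random.rand(8, 4, 4)
fb = np.random.rand(8, 4, 4)
v = bilinear_pool(fa, fb)
print(v.shape)  # (64,)
```

In the symmetric setting the paper describes, both feature maps come from the same VGG16 backbone; the resulting normalized vector is what would be passed to the classification layer.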